Explicit summation of the constituent WKB series and new approximate wave functions
The independent solutions of the one-dimensional Schrödinger equation are approximated by means of the explicit summation of the leading constituent WKB series. The continuous matching of the particular solutions gives a uniformly valid analytical approximation to the wave functions. A detailed numerical verification of the proposed approximation is performed for some exactly solvable problems arising from different kinds of potentials.
Introduction
Perturbation theory, the variational method and the WKB approximation are very extensively used in quantum mechanics. If we deal with perturbation theory or with the variational method, then similar questions arise: how to find the unperturbed Hamiltonian, or how to find the trial function, for an arbitrarily given potential? Universal answers are absent. In this sense both mentioned methods are incomplete. In contrast, the WKB approximation is directly determined by a given potential. However, the conventional WKB approximation has unphysical singularities. An old problem in semiclassical analysis is the development of global uniform approximations to the wave functions. In previous works [1,2], an essential improvement of the WKB approach was introduced for the logarithmic derivatives of the wave functions. In the present paper, we construct the second-order continuous approximation to the wave functions. The quality of the approximate wave functions is verified by means of a comparison with the exact solutions for different kinds of potentials.
We consider the linear one-dimensional Schrödinger equation

ℏ² Ψ″(q) = Q(q) Ψ(q), (1)

where Q(q) = 2m(V(q) − E) for an arbitrary potential V(q). The logarithmic derivative Y(q,ℏ) = Ψ′(q,ℏ)/Ψ(q,ℏ) of a wave function Ψ(q,ℏ) satisfies the nonlinear Riccati equation

Y′(q,ℏ) + Y²(q,ℏ) = ℏ⁻² Q(q). (2)

The WKB approach deals just with the functions Y(q,ℏ). In this approach, two independent solutions Y±(q,ℏ) of the Riccati equation are represented by their asymptotic expansions in powers of Planck's constant ℏ. The usual WKB approximation retains a finite number of leading terms Y±_n(q) from the complete expansions Y±_as(q,ℏ). This approximation is not valid at the turning points, where Q(q) = 0.
As is well known, the WKB series is divergent. Numerous references regarding asymptotic expansions may be found in [3]. The direct summation of a divergent series does not exist. By summing one means finding a function to which this series is the asymptotic expansion [4]. In recent years many studies have been devoted to extracting some useful information about the exact eigenfunctions from the divergent WKB series (see, e.g., [5] and references therein). There are several investigations of the properties of the WKB terms [6,7]. Unlike entirely exact but very complicated methods for some classes of potentials (see, e.g., [8]), our new way of using the WKB series gives an approximate but very simple and universal method of solving the Schrödinger equation.
Explicit summation of the constituent WKB series
Since [2] is likely to be inaccessible to the large majority of readers, we reproduce the previous results. First of all, the analysis of the well-known recursion relations [4,6] shows that the WKB terms are of the form (5), where Q′(q) = dQ(q)/dq, Q″(q) = d²Q(q)/dq² and Q^(j)(q) = d^j Q(q)/dq^j. Second, the substitution of (5) into (3) allows us to reconstruct the asymptotic WKB series as an infinite sum of new constituent (partial) asymptotic series in powers of the ratio Q/ℏ^(2/3). With the help of the recursion relations (4) we derive simple expressions for the two leading sequences of coefficients A±_{n,j}. The numbers B±_{n,1} are determined by recursion relations, and B±_{n,2} is expressed through B±_{n,1}. The complete series Y±_as(q,ℏ) are approximated by a finite number of leading constituent series Z±_{as,j}(q,ℏ), in contrast to the use of a finite number of leading terms Y±_n(q) in the conventional WKB approach. If we can find functions Z±_j(q,ℏ) which are represented by the asymptotic expansions Z±_{as,j}(q,ℏ), then we obtain new approximations to the solutions of the Riccati equation. The number of constituent series used corresponds to the order of the proposed approximation. For instance, the expressions ±ℏ⁻¹Q^(1/2) + Z±_1(q,ℏ) are interpreted as first-order approximations. In this paper we consider only the second-order approximations, for which we are able to rewrite the leading constituent expansions in a form where the asymptotic series in the variable a are separated. The leading terms of these series may be deduced by using equations (6) and (7). Our aim is to sum the constituent series (10)-(11). In other words, we must find functions y±_j(a) which are represented by these expansions. In order to perform the identification, we substitute the approximate function into the Riccati equation (2). As a result we get equations (15) and (16) for the functions y±_j(a). Direct verification shows that the asymptotic expansions (10) and (11) satisfy these equations.
Equation (15) is the Riccati equation for the logarithmic derivatives of linear combinations of the well-studied Airy functions Ai(a) and Bi(a) [9]. We select particular solutions by means of the known asymptotics (12). In the classically allowed region, where Q(q) < 0 (a < 0), we derive the explicit expressions (17)-(18), and in the classically forbidden region, where Q(q) > 0 (a > 0), we get the other solutions (19)-(20). Finally, we can obtain the solutions of the linear equation (16) with the asymptotics (13) in closed form. Although the functions (17)-(20) have the asymptotic expansions (10)-(11) if |a| is large, it should be stressed that the obtained functions possess different expansions if |a| is small. Replacing y±_j by ỹ±_j in expression (14), we get the second pair of approximate solutions. It is not surprising that the asymptotics of our approximation coincide with the WKB asymptotics far away from the turning points. At the same time, our approximation reproduces the known [4] satisfactory approximation near the turning points. Naturally, our approximation gives the exact result for the linear potential V(q) = kq of a uniform field. Note that this potential represents an example of the explicit summation of the WKB series for the logarithmic derivative of a wave function.
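The matching between the Airy-function solutions and the WKB asymptotics can be checked numerically. The following minimal sketch (ours, not part of the paper; it assumes numpy and scipy are available) verifies that the logarithmic derivative Ai′(a)/Ai(a) approaches its leading large-a asymptotics −√a − 1/(4a), which is the WKB-like behaviour to which the particular solutions are matched.

```python
# A minimal numerical check (not the paper's code): the particular solutions
# of the Riccati equation (15) are logarithmic derivatives of Airy functions.
# For large positive a, y(a) = Ai'(a)/Ai(a) must approach -sqrt(a) - 1/(4a).
import numpy as np
from scipy.special import airy

a = np.linspace(2.0, 12.0, 6)
Ai, Aip, Bi, Bip = airy(a)                 # Ai, Ai', Bi, Bi' evaluated at a
y_exact = Aip / Ai                         # logarithmic derivative of Ai
y_asym = -np.sqrt(a) - 1.0 / (4.0 * a)     # two leading asymptotic terms

for ai, ye, ya in zip(a, y_exact, y_asym):
    print(f"a = {ai:5.2f}   Ai'/Ai = {ye:+.6f}   asymptotic = {ya:+.6f}")
```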
Approximate wave functions for the two-turning-point problem
With the aid of the uniformly valid approximation to the solutions of the Riccati equation derived in the preceding section, we can now construct approximate wave functions. We consider the problem with two real turning points q− and q+ (q+ > q−). The potential has its minimum at a point q_m. The first and second derivatives of the smooth potential are continuous at the point q_m. Two pairs of independent solutions of the Schrödinger equation are approximated by the functions constructed above. In accordance with the requirements of quantum mechanics, we must retain only the decreasing solutions Ψ̃−_ap(q) in the classically forbidden regions (q < q− and q > q+). In the classically allowed region (q− < q < q+) we retain a linear combination of the two oscillatory solutions Ψ+_ap(q) and Ψ−_ap(q). By matching the particular solutions at the turning points q− and q+, we obtain the continuous approximate wave function, represented by separate formulas for q < q−, for q− < q < q+, and for q > q+. Here we have the new quantization condition, which determines the spectral value E_sp(n) of the energy implicitly. We denote the wave functions with E = E_sp(n) as Ψ_ap(q, n). Then we may choose the value of an arbitrary constant C in order to ensure the usual normalization ⟨Ψ_ap(n)|Ψ_ap(n)⟩ = 1, where |Ψ_ap(n)⟩ is the vector in Hilbert space which corresponds to the function Ψ_ap(q, n). The proposed approximation is an alternative to the well-known [4,10] Langer approximation [11], which employs an ℏ-expansion different from the WKB series. Thus the approximate eigenfunctions are determined completely. However, a question arises regarding the optimal approximate eigenvalues, because the value E_sp(n) is not the unique choice.
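The spectral values are found as roots of the quantization condition. The paper's own condition is not reproduced in this extract, so the sketch below (ours, assuming scipy) uses the textbook WKB condition as a stand-in to illustrate the root-finding procedure; for the harmonic oscillator this stand-in happens to reproduce the exact levels E = (n + 1/2)ℏω.

```python
# Root-finding sketch (ours) for extracting spectral energies from a
# quantization condition. The paper's own condition is not reproduced here;
# as a stand-in we use the textbook WKB condition
#     integral_{q-}^{q+} sqrt(2 m (E - V(q))) dq = (n + 1/2) pi hbar,
# which happens to be exact for the harmonic oscillator V(q) = m w^2 q^2 / 2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m = hbar = w = 1.0
V = lambda q: 0.5 * m * w**2 * q**2

def action(E):
    qt = np.sqrt(2.0 * E / (m * w**2))            # turning points at +/- qt
    integrand = lambda q: np.sqrt(max(2.0 * m * (E - V(q)), 0.0))
    return quad(integrand, -qt, qt)[0]

for n in range(4):
    E_n = brentq(lambda E: action(E) - (n + 0.5) * np.pi * hbar, 1e-6, 20.0)
    print(n, round(E_n, 6))                        # 0.5, 1.5, 2.5, 3.5
```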
Since explicit expressions for the wave functions have already been obtained, we are able to calculate the expectation values Ē(n) = ⟨Ψ_ap(n)|Ĥ|Ψ_ap(n)⟩ and the discrepancy vectors |D(e, n)⟩ = (Ĥ − e)|Ψ_ap(n)⟩, where e is an arbitrary parameter while Ĥ and |Ψ_ap(n)⟩ are given. It is natural to require that the discrepancy vector should not contain a component proportional to the approximate eigenvector. In other words, we consider the orthogonality condition ⟨Ψ_ap(n)|D(e, n)⟩ = Ē(n) − e = 0 as a criterion for the selection of the optimal approximate eigenvalue. As a result we just get Ē(n), while E_sp(n) does not fulfil the above requirement. It should also be noted that the scalar product ⟨D(e, n)|D(e, n)⟩ is minimized at e = Ē(n).
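The final remark follows because the squared norm of the discrepancy vector is a quadratic in e. A small numerical sketch (ours; a random symmetric matrix merely stands in for Ĥ) makes this explicit:

```python
# Numerical sketch (ours): for a normalized state psi, the squared norm of
# the discrepancy vector |D(e)> = (H - e)|psi> is quadratic in e,
#   <D|D> = <psi|H^2|psi> - 2 e Ebar + e^2,
# and is therefore minimized exactly at e = Ebar = <psi|H|psi>. The random
# symmetric matrix H below merely stands in for a real Hamiltonian.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2.0
psi = rng.standard_normal(50)
psi /= np.linalg.norm(psi)                 # normalized trial state

Ebar = psi @ H @ psi                       # expectation value <psi|H|psi>
es = np.linspace(Ebar - 2.0, Ebar + 2.0, 401)
norms = [np.linalg.norm(H @ psi - e * psi) ** 2 for e in es]
print("Ebar           =", Ebar)
print("argmin on grid =", es[int(np.argmin(norms))])   # coincides with Ebar
```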
Verification of the proposed approximation
Now we must verify our approximation numerically for exactly solvable problems. We compare the normalized approximate wave functions Ψ_ap(q, n) with the normalized exact wave functions Ψ_ex(q, n).
"year": 2001,
"sha1": "4379e41686c3f060e77bc50b8811716c8ea88585",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jam/2002/683610.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a36b8825ff7f662b332eb6a9dde1e4ab68be2d28",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
MHD dynamo in swirling turbulence: from deterministic to helical distributed chaos
Using results of laboratory experiments, direct numerical simulations, geomagnetic and solar observations, it is shown that high moments of helicity distribution can dominate power spectra of the magnetic field generated by the magnetohydrodynamic (MHD) dynamo in swirling turbulence even for the cases with zero global helicity. The notion of helical distributed chaos has been used for this purpose.
I. INTRODUCTION
Swirling flows are characterized by strong (local) helicity and differential rotation, which are typical properties of the flows in stellar and planetary interiors. In the case of electrically conducting fluids, these properties can (under certain conditions) strongly intensify the conversion of the kinetic energy of the fluid's motion into magnetic energy and support the magnetohydrodynamic (MHD) dynamo.
It is known that the MHD dynamo is excited by nonlinear instabilities and develops through deterministic chaos states (see, for instance, Ref. [1] and references therein). For bounded and smooth dynamical systems, one of the simplest ways to determine the presence of deterministic chaos is to compute their power spectrum. An exponential frequency spectrum,

E(f) ∝ exp(−f/f_c), (1)

is a good indication in this case [2]–[6].
A generalization of the notion of deterministic chaos to systems with a randomly fluctuating characteristic frequency f_c (the distributed chaos) allows consideration of the turbulent MHD dynamo with stretched exponential spectra

E(f) ∝ exp(−(f/f_β)^β). (2)

A specific form of the stretched exponential spectra, with β = 1/2, observed in direct numerical simulations and in laboratory, geomagnetic and solar observations, has been used in the present paper to confirm that the considered MHD processes are dominated by the high moments of the distribution of the helicity

h = v · ω, (3)

where v and ω = [∇ × v] are the velocity and vorticity fields, even for the cases with zero global helicity.
II. DETERMINISTIC CHAOS IN MHD
In an experiment, described at the site Ref. [7], a solid sphere rotates with a constant angular velocity Ω_0 to produce a toroidal flow, while a hydrofoil propeller rotating with a constant angular velocity Ω_i (located in the center of the sphere) pumps the fluid along the vertical (rotation) axis to approximate a poloidal flow. A weak axial magnetic field B_0 was imposed on the flow of the electrically conducting fluid (liquid sodium) filling the sphere. The magnetic field induced by the fluid motion was measured by a Hall probe mounted near the experiment so as to exclude the imposed magnetic field as much as possible. Figure 1 shows, in linear-log scales, the power spectrum of the signal obtained by the Hall probe at Ω_0/2π = 5 Hz and Ω_i/2π = −13 Hz. The spectral data were taken from the site Ref. [7]. The dashed straight line is drawn in Fig. 1 to indicate the exponential spectrum Eq. (1) typical for chaotic systems. It should be noted that f_c corresponds to the first dominating peak in the spectrum.
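The diagnostic used here and throughout the paper is a straight line of ln E(f) against f. A minimal sketch of the procedure (ours, assuming numpy/scipy; a surrogate signal with a prescribed exponential spectrum stands in for the Hall-probe data):

```python
# Sketch (ours) of the exponential-spectrum diagnostic: synthesize a signal
# whose power spectrum is E(f) ~ exp(-f/fc), estimate the spectrum with
# Welch's method, and recover fc from a straight-line fit of ln E versus f.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs, fc, N = 1000.0, 5.0, 2**18
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
# random phases with amplitude sqrt(E(f)) give the desired power spectrum
spec = np.sqrt(np.exp(-freqs / fc)) * (rng.standard_normal(freqs.size)
                                       + 1j * rng.standard_normal(freqs.size))
x = np.fft.irfft(spec, n=N)

f, E = welch(x, fs=fs, nperseg=4096)
mask = (f > 1.0) & (f < 60.0)
slope, _ = np.polyfit(f[mask], np.log(E[mask]), 1)
print("recovered fc =", -1.0 / slope)   # ~5 Hz: straight line in linear-log
```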
In paper Ref. [1], results of a direct numerical simulation (DNS) of a subcritical transition to an MHD dynamo (without an imposed external magnetic field) at the magnetic Prandtl number Pr_m = 0.5 were reported.
The equations describing the dynamics of an incompressible electrically conducting fluid with an associated magnetic field can be written (in the Alfvénic units) as

∂v/∂t + (v · ∇)v = −∇p + (b · ∇)b + ν∇²v + f, (4)
∂b/∂t + (v · ∇)b = (b · ∇)v + η∇²b, (5)
∇ · v = 0, (6)
∇ · b = 0. (7)

The velocity and the normalized magnetic field, v and b = B/√(µ_0 ρ), have the same dimension in the Alfvénic units; f is the forcing function.
In this numerical simulation, a mechanical propeller was simulated by the Taylor-Green vortex forcing in the box geometry without rotation or thermal convection (the forcing wavenumber k 0 = 2). The boundary conditions for the Eqs. (4-7) were taken periodic in all three dimensions. Figure 2 shows a typical magnetic power spectrum obtained in this simulation and corresponding to a chaotic attractor (the spectral data were taken from the Fig. 9c of the Ref. [1]). As in Fig. 1, the dashed straight line is drawn in Fig. 2 to indicate the exponential spectrum Eq. (1) typical for the chaotic systems, and the f c corresponds to the first dominating peak in the spectrum. The authors of the Ref. [1] have also noted that the dynamo states observed in their simulation are similar to the transitional dynamo states observed in the VKS dynamo experiment [8]. We will return to this experiment with more details below.
The modulation and transport of the galactic cosmic rays within the heliosphere is constantly under a strong influence of the Sun's open magnetic flux, which represents the magnetic solar activity. On the other hand, the 14C production rate on the Earth is related to the cosmic ray flux. In paper Ref. [9], a reconstruction of the magnetic solar activity for the last 11,400 years, based on dendrochronologically dated radiocarbon concentrations, was reported. Figure 3 shows the power spectrum of the magnetic solar activity for this period. The 10-year averaged data for the spectrum computation were taken from the site Ref. [10]. The spectrum was computed using the Maximum Entropy Method, specially developed for relatively short data sets. As in Figs. 1 and 2, the dashed straight line is drawn in Fig. 3 to indicate the exponential spectrum Eq. (1) typical for chaotic systems, and f_c corresponds to the first dominating peak in the spectrum (see also Ref. [11]).
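For readers wishing to reproduce such spectra, the sketch below implements a textbook Burg (maximum entropy) estimator in plain numpy; it is our illustration of the method, not the authors' code, and the AR(2) test signal is arbitrary.

```python
# Minimal sketch (ours) of the Maximum Entropy (Burg) spectral estimate,
# well suited to short records: fit an AR(p) model by Burg's recursion and
# evaluate the implied power spectrum.
import numpy as np

def burg_spectrum(x, order, nfreq=512):
    """AR(order) fit by Burg's method; returns (freqs, power) on [0, 0.5]."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ef = x.copy()                        # forward prediction errors
    eb = x.copy()                        # backward prediction errors
    a = np.array([1.0])                  # AR polynomial, a[0] = 1
    E = x @ x / len(x)                   # prediction error power
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * (efp @ ebp) / (efp @ efp + ebp @ ebp)  # reflection coeff.
        ef, eb = efp + k * ebp, ebp + k * efp
        a = np.append(a, 0.0)
        a = a + k * a[::-1]              # Levinson update of AR coefficients
        E *= 1.0 - k * k
    freqs = np.linspace(0.0, 0.5, nfreq)
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
    return freqs, E / np.abs(z @ a) ** 2

# toy check: an AR(2) resonance is recovered as a sharp spectral peak
rng = np.random.default_rng(2)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 1.6 * x[n - 1] - 0.9 * x[n - 2] + rng.standard_normal()
f, P = burg_spectrum(x, order=8)
print("peak at f =", f[np.argmax(P)])   # near arccos(0.8/sqrt(0.9))/(2*pi) ~ 0.09
```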
III. HIGH MOMENTS OF THE HELICITY DISTRIBUTION
For the case when viscous dissipation can be neglected, the dynamics of the mean helicity can be described by the equation

d⟨h⟩/dt = 2⟨ω · (−[b × (∇ × b)] + f)⟩, (9)

where ⟨...⟩ denotes an average over the spatial volume. It is clear that the mean helicity is not an inviscid invariant in this case. If, however, the large-scale motions provide the main contribution to the correlation ⟨ω · (−[b × (∇ × b)] + f)⟩, the correlation decreases rapidly with decreasing spatial scale in chaotic and turbulent flows. As a consequence, the higher moments of the helicity distribution can be considered as inviscid quasi-invariants in this case [12], [13].
To show this, one can divide the spatial domain into a network of imaginary non-overlapping subdomains V_i moving with the fluid (in the Lagrangian description) [12], [13]. The boundary conditions on the surface of each subdomain are taken in the form ω · n = 0. The moments of order n for the helicity distribution can then be defined as

I_n = lim_{V→∞} (1/V) Σ_j H_j^n, (10)

where the helicity H_j for the subdomain V_j is

H_j = ∫_{V_j} h(r, t) dr. (11)

Due to the rapid decrease of the correlation ⟨ω · (−[b × (∇ × b)] + f)⟩ with spatial scales, the subdomains' helicities H_j can be approximately considered as inviscid invariants for subdomains characterized by small enough spatial scales. These subdomains provide the main contribution to the high moments I_n (n ≫ 1) for turbulent or strongly chaotic flows (cf. [14]). Hence, the high moments I_n can be approximately considered as inviscid invariants even when the global helicity I_1 cannot. As for the viscous case, the high moments I_n can be considered as adiabatic quasi-invariants in the inertial range of scales.
It should be also noted that even in the case when the global helicity is equal to zero (due to a spatial symmetry, for instance) the high moments I n (at least with the even n) are non-zero [13].
The basins of attraction of the chaotic attractors corresponding to the adiabatic invariants I_n are usually different for different values of n. A chaotic attractor with a smaller value of n has a thicker basin of attraction than one with a larger value of n. Therefore, the flow is usually dominated by the I_n with the smallest value of n for which I_n can already be considered as a finite adiabatic invariant.
In the Alfvénic units, b has the same dimension as velocity and, therefore, one can use dimensional considerations to obtain a relationship between the characteristic values b_c and f_c in a fluid motion dominated by the adiabatic invariant I_n:

b_c ∝ |I_n|^(1/(4n−3)) f_c^(α_n), (12)

with

α_n = (2n − 3)/(4n − 3). (13)

Then for n ≫ 1, α_n ≃ 1/2.
IV. HELICAL DISTRIBUTED CHAOS AND MHD DYNAMO
For more intense fluid motions (or/and for other boundary conditions) the parameter f_c can have strong fluctuations. In this case, a more adequate approach should use an ensemble average over the fluctuating parameter to compute the power spectrum:

E(f) ∝ ∫ P(f_c) exp(−f/f_c) df_c. (14)

The probability distribution P(f_c) can be readily calculated from Eq. (12) (at n ≫ 1) if the characteristic magnetic field b_c is normally distributed. Using some simple algebra, one can obtain in this case

P(f_c) ∝ f_c^(−1/2) exp(−f_c/4f_β), (15)

where f_β is a constant. Substituting Eq. (15) into Eq. (14), we obtain

E(f) ∝ exp(−(f/f_β)^(1/2)). (16)

An analogous consideration for the spatially distributed chaos can be found, for instance, in Refs. [15], [16].
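The ensemble average behind Eq. (16) is easy to verify by Monte Carlo: draw b_c from a Gaussian, set f_c ∝ b_c² (the n ≫ 1 limit of Eq. (12)), and average exp(−f/f_c); the resulting ln E(f) is linear in √f. A minimal sketch (ours, assuming numpy):

```python
# Monte Carlo sketch (ours) of the ensemble average behind Eq. (16): a
# Gaussian characteristic field b_c with f_c ~ b_c^2 induces the chi-squared
# type distribution (15) for f_c, and averaging exp(-f/f_c) then yields the
# stretched exponential exp(-(f/f_beta)^{1/2}), i.e. beta = 1/2.
import numpy as np

rng = np.random.default_rng(3)
b_c = rng.standard_normal(300_000)
f_c = b_c**2                       # f_c fluctuates; b_c ~ f_c^{1/2}

f = np.linspace(0.5, 40.0, 40)
E = np.array([np.exp(-fi / f_c).mean() for fi in f])

# a straight line of ln E against sqrt(f) signals beta = 1/2
slope, _ = np.polyfit(np.sqrt(f), np.log(E), 1)
print("slope of ln E vs sqrt(f):", slope)
```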
In paper Ref. [17], a comparison of the results of the famous von Karman sodium (VKS) experiment on the MHD dynamo in swirling turbulence with a relevant direct numerical simulation has been reported. The von Karman swirling turbulence was produced in a cylindrical vessel (with an inner copper shell and an annulus located in the midplane) between two counter-rotating impellers. Figure 4 shows a schematic of the VKS experiment configuration. The magnetic field fluctuations were measured by a Hall probe P located in the bulk of the flow (see Fig. 4). Figure 5 shows, in log-log scales, the power spectrum of the self-sustained (MHD dynamo) axial magnetic field fluctuations measured by the probe P as the solid black curve. The same Fig. 5 also shows (as the solid gray curve) the analogous power spectrum for the corresponding signal obtained in a direct numerical simulation made in a spatial box using Eqs. (4)-(7) with periodic boundary conditions. The mechanical forcing produced in the VKS experiment by the two counter-rotating impellers (see Fig. 4) was simulated in the DNS by two Taylor-Green vortices. The frequency axis in Fig. 5 was normalized by the 'forcing' frequency F_0 = u_rms/L (u_rms is the root mean square of the velocity fluctuations, and 2L is the spatial domain side) for the DNS, and by F_0 = 10 Hz (the rotation rate of the impellers) for the VKS experiment.
It should be noted that for the VKS experiment and for the corresponding DNS the velocity fluctuations are strong and a well-defined mean velocity is absent in the bulk of the swirling turbulent flow. Therefore, Taylor's 'frozen-in' hypothesis cannot be applied to these flows (see, for instance, Ref. [18]). Hence, the spectra in Fig. 5 can be interpreted as true temporal ones. The dashed curve in Fig. 5 indicates the stretched exponential spectrum Eq. (16) corresponding to the helical distributed chaos.

The Coriolis and buoyancy forces can be included in the term f in Eq. (9), and the considerations of Sections III and IV can be readily generalized to rotational and buoyancy-driven fluid motions, i.e., to the realistic geomagnetic and solar dynamos.
The geomagnetic dipole moment µ (normalized by the spatial volume V), Eq. (17), is usually used to describe the global magnetic field. In the Alfvénic units b = B/√(µ_0 ρ), the normalized geomagnetic dipole moment µ has the same dimension as velocity. Therefore, the above dimensional considerations, Eqs. (12)-(13), can be applied to this case, as well as their consequence Eq. (16).
In paper Ref. [19], a power spectrum of the geomagnetic dipole moment for the period 0-1 Myr was computed using data from drift sediments in the Iceland Basin (Ocean Drilling Program - ODP, site 983 [20]). Figure 6 shows this spectrum in log-log scales (the spectral data were taken from Fig. 6 of Ref. [19]). The dashed curve in Fig. 6 indicates the stretched exponential spectrum Eq. (16) corresponding to the helical distributed chaos (the analysis for a much longer period, 0-160 Myr, can be found in Ref. [16]).
The global magnetic solar activity dynamics can be described by the time series of the sunspot number, which is a scalar. To understand the underlying magnetohydrodynamics, one needs a reconstruction of the corresponding multi-dimensional phase space. It was estimated [21] that for this purpose an embedding dimension D = 3 can be sufficient (see also Ref. [22]). The solar magnetic field cycle is about 22 years (11-year magnetic field polarity reversals). This means that the underlying magnetohydrodynamics must have the corresponding symmetry group. Since the sunspot number time series does not possess such symmetry, one should construct a cover system (possessing the symmetry group) which is dynamically (locally) equivalent to the system without the symmetry group [23]. In Ref. [21] such a cover system was constructed for the period 1750-2005. Figure 7 shows the power spectrum for the reconstructed (cover) time series. The reconstructed data (cover time series) were taken from the site [22], and the spectrum was computed using the Maximum Entropy Method. The dashed curve in Fig. 7 indicates the stretched exponential spectrum Eq. (16) corresponding to the helical distributed chaos.
V. ACKNOWLEDGMENT

I thank P. Odier for a consultation related to his paper.
"year": 2021,
"sha1": "eb78af0976faa74262089ce1bed62b69e779ea0c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eb78af0976faa74262089ce1bed62b69e779ea0c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A generalisation of the Gilbert-Varshamov bound and its asymptotic evaluation
The Gilbert-Varshamov (GV) lower bound on the maximum cardinality of a q-ary code of length n with minimum Hamming distance at least d can be obtained by application of Turan's theorem to the graph with vertex set {0,1,..,q-1}^n in which two vertices are joined if and only if their Hamming distance is at least d. We generalize the GV bound by applying Turan's theorem to the graph with vertex set C^n, where C is a q-ary code of length m and two vertices are joined if and only if their Hamming distance is at least d. We asymptotically evaluate the resulting bound for n -> \infty and d ~ \delta mn for fixed \delta > 0, and derive conditions on the distance distribution of C that are necessary and sufficient for the asymptotic generalized bound to beat the asymptotic GV bound. By invoking the Delsarte inequalities, we conclude that no improvement on the asymptotic GV bound is obtained. By using a sharpening of Turan's theorem due to Caro and Wei, we improve on our bound. It is undecided if there exists a code C for which the improved bound can beat the asymptotic GV bound.
Introduction
Let A_q(n, d) denote the maximum cardinality of a code of length n and minimum Hamming distance at least d over an alphabet Q with q letters. Moreover, for 0 ≤ δ ≤ 1, let α_q(δ) denote the limit superior of the maximum rate of q-ary codes of relative distance δ, that is, α_q(δ) = lim sup_{n→∞} (1/n) log_q A_q(n, δn).
According to the asymptotic Plotkin bound [1, Thm. 5.2.5], α_q(δ) = 0 for δ ≥ 1 − 1/q; for 0 < δ < 1 − 1/q, the value of α_q(δ) is unknown. The Gilbert-Varshamov (GV) bound [1, Thm. 5.1.7] states that

A_q(n, d) ≥ q^n / V_q(n, d − 1), where V_q(n, r) = Σ_{j=0}^{r} (n choose j)(q − 1)^j

is the volume of the q-ary Hamming ball of radius r. The asymptotic version of the GV bound [1, Thm. 5.1.9] reads as follows:

α_q(δ) ≥ 1 − h_q(δ), (1)

where h_q is the q-ary entropy function, defined as

h_q(x) = x log_q(q − 1) − x log_q(x) − (1 − x) log_q(1 − x).

To the best of the author's knowledge, no lower bound on α_q(δ) improving on (1) is known for q < 46. For an extensive survey of the literature on the Gilbert-Varshamov bound and improvements on it, we refer to [2]. In [3], it was observed that the GV bound can be obtained by application of Turán's theorem [4, Thm. 3.2.1] to the graph with vertex set Q^n in which two vertices are joined by an edge if and only if their Hamming distance is at least d. By using this graph-theoretical approach and applying a refined version of Turán's theorem for locally sparse graphs, Jiang and Vardy [2] obtained an improvement of the GV bound for binary codes by a multiplicative factor n. This result was generalized to q-ary codes by Vu and Wu, who proved the following [5, Thm. 1.2].

Theorem 1. Let q be a fixed positive integer and let β, β′ be constants satisfying 0 < β′ < β < (q−1)/q. There is a positive constant c depending on q and β such that for any β′n < d < βn,

A_q(n, d) ≥ c n q^n / V_q(n, d).

In this paper, we use the graph-theoretical approach to obtain a generalization of the GV bound. Again, we consider a graph in which two vertices are joined by an edge if and only if their Hamming distance is at least d. The vertex set does not equal Q^n, but instead equals C^n, where C ⊆ Q^m is a fixed q-ary code of length m. We use Turán's theorem to obtain a lower bound on the size of the largest clique in this graph, and employ a bounding technique from [6] to obtain a manageable asymptotic expression. We analyze the generalized asymptotic GV bound, and by employing the Delsarte inequalities [1, Sec. 5.3], we infer that it cannot improve the asymptotic GV bound. We end with an improvement of our bound based on a sharpening of Turán's theorem due to Caro and Wei, and derive a necessary and sufficient condition on C for this improved bound to beat the asymptotic GV bound. We have not been able to decide if there exist codes C satisfying this condition.
Turán's theorem
Let G be a simple graph without loops. A clique in G is a set of vertices of which each pair is joined by an edge. It is intuitively clear that a graph with many edges should contain a large clique. This is quantified by Turán's theorem, of which we use the following version (for a proof, see e.g. [4, Thm. 3.2.1]).
Theorem 2.
A simple graph without loops with v vertices and e edges contains a clique of size at least v²/(v² − 2e).
Distance enumerator of a code
We define the distance distribution B_0, B_1, ..., B_m of C by B_j = (1/|C|) · |{(c, c′) ∈ C × C : d(c, c′) = j}|, and the distance enumerator polynomial B(x) of C as B(x) = Σ_{j=0}^{m} B_j x^j. It is easy to check that, for each n ≥ 1, the code C^n has distance enumerator polynomial (B(x))^n.
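A small sketch (ours; the conventions follow the definitions above, and the example code is arbitrary) that computes the coefficients B_j and checks the multiplicativity property for C²:

```python
# Sketch (ours): compute the distance enumerator coefficients
#   B_j = (1/|C|) * #{(c, c') in C^2 : d(c, c') = j},
# and verify that C^n has enumerator (B(x))^n for n = 2.
import numpy as np

def enumerator_coeffs(C, m):
    B = np.zeros(m + 1)
    for c in C:
        for cp in C:
            B[sum(a != b for a, b in zip(c, cp))] += 1
    return B / len(C)

C = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # even-weight code, m = 3
B = enumerator_coeffs(C, 3)                        # -> [1, 0, 3, 0]
C2 = [c + cp for c in C for cp in C]               # the code C^2, length 6
B2 = enumerator_coeffs(C2, 6)
print(np.allclose(np.polynomial.polynomial.polypow(B, 2), B2))  # True
```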
A bounding lemma
The following lemma is similar to a bounding technique that can be found in [6].

Lemma 3. Let B(x) = Σ_{j=0}^{M} B_j x^j be a polynomial with nonnegative coefficients. For each x ∈ (0, 1] and each integer d ≥ 0,

Σ_{j=0}^{d} B_j ≤ x^{−d} B(x).
Main result and its proof
Theorem 4. Let C ⊆ Q^m. For each δ ∈ (0, 1) and x ∈ (0, 1], we have

α_q(δ) ≥ (1/m) log_q(|C| x^{δm} / B(x)).

Proof. For each integer n, we consider the graph G with vertex set C^n in which two vertices are joined by an edge if and only if they have Hamming distance at least d = ⌈δmn⌉. The number of edges e is thus given by

e = (1/2) |C|^n Σ_{j=d}^{mn} B_j^{(n)},

where B_j^{(n)} is the j-th coefficient of the distance enumerator of C^n. Application of Turán's bound to G yields the existence of a subcode D of C^n with minimum distance at least d and cardinality at least

|C|^{2n} / (|C|^{2n} − 2e) = |C|^n / Σ_{j=0}^{d−1} B_j^{(n)}.

We now invoke Lemma 3 and find that, for each x ∈ (0, 1],

Σ_{j=0}^{d−1} B_j^{(n)} ≤ x^{−d} (B(x))^n.

Combining the above inequalities, we find that the code D of length mn satisfies

(1/(mn)) log_q |D| ≥ (1/m) (log_q(|C|/B(x)) + δ log_q x). ⊓⊔
Theorem 4 contains a parameter x that can be optimized over. By straightforward differentiation, one finds that the optimizing value x satisfies the equation x B′(x) − δm B(x) = 0. For a given code C, it seems in general a hopeless task to obtain a closed expression for the largest bound on α_q(δ) that can be obtained from Theorem 4. We set ourselves instead a different goal, viz. to determine if Theorem 4 can improve on the asymptotic GV bound. To this end, for each δ ∈ (0, 1 − 1/q) and x ∈ (0, 1), we define

F(x, δ) = (1/m) log_q(|C| x^{δm} / B(x)).

For a pair (x, δ) optimizing F(x, δ), we have that x B′(x) = δm B(x).
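The optimization can of course be carried out numerically. The sketch below (ours, assuming scipy; the bound is evaluated in the form stated in Theorem 4 above) maximizes the bound over x for the even-weight code of length 3 and compares it with the asymptotic GV bound; in accordance with Theorem 6 below, it never wins.

```python
# Sketch (ours): maximize the Theorem 4 bound (1/m) log_q(|C| x^{dm}/B(x))
# over x in (0, 1] and compare with the asymptotic GV bound 1 - h_q(delta),
# for q = 2 and the even-weight code C of length m = 3 (B(x) = 1 + 3x^2).
import numpy as np
from scipy.optimize import minimize_scalar

q, m, size_C = 2, 3, 4
B = np.array([1.0, 0.0, 3.0, 0.0])       # ascending coefficients of B(x)

def h_q(d):
    return d * np.log2(q - 1) - d * np.log2(d) - (1 - d) * np.log2(1 - d)

def theorem4_bound(delta):
    neg = lambda x: -(np.log2(size_C) + delta * m * np.log2(x)
                      - np.log2(np.polyval(B[::-1], x))) / m
    return -minimize_scalar(neg, bounds=(1e-9, 1.0), method="bounded").fun

for delta in (0.1, 0.2, 0.3, 0.4):
    print(delta, round(theorem4_bound(delta), 4), round(1 - h_q(delta), 4))
```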
Lemma 5. There exists a δ for which Theorem 4 yields an improvement on the asymptotic GV bound if and only if, for some δ ∈ (0, 1 − 1/q),

(1/|C|) B(x_δ) (q / (1 + (q − 1) x_δ))^m < 1, where x_δ = δ / ((q − 1)(1 − δ)).
Theorem 6. The largest lower bound on α q (δ) that can be obtained from Theorem 4 is the asymptotic GV bound 1 − h q (δ).
Proof. By substituting z = 1 − qδ/(q − 1) in the condition of Lemma 5, we obtain the equivalent condition that, for some z ∈ (0, 1),

(1/|C|) (1 + (q − 1)z)^m B((1 − z)/(1 + (q − 1)z)) < 1. (2)

We write the left-hand side of (2) as the polynomial A(z) = Σ_{i=0}^{m} A_i z^i. By choosing z = 0, we find that A_0 = A(0) = (1/|C|) B(1) = 1. According to the Delsarte inequalities, which form the basis of the linear programming bound [1, Sec. 5.3], A_i ≥ 0 for all i. So for z ≥ 0, we have that A(z) ≥ A_0 = 1. ⊓⊔
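The mechanics of this proof can be checked symbolically for a concrete code. The sketch below (ours, assuming sympy) expands A(z) for the even-weight code of length 3 and confirms A_0 = 1 and A_i ≥ 0:

```python
# Sketch (ours) of the Delsarte check: expand
#   A(z) = (1/|C|) (1 + (q-1)z)^m B((1-z)/(1+(q-1)z))
# for the even-weight code of length 3 (B(x) = 1 + 3x^2). The result is the
# polynomial 1 + z^3, so A_0 = 1 and all A_i are nonnegative.
import sympy as sp

z = sp.symbols('z')
q, m, size_C = 2, 3, 4
B = lambda x: 1 + 3 * x**2

expr = sp.Rational(1, size_C) * (1 + (q - 1) * z)**m * B((1 - z) / (1 + (q - 1) * z))
A = sp.Poly(sp.cancel(expr), z)
print(A.all_coeffs()[::-1])     # ascending coefficients: [1, 0, 0, 1]
```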
Extension of the main result
We extend our result by using the sharpening of Turán's theorem from Theorem 7 below; an elegant proof, attributed to Caro and Wei, can be found in [4, p. 95].
Theorem 7. Let G = (V, E) be a simple graph without loops. For each v ∈ V, let d_v be the number of neighbours of v. Then G contains a clique of size at least

Σ_{v∈V} 1/(|V| − d_v).

By using a convexity argument and the fact that Σ_{v∈V} d_v = 2|E|, it can be shown that Theorem 7 implies Theorem 2, and that they give the same result for regular graphs, i.e., if all vertices have equally many neighbours. If we thus apply our construction with a code C for which the number of codewords at a given distance from a word c ∈ C actually depends on the choice of c, we may improve our main result. For describing this improvement, we introduce the following notion: for a given code C and c ∈ C, the local distance enumerator B_c(x) is defined as

B_c(x) = Σ_{c′∈C} x^{d(c,c′)}.

Theorem 8. Let C ⊆ Q^m. For each δ ∈ (0, 1) and x ∈ (0, 1), we have

α_q(δ) ≥ (1/m) log_q( x^{δm} Σ_{c∈C} (B_c(x))^{−1} ).

Note that Theorem 8 reduces to Theorem 4 if all local distance distributions are equal.
Proof. We apply Theorem 7 to the graph with vertex set C^n, in which two vertices are joined by an edge if and only if they have Hamming distance at least d = ⌈δmn⌉. In this way, we infer the existence of a code of length mn and minimum Hamming distance at least d of size at least

Σ_{c∈C^n} 1/|{y ∈ C^n : d(c, y) ≤ d}|.

For each c ∈ C^n and x ∈ (0, 1], we have that

|{y ∈ C^n : d(c, y) ≤ d}| ≤ x^{−d} B_c^{(n)}(x),

where B_c^{(n)}(x) denotes the local distance enumerator of c in C^n. It is easy to see that for c = (c_1, c_2, ..., c_n) ∈ C^n, we have that

B_c^{(n)}(x) = Π_{i=1}^{n} B_{c_i}(x),

and so there exists a code of length mn and size at least

Σ_{c=(c_1,...,c_n)∈C^n} x^d / Π_{i=1}^{n} B_{c_i}(x) = x^d ( Σ_{c∈C} (B_c(x))^{−1} )^n,

where the final equality can be proved by induction on n.
Lemma 9. Theorem 8 improves on the asymptotic GV bound for some δ ∈ (0, 1 − 1/q) if and only if, for some z ∈ (0, 1),

Σ_{c∈C} ( (1 + (q − 1)z)^m B_c((1 − z)/(1 + (q − 1)z)) )^{−1} > 1.

Proof. Similar to the proof of Lemma 5 and Theorem 6. ⊓⊔

We have not been able to decide if there exist codes for which the inequality in Lemma 9 is met. We note that for certain codes C, e.g. for C = {0, 1}³ \ {(0, 0, 0)}, the left-hand side of the condition in Lemma 9 is monotonically decreasing in z, although not all individual terms have this property.
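The condition of Lemma 9 is easy to evaluate numerically. The sketch below (ours) computes the left-hand side, in the form stated above, for the code C = {0,1}³ \ {(0,0,0)} mentioned in the text; consistent with the remark, it starts at 1 for z → 0 and decreases:

```python
# Sketch (ours): evaluate the left-hand side of the Lemma 9 condition for
# C = {0,1}^3 \ {(0,0,0)}. At z -> 0 the sum equals 1; the text reports that
# for this code it is monotonically decreasing, so the condition fails.
import numpy as np
from itertools import product

q, m = 2, 3
C = [c for c in product((0, 1), repeat=3) if c != (0, 0, 0)]

def B_c(c, x):
    """Local distance enumerator: sum over c' in C of x^d(c, c')."""
    return sum(x ** sum(a != b for a, b in zip(c, cp)) for cp in C)

for z in np.linspace(0.05, 0.95, 10):
    x = (1 - z) / (1 + (q - 1) * z)
    lhs = sum(1.0 / ((1 + (q - 1) * z) ** m * B_c(c, x)) for c in C)
    print(round(z, 2), round(lhs, 4))
```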
"year": 2011,
"sha1": "42d724c1407679434d9b6979fa9391bf3ad5e7cc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "204b6b9125048b570a6c82c1cf6817760fafd935",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
An Algorithm for Calculating the Probability of Classes of Data Patterns on a Genealogy
Felsenstein’s pruning algorithm allows one to calculate the probability of any particular data pattern arising on a phylogeny given a model of character evolution. Here we present a similar dynamic programming algorithm. Our algorithm treats the tree and model as known. The algorithm makes it feasible to calculate the probability that a randomly selected character will be a member of a particular class of character patterns. Specifically, we are interested in binning patterns by the number of parsimony steps and the set of states observed at the tips of the tree. This algorithm was developed to expand the range of data set sizes that can be used with Waddell et al.’s marginal testing approach for assessing the adequacy of a model. The algorithms introduced can also be used in likelihood calculations which correct for ascertainment biases. For example, Lewis introduced an Mkv model which corrects for the lack of constant sites. The probability of a constant pattern arising can be calculated using the algorithm that we present, or by enumerating all possible constant patterns and calculating the probability of each one. Because the number of constant data patterns is small, both methods are efficient. However, elaborations of the Mkv model (such as those in Nylander et al.) require calculating the probability of parsimony-uninformative patterns arising. For large trees and characters with many possible character states, the number of possible parsimony-uninformative patterns is immense. In these cases, the algorithms introduced here will be more efficient. The algorithm has been implemented in open source software written in C++.
Background
Conducting likelihood-based phylogenetic inference requires calculating the probability that a particular set of characters would arise under the assumption that the evolutionary process is described by a combination of tree topology, branch lengths, and numerical parameters for a model of character evolution. In a landmark paper 1, Felsenstein introduced a dynamic programming algorithm, the pruning algorithm, which allows one to perform this set of probability calculations efficiently for a discrete-state character. Felsenstein's algorithm sweeps down the tree once, making its computational complexity linear with respect to N, the number of tips in the tree. At each internal node that is the parent of another internal node, it must consider the transition probabilities between all possible pairs of unseen states. Thus the algorithm scales with the square of the number of character states, K. The number of possible ancestral character state combinations that could result in any pattern is on the order of K^(N−2), but the pruning algorithm enables the probability of the pattern to be
In some contexts, we would like to be able to calculate the probability that any member of a class of patterns would arise on a tree.For example, Waddell et al. 2 introduced a method for assessing the adequacy of a substitution model in phylogenetics.They noted that tests of model adequacy introduced by Reeves 3 and Goldman 45 often lack power, particularly for data sets with a large number of sequences.These tests use a likelihood-ratio test statistic to compare the probability of the data under a phylogenetic model to the probability of the data under an "unconstrained", multinomial model.The multinomial model has a free parameter for every possible data pattern.The likelihood under this unconstrained model is an upper bound on the likelihood for any independent-sites model 4 because the unconstrained model can perfectly match the relative frequency of every observed pattern.In these tests, the inherent lack of power arises from the enormous number of free parameters in the multinomial model.The number of possible patterns grows exponentially with the number of tips in the tree.Because each of the N leaves can assume any of the K states, there are K N possible patterns.The multinomial model makes no constraint on the expected frequencies (other than that they sum to 1), so there are K N -1 free parameters in the model.For the test to detect that the phylogenetic model is inadequate, the likelihood improvement associated with the unconstrained model must be large enough to overcome the substantial penalty for overparameterization that comes with this very large number of free parameters.
Waddell et al. 2 provide a more powerful test using a likelihood ratio, binning the data patterns into groups of similar characters. They suggest grouping the characters into bins based on the observed number of steps (according to the parsimony criterion) and the set of states that were observed. A well-constructed marginal test, such as their test, can detect deficiencies in the model caused by underestimating certain aspects of the process of molecular evolution. For example, if a particular amino acid is required at a site in a protein-coding sequence, then the third base position of the codon may be constrained to be a purine. Over long periods of evolution, such sites will exhibit a large number of substitutions, but only two states (A or G). An iid (independent, identically-distributed) model of nucleotide change will consistently underpredict the prevalence of such patterns. Binning all patterns that display only A and G and that imply 9 steps according to parsimony allows a marginal test to detect this form of model inadequacy.

The algorithm presented below is a dynamic programming approach to calculating the probability of a data pattern belonging to a class of patterns. Specifically, these classes of patterns all share the same set of observed states, the number of steps according to parsimony, and downpass state set according to the Fitch 6 algorithm. The probabilities used in the marginal test of Waddell et al. 2 can be obtained from these probabilities by summing over all possible downpass state sets. When referring to "the Fitch algorithm" below, we refer to the "preliminary phase" (commonly referred to as the "downpass") of identifying possible ancestral states, in the terminology of Fitch 6. This part of the parsimony reconstruction algorithm was originally published in Fitch 6. It allows one to calculate the parsimony score of an unordered character in a single pass down the tree. At each internal node, the algorithm composes a set of states. This state set, referred to as the downpass state set, is not the set of possible states in the most parsimonious reconstruction. It is only the preliminary phase of creating the most parsimonious reconstruction. Nevertheless, it is useful because when we encounter an internal node in Fitch's downpass, the only pieces of necessary information are the downpass state sets of the node's children and the minimal number of parsimony steps accrued in the subtrees rooted at each child.
Specifically, the downpass starts by initializing the leaves of the tree such that a leaf's downpass state set is identical to the set of states observed for that taxon and the parsimony score accrued is 0. Let D_n represent the downpass state set of a node and S_n denote the minimal number of parsimony steps contributed by the subtree rooted at node n. A(n) denotes the first child of node n and B(n) denotes the second child. The algorithms described are restricted to fully resolved trees. Because branch rotation is not significant in phylogenetics, the designation of which child is the "first" and which is the "second" is arbitrary. The downpass algorithm of Fitch is performed as a postorder traversal, and at an internal node n:

if D_A(n) ∩ D_B(n) ≠ ∅, then D_n = D_A(n) ∩ D_B(n) and S_n = S_A(n) + S_B(n);
otherwise, D_n = D_A(n) ∪ D_B(n) and S_n = S_A(n) + S_B(n) + 1.

The dynamic algorithm described below relies on the fact that we can pre-calculate all of the possible downpass state sets, and all of the combinations of child nodes' downpass state sets that could result in these state sets.
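A compact sketch of this downpass (ours, in Python; the paper's implementation is in C++) on nested-tuple trees:

```python
# A minimal sketch of Fitch's downpass on a fully resolved (binary) tree.
# Trees are nested tuples; leaves carry frozensets of observed states.
def fitch_downpass(node):
    """Return (downpass_state_set, parsimony_steps) for the subtree."""
    if isinstance(node, frozenset):            # leaf: observed states, 0 steps
        return node, 0
    (da, sa), (db, sb) = fitch_downpass(node[0]), fitch_downpass(node[1])
    inter = da & db
    if inter:                                  # intersection: no extra step
        return inter, sa + sb
    return da | db, sa + sb + 1                # union: one additional step

# character A, C, A, G on the tree ((A,C),(A,G)):
tree = ((frozenset('A'), frozenset('C')), (frozenset('A'), frozenset('G')))
print(fitch_downpass(tree))                    # (frozenset({'A'}), 2)
```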
Description of the algorithm
The algorithm proceeds by calculating the probability of generating different classes of patterns for the subtree rooted at a node. For the subtree rooted at node n, let Q s,t,d,a (n) denote the probability of generating a specific class of patterns conditional on an ancestral state, where s denotes the number of parsimony steps in the subtree, t denotes the set of states observed at the tips of the subtree, d is the downpass state set of node n, and a denotes the character state at node n. Thus, Q s,t,d,a (n) is the probability of generating any pattern that displays s steps, the states t, and a downpass of d in the subtree, given that state a was the ancestral state at node n.
The algorithm will sweep over the tree in a postorder traversal (leaves to root), and fill in a lookup table at each node to hold these probabilities. Let S denote the set of all of the states; for a DNA sequence matrix, S = {A, C, G, T}. Note that the first subscript of Q is a non-negative integer that cannot exceed the maximum possible parsimony score. The second subscript, t (the set of observed states), indexes each possible set of observed states. This is the power set of S with the empty set excluded. We do not consider missing data, and therefore do not need to consider the possibility that no states will be observed in a subtree. We will denote the power set of S as Y(S) and the power set of S with the empty set excluded as Z(S). The size of Z(S) is 2^|S| − 1. The third subscript, d, indexes the power set of the observed state set. Once again the empty set is excluded from this power set, because the downpass state set in Fitch's algorithm is never empty. Because a state must be observed in a leaf of the subtree for that state to appear in the downpass state set, we only need to consider subsets of the observed state set. The fourth subscript indexes the states; thus it must be of size |S|.
We can initialize a lookup table Q 0,{x},{x},x (n) = 1.0 for each leaf node, n, and each state x ∈ S. All other elements of the Q lookup table are set to 0.0 for the leaf nodes. This initialization reflects the fact that there is no opportunity for evolution within the leaf node (the node represents the current state of the OTU). Thus, for any state x at the leaf node, there is a probability of 1 that the observed state set and the downpass state set will both be {x}, and every other outcome has a probability of 0.
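A sketch of this leaf initialization (ours; the dictionary keying is an implementation choice, not prescribed by the text):

```python
# Sketch (ours) of the leaf initialization of the Q lookup table, keyed by
# (steps, observed_state_set, downpass_state_set, ancestral_state).
from itertools import combinations

S = ('A', 'C', 'G', 'T')

def nonempty_subsets(states):
    for r in range(1, len(states) + 1):
        for sub in combinations(states, r):
            yield frozenset(sub)

def init_leaf_Q(observed_state):
    """Q[0, {x}, {x}, x] = 1 for the observed x; all other entries 0."""
    Q = {}
    for t in nonempty_subsets(S):            # observed state set of subtree
        for d in nonempty_subsets(sorted(t)):  # downpass set: subset of t
            for a in S:                      # ancestral state of the node
                Q[(0, t, d, a)] = 0.0
    x = observed_state
    Q[(0, frozenset([x]), frozenset([x]), x)] = 1.0
    return Q

Q_leaf = init_leaf_Q('A')
print(sum(Q_leaf.values()))                  # 1.0: a single unit entry
```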
For an internal node, we can fill in the Q lookup table by considering the two possible ways in which a downpass state set can be formed: via intersection and via union of the downpass state sets of the children. In particular, Q s,t,d,a (n)
= I s,t,d,a (n) + U s,t,d,a (n), where I and U use the same subscripting as Q. The I term conditions on the fact that d was formed via an intersection, and the U term denotes the probability of the pattern conditional on the fact that d was formed by a union in Fitch's algorithm. Because the intersection in Fitch's algorithm does not increase the number of steps assigned to a subtree, we can calculate the I term from the combinations of Q terms in the children of n that have parsimony scores that sum to s. To express this mathematically we will introduce several variables. s_A represents the number of parsimony steps contributed by the subtree rooted at the first child, A(n). When we are considering the case of an intersection leading to d, we know that the downpass state set of each child must be a superset of d. Because a downpass state set of d requires at least |d| − 1 changes, each child's subtree must contribute at least this number of steps. Thus we have to consider values of s_A that range from |d| − 1 up to s + 1 − |d|. We will use c_A to denote the set of states observed in that subtree but not in the downpass state set of that subtree; note that c_A must be a subset of t − d. Similarly, g_A is the set of states in the downpass state set of A(n) but not in d; note that g_A must be a subset of c_A. We will use a function abbreviated C[...] to refer to the probability of a child subtree displaying a particular class of patterns given that the state of the ancestral node n is a. In particular:

C[s′, t′, d′, a, A(n)] = Σ_{i∈S} P(i | a, e[A(n)]) Q s′,t′,d′,i (A(n)),

where the arguments specify the number of steps in the child's subtree, the observed state set of the child's subtree, the downpass state set of the child, the actual state of the parental node, and A(n) for the child node. This function is similar to a portion of the pruning algorithm of Felsenstein. Here P(i | a, e[A(n)]) denotes the transition probability, which is the probability of a character state a in the ancestor evolving to state i in a child across a branch of length e[A(n)] (the descendant node, A(n), uniquely specifies an edge in the tree). This notation allows us to express the events of interest in the first child's subtree. We will use a second function, W (defined below), to calculate the probability of the necessary events occurring in the second child's subtree.
Taken together, these functions allow us to calculate the I term as a summation over all possible contributions of the first child's subtree. The W function here contributes the probability of the evolutionary events in the second subtree that must occur in order to guarantee s steps, an observed state set of t, and a downpass of d in node n. The general form is similar to the terms seen above, but the ranges of the summations differ from the previous expressions. Once again, c_B is a subset of t − d, but c_B must include all of the states in t that were not in d + c_A. This constraint is necessary because the union of the states observed in the first and second subtrees must be equal to t. So c_B must be chosen such that d ∪ c_A ∪ c_B = t. The range of g_B in the summation in W must be the subsets of c_B, but it must be restricted to states not found in g_A. This restricted range is required because if g_A and g_B had a non-empty intersection, these common states would also be found in the ancestor's downpass set (and thus the downpass would be larger than the downpass d that we aim to calculate).
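To make the shape of this computation concrete, here is a schematic Python sketch of the C helper and the structure of the I summation. This is our own illustration: the paper's displayed equations do not survive in the extracted text, and the names subsets_of, trans_prob, child_Q, and W are illustrative assumptions.

from itertools import combinations

def subsets_of(states):
    """All subsets of `states`, including the empty set, as frozensets."""
    s = tuple(states)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def C(child_Q, states, s_child, t_child, d_child, a, trans_prob):
    """Probability that a child's subtree shows s_child steps, observed state
    set t_child, and downpass set d_child, given parental state a.  As in one
    step of Felsenstein's pruning, we sum over the child's state i the
    transition probability P(i | a, edge) times the child's Q entry."""
    return sum(trans_prob(a, i) * child_Q.get((s_child, t_child, d_child, i), 0.0)
               for i in states)

# The I term is then assembled by looping s_A over range(len(d) - 1, s + 2 - len(d)),
# c_A over subsets_of(t - d), and g_A over subsets_of(c_A), accumulating
#     C(Q_A, S, s_A, d | c_A, d | g_A, a, trans_A) * W(s - s_A, ...),
# where W enforces the constraints d | c_A | c_B == t and g_A & g_B == set()
# described in the text.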
To calculate the U_{s,t,d,a}(n) term mentioned above, we must consider the possible outcomes in each subtree. In this case, we rely on the fact that the union of the downpass state sets of the two child nodes must be equal to d, and that neither downpass can be the empty set.
Validation
As described in the caption of Table 1, we validated our analytical approach by re-analyzing the data set examined by Waddell et al. [2]. The counts that Waddell et al. found did not deviate significantly from the expected counts based on our algorithm. We converted the expected number of sites per dataset (shown in the table) to the counts observed by Waddell et al. by multiplying the expected number of sites by the size of their simulation (100,000 sites). We compared these observed counts to the expectations from our results using a χ² goodness-of-fit test (χ² test statistic = 20.9, df = 24) to obtain a P-value of 0.64 for the null hypothesis that our algorithm produces the same probabilities that Waddell et al. were approximating.
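As a quick arithmetic check (ours, not part of the original analysis), the quoted P-value can be reproduced from the reported statistic and degrees of freedom in Python:

from scipy.stats import chi2

# Upper-tail probability of a chi-squared variate: P(X >= 20.9) with df = 24.
p_value = chi2.sf(20.9, df=24)
print(round(p_value, 2))  # ~0.64, matching the reported P-value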
Extensions
In addition to conducting marginal tests of models of sequence evolution, other applications require us to calculate the probability of a class of data patterns. Felsenstein [10] introduces a correction for ascertainment bias which involves calculating the probability of variable patterns. This can easily be done by calculating the probability of the constant patterns and subtracting it from one. More advanced forms of ascertainment bias are more difficult to correct for. For example, Nylander et al. [11] proposed correcting for the fact that morphological character matrices often lack parsimony-uninformative sites. To implement their correction, one must be able to calculate the probability of the uninformative class of patterns. Exhaustively enumerating these patterns is feasible for binary characters, but the methods that we introduce in this work will allow the usage of this form of correction on data sets that have multi-state characters.
Further work will include producing specialized forms of these algorithms designed for the case in which the rate matrix is symmetric.
Waddell et al.'s [2] marginal test can reveal, for example, the repeated under-prediction of a class of data patterns by an iid model. Importantly, the test can do this without introducing a large number of free parameters in the multinomial model that provides the reference likelihood. This results in a more powerful test. To calculate the probability of any member of a class of patterns arising on a tree, Waddell et al. [2] simulated a large number of characters and counted the proportion of them which displayed one of the patterns in the class. This simulation-based approximation clearly does not scale to large trees. The algorithm that we introduce here will enable the relatively efficient calculation of the probability of a class of data patterns, thus making the marginal tests of Waddell et al. [2] available for a larger range of phylogenetic problems.
Fig. 1: Computational time as a function of the number of tips in a tree.

Waddell et al. [2] introduced a method for assessing the adequacy of a substitution model in phylogenetics. They noted that tests of model adequacy introduced by Reeves [3] and Goldman [4,5] often lack power, particularly for data sets with a large number of sequences. These tests use a likelihood-ratio test statistic to compare the probability of the data under a phylogenetic model to the probability of the data under an "unconstrained", multinomial model. The multinomial model has a free parameter for each possible data pattern.
Table 1: Validation of Data. Below is a comparison of the expected number of sites in different pattern classes for a tree of 730-base RAG1 sequences from 40 species of mammals. The tree, model, and data are the same as those used by Waddell et al. [2], and the expected numbers of sites from their simulation-based technique were obtained by summing elements in their table 2 to correspond to the classes of patterns calculated by our algorithm. They estimated the probability of pattern classes by calculating the relative frequency of the patterns based on 100,000 simulated sites.
Gas Array Sensors Based on Electronic Nose for Detection of Tuna (Euthynnus affinis) Contaminated by Pseudomonas aeruginosa
Background: Fish is a food ingredient that is consumed throughout the world. When fish die, their freshness begins to decrease. The freshness of a fish can be determined by the aroma it produces. The purpose of this study is to monitor the odor of fish using an array of gas sensors that can detect distinct odors. Methods: The sensors were tested with three kinds of samples, namely Pseudomonas aeruginosa, tuna, and tuna contaminated with P. aeruginosa bacteria. During the process of collecting sensor data, all samples were placed in a vacuum so that the gas or aroma produced was not contaminated with other aromas. Eight sensors were used, designed and implemented in an electronic nose (E-nose) device that can retain the aroma. The data collection process was carried out for 48 h, with an interval of 6 h for each data collection. Data processing was performed by using the principal component analysis (PCA) and support vector machine (SVM) methods to obtain a score plot visualization and classification and to determine the aroma pattern of the fish. Results: The results of this study indicate that the E-nose system is able to smell fish based on the hour of storage, with 95% of the cumulative variance of the main components in the classification test between fresh tuna and tuna contaminated with P. aeruginosa. Conclusion: The SVM classifier was able to classify the healthy and unhealthy fish with an accuracy of 99%. The sensors that provided the highest response were the TGS 825 and TGS 826 sensors.
Introduction
Tuna (Euthynnus affinis) is a seawater fish that has high economic value. It contains high protein content and is rich in omega-3 fatty acids. Every 100 g has a chemical composition consisting of 69.40% water, 1.50% fat, 25.00% protein, and 0.03% carbohydrates. One of the causes of fish damage is the high water content (70%-80% of the weight of the meat), which makes it easy for microorganisms to breed. Fish damaged by microorganisms will produce volatile nitrogenous bases, also known as total volatile basic nitrogen, which mostly consist of trimethylamine (TMA), dimethylamine, and ammonia. TMA is an organic compound containing nitrogen, carbon, and hydrogen atoms, with the formula (CH3)3N. These compounds can be used to determine the freshness of fish. [1] A poor fish storage process will cause the fish to rot quickly. According to Jay (2005), bacteria that cause fish to rot include Pseudomonas (32%-60%) and Bacillus sp. (<18%). The bacterium Pseudomonas aeruginosa, one of the bacteria that causes fish spoilage, is a Gram-negative, rod-shaped, motile, aerobic bacterium that is commonly found in water, soil, plants, humans, and animals. [2] P. aeruginosa is a pathogenic bacterium in humans. It is invasive and toxigenic, so patients who have weakened immune systems can develop infections. [3] Besides that, P. aeruginosa can interfere with the human digestive tract through enterotoxins, resulting in food poisoning. A related study treated albacore tuna with two factors: the first was the ratio of Physalis leaf extract to distilled water (1:4), and the second was immersion time (60 s, 100 s). The results showed that the best quality of albacore tuna was in the L1C3 treatment (Physalis leaf extract 50 ml + 200 ml distilled water and soaking time 60') with the number of bacterial colonies being 16 × 10^5 cfu/g. The results also revealed a water content of 41.33% and a pH of 6, with a less bright appearance, a flexible dense texture, flat eyeballs, a fresh smell, and bright red gill color.
The assessment of fish-quality degradation still uses sensory methods based on appearance, texture, smell, and color. [5] So far, to clarify the level of freshness of fish, the human nose is used as an odor detector in addition to physical inspection. However, in reality, human olfaction has weaknesses, especially in standardization, because of the subjective assessment of each human being. One of the efforts for early detection of fish quality is to use an electronic nose (E-nose). [6] The E-nose is an instrument that works by imitating the working principle of the sense of smell. [7] In the mechanism of the biological nose, there are mucus and vibrissae in the nasal cavity which serve to filter and concentrate odorant molecules. Aroma molecules are carried to the epithelial tissue by the passive pressure exerted by the lungs. The olfactory epithelium contains millions of sensory cells, and olfactory receptors are located in the membranes of these cells. Receptors convert chemical signals into electroneurographic signals. This unique pattern of electroneurographic signals is decoded by the olfactory neural network. [8] In the general design of the E-nose, the pump functions as a lung, the sampling system acts as the mucus and vibrissae in the nasal passages, the sensor array acts as the olfactory receptors, and a signal processing system using a computer functions as the processing of the olfactory neural network. [9] The E-nose consists of an array of gas sensors as a substitute for olfactory receptors that function to detect odors or scents. The aroma detected by several gas sensors will then form a certain pattern. [10] The detection of freshwater fish quality has been carried out by Lintang et al. [11] That study used three kinds of freshwater fish samples. Its results indicate that the E-nose system can cluster the aroma of freshwater fish using the PCA method with the percentage of the first main component, namely 98.7% (onion), 98.8% (catfish), and 99.5% (tilapia). Sensors that gave a high response to each sample were the TGS 2620 and TGS 2600 sensors. The TGS 822 sensor gave a high response to fish when they were not fit for consumption. Furthermore, research by Fachri Rosyad and Danang Lelono classified the purity of beef based on the E-nose by using the principal component analysis (PCA) method. [12] They used mixed beef samples with variations in pork content of 20%, 40%, 60%, and 80% of the total sample mass, and the data were collected for 10 days.
The E-nose used in this study has eight sensors: TGS 2620, TGS 2611, TGS 822, TGS 832, TGS 2602, TGS 2600, TGS 826, and TGS 825. Each sensor has sensitivity to a certain type of gas. When interacting with volatile compounds from a sample, each sensor responds with a different voltage, forming a unique pattern for each detected sample. Hidayat (2015) noted that the TGS gas sensor consists of three parts, namely the sensing element, the sensor base, and the sensor cap. [13] The gas-sensing element of the TGS sensor uses metal oxides, such as SnO2. [14] The heater on this sensor acts as a trigger that enables the sensor to detect the target gas after being supplied with 5 V. Two metal elements are spaced at a predetermined distance. If the sensor detects gas, the charge density of the space between the metals will increase or decrease. When the resistance gets smaller, current flows more easily, so the sensor voltage output will be larger. The heater, which is used as a heating element for the sensing element, works optimally at temperatures between 300°C and 550°C. [15] At low temperature, the reaction rate at the metal oxide surface is very slow. When the metal oxide grains are heated at high temperature in free air, oxygen is adsorbed by the surface of the metal oxide grains, resulting in a negative surface charge. Donor electrons at the surface of the metal oxide grains are transferred to the adsorbed oxygen. This leaves a positive charge in the layer, so a barrier potential is formed which hinders the flow of electrons. [16] When a reducing gas is present, a deoxidation reaction occurs, which lowers the concentration of adsorbed oxygen on the surface of the sensing material. This causes a decrease in the barrier potential, so the electrical resistance also decreases and electrons flow easily across the potential barrier. [17] The mechanism of the increase in charge-carrier concentration resulting from the interaction between the semiconductor material and the reducing gas is described by the adsorption and reduction reactions given in [18]: oxygen adsorbed on the empty lattice of the sensing material causes the electrons in the conduction band to decrease, so a depletion region is formed and the electrical resistance is higher than when no oxygen is adsorbed. [19] The electrons released by the reducing gas are the result of the reaction of oxygen ions with the reducing gas X(g). As a result, electrons return to the conduction band and the depletion region shrinks; the electrical resistance then decreases as the carrier concentration increases. [20] The E-nose system has four main components, namely the gas sensor array, headspace system, data acquisition, and pattern recognition. [21] Gas sensors used in making E-noses include conductive polymer gas sensors, quartz micro-balances, surface acoustic wave sensors, and metal oxides. The headspace system has two processes, namely the sensing and purging processes. The data acquisition can be performed by using a microcontroller.
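The adsorption and reduction reactions referred to in [18] do not survive in the extracted text; for an SnO2-type sensor they presumably take the standard form (our reconstruction, following the description in the paragraph):

\[ \tfrac{1}{2}\,\mathrm{O_2}(g) + e^- \rightarrow \mathrm{O^-_{(ads)}} \]
\[ \mathrm{X}(g) + \mathrm{O^-_{(ads)}} \rightarrow \mathrm{XO}(g) + e^- \]

The first reaction removes conduction-band electrons and raises the resistance in clean air; the second, driven by the reducing gas X(g), returns electrons to the conduction band and lowers the resistance.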
Methods commonly used to recognize patterns include PCA, linear discriminant analysis, partial least squares, multiple linear regression, and cluster analysis, along with artificial neural network methods such as the multi-layer perceptron, fuzzy inference systems, self-organizing maps, radial basis functions, genetic algorithms, neuro-fuzzy systems, and adaptive resonance theory. [22,23] In the food industry, the E-nose can be used for odor identification to monitor production processes, such as detecting pathogenic fungi that attack strawberry crops. [24] Arshak et al.'s research in 2004 proved that the E-nose is able to sense the existence of microorganism contamination in food products by sensing the odor patterns resulting from the organisms' metabolic processes. [16] In 2015, Triyana et al. succeeded in making a gas sensor that detects the aroma of tempeh during fermentation to verify the tempeh aroma profile related to microorganism growth. [17] Based on its advantages, namely rapid and nondestructive detection, the E-nose has been widely used in many types of meat evaluation. [25] In the medical field, the E-nose is also able to detect bacterial biofilms that cause many oral diseases, such as Streptococcus mutans. [26] In recent years, the development of electronic sensor technologies such as the electronic tongue and E-nose has shown favorable applications for pattern detection in daily life. [27] The present study aims to characterize fresh tuna and P. aeruginosa-contaminated tuna based on the shelf time by using the pattern of the gas sensor array system on the E-nose.
Sample preparation
One to two loopfuls of P. aeruginosa were taken from an agar slant and then put into 9 ml of TSB and homogenized. Bacterial cultures were incubated for 2 h. Furthermore, 1 ml of culture was taken and put into a cuvette to measure its optical density by using a spectrophotometer. After that, the culture solution was supplemented with 2 ml of 2% sucrose and vortexed to make it homogeneous. Two milliliters of sample was taken using a micropipette and put into a 10 ml beaker. Then, the bacteria were incubated for 48 h at 37°C. The treatments were administered to the bacteria after the incubation process. The fresh tuna meat sample, cut to a weight of 3 g, was then contaminated with the P. aeruginosa bacteria.
Preheating time sensor
All sensors were warmed up first for 30 min so that they were stable and could work properly. The sample used in this process was clean air, commonly known as the baseline.
Normalization of sensors
A stability test of the eight gas sensors, each of which has sensitivity to a certain gas, was carried out. This process was done to equalize the baseline of each sensor to make further data processing easier. A baseline is the sensor response to a reference substance, for example, clean air or nitrogen gas. [11] Baseline normalization is done by reducing each datum by the first value, [28] where Y_n is the value of the sensor data and Y_1 is the first or lowest value of the data obtained.
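The normalization formula itself is elided in the extracted text; from the definitions just given it is presumably

\[ \Delta Y_n = Y_n - Y_1 , \]

that is, each sensor's response curve is shifted so that it starts from zero.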
Sensor response test to H2S
The sensor response test to H2S was done by using concentrations of 1-5 ppm. H2S gas was then sensed to obtain the voltage results for each test.
Sample testing
After the samples were prepared, they were placed in a closed sample chamber. Next, a repetition test was conducted by taking four peaks of the E-nose response signal for each sample. The odor-on period was set at 180 s, while the odor-off period was set at 160 s. Each test was carried out three times with odor-off and odor-on cycles on the sample. The sample sensing process used a sampling rate of 17 Hz. After data were collected, the sensor was left exposed to free air for 5 min before continuing to the next sample measurement.
Principal component analysis
The data resulting from the sample testing were processed as the average of two repetitions. The correlation between the sensor voltage output and the type and concentration of gas can be used to obtain information about the freshness of the fish being tested. Fresh fish meat will produce a different sensor response from fish meat that has begun to rot. The data analysis was performed by using the PCA method. PCA is a method that involves a mathematical procedure that transforms a large number of correlated variables into a small number of uncorrelated variables, without losing the important information in them. [14] The PCA procedure aims to simplify the observed variables by shrinking or reducing their dimensions, which is done by eliminating the correlation between the independent variables through the transformation of the original independent variables into new variables that are not correlated at all, commonly called the principal components. PCA transforms the data into new coordinates, where the first coordinate is the first principal component obtained from the largest eigenvalue, the second coordinate is the second principal component obtained from the second-largest eigenvalue, and so on. After several PCA components that are free of multicollinearity are obtained, these components become new independent variables whose effects on the dependent variables can be analyzed using regression.
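In standard notation (our summary, not equations reproduced from the paper): given the n × p matrix X of mean-centered sensor readings, PCA diagonalizes the sample covariance matrix

\[ \Sigma = \frac{1}{n-1} X^{\top} X, \qquad \Sigma\, w_k = \lambda_k w_k , \]

and the k-th principal component score is z_k = X w_k, with the fraction of variance explained by the first m components equal to (λ_1 + ... + λ_m) / Σ_k λ_k.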
In this study, the range of the values obtained from the E-nose was too wide. Hence, before applying PCA, we normalized the data to scale it into (0, 1). In machine learning, data normalization is important for obtaining higher accuracy. [29] The E-nose responses were normalized using the min-max scaler in Python; its formula is given below. After applying the min-max scaler, the data were scaled between (0, 1). Then, the feature extraction process was performed using PCA.
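The min-max formula referred to above is elided in the extracted text; the standard definition, which is what scikit-learn's MinMaxScaler implements, is

\[ x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} , \]

applied per sensor channel so that every feature ends up in [0, 1].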
The mathematical steps of the PCA implementation follow the standard eigen-decomposition outlined above. After implementing PCA with three components, the labeling process was performed. A sample of the extracted features and labels is given in Table 1.
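A minimal Python sketch of this scaling and feature-extraction pipeline is given below; the array names and the synthetic placeholder data are our own assumptions, while the use of Python's min-max scaler and a three-component PCA follows the text:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

# X: one row per measurement, one column per gas sensor (8 sensors here);
# y: sample labels, e.g. 0 = fresh tuna, 1 = contaminated tuna.
rng = np.random.default_rng(0)
X = rng.random((60, 8))           # placeholder for the E-nose voltage matrix
y = rng.integers(0, 2, size=60)   # placeholder labels

X_scaled = MinMaxScaler().fit_transform(X)  # scale every sensor into [0, 1]

pca = PCA(n_components=3)                   # keep the first three PCs
features = pca.fit_transform(X_scaled)
print(pca.explained_variance_ratio_.sum())  # cumulative explained variance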
Support vector machines
The support vector machine (SVM) classifier is one of the machine learning techniques that can help solve big data classification problems. [30] Through the kernel trick, the SVM classifier can separate data in a higher-dimensional feature space.
The SVM kernel can be represented by the formula given below, in which φ(x) refers to a function that maps the feature vectors x_i and x_j into a space where they can be compared through an inner product. To classify data from different domains, many SVM kernel functions have been developed. The linear SVM kernel does not transform the data. The polynomial SVM kernel of degree d transforms the data by adding simple nonlinear combinations of the features. The radial basis function kernel is another type of SVM kernel that can classify different types of data efficiently. [31,32] In the current study, we aim to classify fresh and contaminated tuna. SVM is a supervised learning algorithm that analyzes the given dataset and finds the patterns in the data.
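The kernel formula itself is elided in the extracted text; in the standard notation it is

\[ K(x_i, x_j) = \varphi(x_i)^{\top} \varphi(x_j) , \]

with the choices mentioned in the paragraph being the linear kernel K(x_i, x_j) = x_i^T x_j, the polynomial kernel K(x_i, x_j) = (x_i^T x_j + 1)^d, and the radial basis function kernel K(x_i, x_j) = exp(−γ ||x_i − x_j||²).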
Some mathematical steps of the SVM algorithm implementation are as follows. 1. The SVM algorithm determines the regression model function by using the minimization given below.
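The minimization itself does not survive in the extracted text; given the variables defined next, it is presumably the standard ε-insensitive support vector regression primal,

\[ \min_{w,\,b,\,\xi,\,\xi^*} \; \frac{1}{2}\lVert w \rVert^2 + c \sum_{i=1}^{N} \left( \xi_i + \xi_i^* \right) \]

subject to

\[ y_i - w^{\top}\varphi(x_i) - b \le \varepsilon + \xi_i, \qquad w^{\top}\varphi(x_i) + b - y_i \le \varepsilon + \xi_i^*, \qquad \xi_i,\, \xi_i^* \ge 0 . \]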
In the above formulation, w represents the weight vector, c represents the penalty factor, ξ_i* and ξ_i represent the relaxation (slack) components, φ(x) indicates the transformation function, b represents the offset, and ε represents the upper limit of the error. 2. Lagrange multipliers, represented by a_i* and a_i, are then introduced, giving the dual optimization model.
In the SVM algorithm, two parameters are crucial to adjust. The first is the penalty factor, represented by c, and the second is the kernel parameter, represented by γ.
Gas sensor series heating results
Each sensor had been preheated before use so that reactions with gases cause a change in the resistance value at the output. This initial treatment was done to prepare the sensor in steady-state conditions. Preheating was carried out at room temperature and in clean-air conditions. Each sensor has a different standard preheating time according to the datasheet published by its manufacturer. The heating time needed for the sensors to stabilize is shown in Figure 1. It can be seen that the preheating of each sensor was stable at 60 s, with the assumption that at that time all sensors were stable and ready for use.
Electronic nose H2S sensor response
H2S gas is an indicator of the odor produced by spoiled tuna samples. Therefore, variations in the concentration of H2S were applied: 1 ppm, 2 ppm, 3 ppm, 4 ppm, and 5 ppm. The sensor response to changes in H2S concentration is shown in Figure 2. It can be seen that each sensor reacted to H2S with a different sensitivity. This was indicated by the increasing output voltage of each sensor along with the increase in concentration.
From the datasheets, it is known that the sensors sensitive to H2S include the TGS 2602 and TGS 825 sensors. Therefore, a test of the TGS 2602 and TGS 825 sensors was carried out based on the shelf-life of the sample. The test results are shown in Figure 3.
H2S gas occurs due to natural processes, as a product of the decomposition of organic substances by bacteria, or because it is intentionally made. The formation of H2S gas was obtained from the reaction given below. The procedure for making H2S gas refers to research done by Prasetyo (2002) using FeS and 1 M HCl, which were reacted with a composition of 0.0001 g and 10 ml. [33] The gas formed was then stored in a 600 ml tube and immediately fed into the E-nose to obtain a voltage value in response to H2S gas. After that, the voltage value was converted to ppm so that the results shown in the figure were obtained.
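The reaction equation announced above does not survive in the extracted text; for iron(II) sulfide and hydrochloric acid it is the standard displacement reaction (our reconstruction):

\[ \mathrm{FeS} + 2\,\mathrm{HCl} \rightarrow \mathrm{FeCl_2} + \mathrm{H_2S}\uparrow \]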
Electronic nose response to sample
As can be seen in Figure 4, the E-nose produced different sensor responses in each test of the three sample types, namely P. aeruginosa bacteria, tuna, and tuna contaminated with P. aeruginosa bacteria. The sensor responses revealed a typical response value for each sample, so each sample has a different sensor with the highest output.
Accuracy test
An accuracy test was performed to determine the closeness of the measurement results to the actual value. Accuracy is a close match between the result of a measurement and the true value of the quantity being measured. To assess it, the percentage of recovery (% recovery) of the test results is measured. Accuracy is considered acceptable when the recovery is within a tolerance of ±10%, i.e., within the range of 90%-110%.
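The % recovery figure referred to here is presumably the usual ratio of measured to true concentration,

\[ \%\,\text{recovery} = \frac{C_{\text{measured}}}{C_{\text{true}}} \times 100\% , \]

so a reading of 4.6 ppm against a true 5 ppm, for instance, would give 92% recovery and still fall inside the 90%-110% window.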
The results of the accuracy test on H2S gas detected by the TGS 2602 and TGS 825 sensors are shown in Table 2.
Principal component analysis score plots
PCA was performed to analyze the gas sensor array data for the detection tests and sample classification, so that the ability of the gas sensor array could be known and the optimal type of sensor for this study determined. The PCA score plot graph can be used to determine the existence of groupings, clusters, and trends. An existing data grouping indicates the existence of two or more data distributions. Figure 4 shows PCA score plots differentiated based on (a) sample storage time and (b) type of sample. Figure 5 presents a graph of the score plot tested on the basis of the aroma of fish fit for consumption. It is found that fish suitable for consumption had a storage period of 0-18 h, while fish not fit for consumption cluster elsewhere. This means that the PCA method is able to distinguish between the aroma of fish samples that are fit for consumption (fresh) and those that are not (rotten).
The PCA method can capture the variation in the data for P. aeruginosa, tuna, and P. aeruginosa-contaminated tuna. The eigenvalues generated from the PCA score plot explain how the data's information is distributed across the new principal component coordinates.
Interpretation of principal component analysis loading plots
The results of the loading plot for all samples are shown in Figure 6. It is found that the variables with values close to 1 or −1 are the TGS 825 and TGS 826 sensors. This shows that the TGS 825 and TGS 826 sensors are the most influential and the most responsive to the samples. The loading plots shown in Figure 7 are used to identify the most influential variables for the PCA components. If the loading value of a variable is 0, then that variable is considered to have the least effect on the component analysis; if the variable has a value close to 1 or −1, it has the most influence on the PCA components.
Interpretation with support vector machine
SVM is a supervised learning model; it can be used for both regression and classification problems. For binary classification tasks, SVM is one of the most commonly used methods in machine learning, and it has grown in popularity to become one of the most widely used machine learning algorithms. SVM is used in a variety of disciplines, such as biomedicine and handwriting recognition. [34] Clinical diagnosis, weather forecasting, stock exchange analysis, and image analysis are among the applications that employ SVM. SVM learns from experience and assigns targets to objects. For instance, in order for an SVM to differentiate between real and fake credit cards, it must examine a huge number of real and fake credit card pictures. SVM's primary role is to distinguish binary-labeled data based on a boundary that achieves the largest gap between the labels. [29] Most supervised machine learning algorithms suffer from the curse of dimensionality: when a machine learning method retrieves a small number of instances while dealing with many features, its efficiency may be harmed. The SVM classifier has been shown to be vulnerable to the dimensionality curse. [35] Because of these properties, we used the SVM classifier to classify the healthy and unhealthy fish meat. We analyzed the data using the publicly available Weka tool. After obtaining the PCA features with three components, the data were divided into two parts: 80% of the data were used to train the model, while 20% were used to test it. We used the 10-fold cross-validation method to evaluate our SVM model. The performance metrics of the SVM classifier are given in Table 3. The SVM classifier was able to classify the data with an accuracy of 99.50%.
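For readers who prefer Python to Weka, the classification step could be sketched as follows (a minimal sketch under the same 80/20 split and 10-fold cross-validation described in the text; it reuses the features and y arrays from the earlier PCA snippet, and the specific kernel settings are our assumptions):

from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

# Hold out 20% of the PCA features for testing, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.2, random_state=0)

clf = SVC(kernel='rbf', C=1.0, gamma='scale')   # penalty c and kernel γ to tune
cv_scores = cross_val_score(clf, X_train, y_train, cv=10)
print('10-fold CV accuracy:', cv_scores.mean())

clf.fit(X_train, y_train)
print('Held-out accuracy:', clf.score(X_test, y_test))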
Discussion
The E-nose sensors were preheated before starting the sensing process. This initial treatment was carried out to prepare the sensors in steady-state conditions. Preheating was performed at room temperature and in clean-air conditions. [36] Each sensor has a different standard preheating time according to the datasheet published by the sensor manufacturer. [37] The preheating process was done at an interval of 60 s. The sensor characteristics test was conducted using H2S gas, one of the odors produced by tuna samples. The calibration was carried out with variations in H2S concentration, namely 1 ppm, 2 ppm, 3 ppm, 4 ppm, and 5 ppm. The sensor response to changes in H2S concentration is shown in Figure 2, where the sensors most responsive to H2S gas were the TGS 2602 and TGS 825. These results are in accordance with the sensor datasheets. Figure 3 shows that the TGS 825 sensor response increases along with increased shelf-life, whereas the TGS 2602 sensor response peaked at the 30th hour. E-nose testing was carried out on P. aeruginosa bacteria, tuna, and tuna contaminated with P. aeruginosa bacteria. The tests were carried out based on the samples' shelf-life: the samples were stored from 0 to 48 h, and the test was done every 6 h with two repetitions.
The working mechanism of the electronic nose system consists of aroma detection by the sensor array, signal preprocessing, and processing by a pattern recognition system with computational analysis. [38,39] Initially, the odor to be detected is exposed to a sensor array, which functions similarly to the human olfactory cells. Analog data from the sensors are converted into digital data by an analog-to-digital converter (ADC) to be saved to a computer and further analyzed. The data from the ADC are preprocessed first. This processing serves to prepare the signal so that it can be easily handled by a pattern recognition machine; this stage works analogously to the vesicle layer in the human sense of smell. The final stage is processing by the pattern recognition system, which aims to classify and predict unknown samples; its function resembles that of the olfactory center in the brain. [36] Thus, the E-nose system detects and classifies aromas automatically as a quality controller of aroma recognition, especially for the food industry. Fish stored at room temperature will decay, due to the growth of microorganism activity and the formation of foul-smelling ammonia (NH3). Ammonia is what causes fish to produce a bad smell. The ammonia content produced per hour continues to increase because the protein in the sample continues to be degraded as the shelf-life increases. [40,41] The mechanism of the E-nose in detecting the odor of (a) bacteria and (b) tuna is shown in Figure 8.
At first, the smell of the sample is tested on a sensor that has been preheated. The sensor works by detecting the gases contained in the sample odor. A sensor is a device that detects symptoms or signals originating from changes in energy, such as electrical energy. The sensors used in this study are gas sensors that respond to the concentration of certain particles such as atoms, molecules, or ions in the gas and convert it into an electrical signal. [42] Commonly, such a sensor uses a metal oxide semiconductor material to detect certain gases. Changes in the electrical properties of metal oxide semiconductors are caused by interactions with gas molecules, preceded by the absorption of oxygen in the semiconductor: oxygen molecules are adsorbed on the semiconductor surface and capture electrons from the conduction band. [28] The formation of H2S by microorganisms indicates the decomposition of sulfur-containing amino acids (the smallest parts of a protein), which are produced when proteins are hydrolyzed to meet the nutrient needs of microorganisms. [43] The use of P. aeruginosa in this study aims to determine the level of decay of fish contaminated with bacteria. These bacteria generate one or more pigments produced from aromatic amino acids such as tyrosine and phenylalanine. [44,45] Data analysis using PCA aims to reduce the dimensions of the correlated variables into linearly uncorrelated reduced variables, called the principal components, so as to explain as much as possible of the variance with the minimum number of principal components. The number of input variables in the PCA process is eight, representing the number of sensors on the E-nose. These variables are eventually reduced to two dimensions consisting of the first principal component (PC1) and the second principal component (PC2), which represent the percentage of the variance obtained. The total variance of the data is used to create a two-dimensional data visualization graph for qualitative analysis and interpretation of the information. Figure 4 presents the two-dimensional score plot on the two principal components for the samples; the two principal components of the score plot explain 95% of the variance. Figure 5 shows how PCA captures the variation between fish and fish contaminated with bacteria.
Conclusion
The electronic nose is able to detect the quality of tuna (E. affinis) and tuna contaminated with P. aeruginosa based on odor, with the two main components explaining 95% of the variance. The E-nose, which consists of eight gas sensors (TGS 825, TGS 2600, TGS 2620, TGS 832, TGS 822, TGS 826, TGS 2602, and TGS 2611), can identify rotting tuna (E. affinis) based on smell, which is indicated by the increasing concentration of the gases produced and the increasing voltage received by the E-nose. The results of this study indicate that the electronic nose system is able to smell fish based on the hour of storage, with 95% of the cumulative variance of the main components in the classification test between fresh tuna and tuna contaminated with P. aeruginosa. The SVM classifier was able to classify the healthy and unhealthy fish with an accuracy of 99%. The sensors that provided the highest response were the TGS 825 and TGS 826 sensors.
Financial support and sponsorship
None.
Conflicts of interest
There are no conflicts of interest.
"year": 2022,
"sha1": "1186fcf27f61c74c9b13c74385b15e87fe978dda",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2a6c1feedd1a1a1bf0a6842bc25bda383106226d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Nutraceutical Properties of Olive Oil Polyphenols. An Itinerary from Cultured Cells through Animal Models to Humans
The increasing interest in the Mediterranean diet hinges on its healthy and anti-ageing properties. The composition of fatty acids, vitamins and polyphenols in olive oil, a key component of this diet, is considered a key feature of its healthy properties. Therefore, it is of significance that the Rod of Asclepius lying on a world map surrounded by olive tree branches has been chosen by the World Health Organization as a symbol of both peace and well-being. This review travels through most of the current and past research, recapitulating the biochemical and physiological correlations of the beneficial properties of olive tree (Olea europaea) polyphenols and their derivatives found in olive oil. The factors influencing the content and beneficial properties of olive oil polyphenols will also be taken into account together with their bioavailability. Finally, the data on the clinical and epidemiological relevance of olive oil and its polyphenols for longevity and against age- and lifestyle-associated pathologies such as cancer, cardiovascular, metabolic and neurodegenerative diseases are reviewed.
Introduction
Humanity living in developed countries is experiencing an increase in life expectancy; however, this positive outcome seems to come at the cost of a greater incidence of lifestyle- and age-associated diseases. These include cardiovascular diseases (CVDs), cancer and amyloid pathologies, both systemic (e.g., type 2 diabetes, T2DM) and neurodegenerative (e.g., Alzheimer's disease, AD, and Parkinson's disease, PD). Most of these pathologies are particularly hard to treat due to their slow progression, possibly spanning several decades, and the appearance of their clinical signs at mid- or old age, when cell loss is already conspicuous and irreversible. It is then evident that, in the absence of early reliable diagnostic tools and effective therapies, prevention is still the best strategy to combat these pathological conditions.
It is not surprising that the shift of researchers' attention from "cure" to "prevention" has gradually led to an extension of the focus of their search, by adding "food", and hence "diet", to "drugs". Several epidemiological and observational studies support the belief that traditional alimentary regimens such as the Mediterranean (MD) and Asian diets are associated with improved ageing and a reduced incidence of age-associated diseases, including CVDs, cancer and cognitive decline [1]. The new design of the MD pyramid, proposed by the Mediterranean Diet Foundation Expert Group [2], emphasizes the importance, in addition to caloric restriction (CR), of frugality, conviviality, physical activity and adequate rest; it also confirms the importance of the plant-based core (vegetables, fruits, legumes, grains, nuts and seeds) and, in particular, of extra virgin olive oil (EVOO) as the main lipid source. Moreover, a key feature of the MD is the high intake of phytonutrients (notably vitamins and natural phenols) that, by themselves, can induce multiple signalling pathways involved in protein homeostasis, DNA repair, metabolism regulation and antioxidant defences that recall a caloric restriction regime [3,4].
A key feature of natural phenolics is their remarkable antioxidant power. The latter has been associated with many beneficial properties of plant polyphenols via modulation of oxidative pathways [2], through direct action on enzymes, proteins, receptors and several types of signalling pathways [3,4] as well as by interfering with epigenetic modifications of chromatin [5]. In particular, the beneficial effects of olive oil and olive leaf extracts were already known in the ancient world, and scientifically investigated since the last couple of centuries, leading to a focus on their biological properties, including the antioxidant, antimicrobial, hypoglycemic, vasodilator and antihypertensive effects, whose clinical significance was first reported in 1950 [6]. Some of these properties have led to the inclusion in the European Pharmacopoeia (Ph. Eur.) of the 80% alcoholic extract of olive leaves [7], containing oleuropein (OLE), hydroxytyrosol (HT), caffeic acid, tyrosol, apigenin and verbascoside [8].
The increasing interest in natural polyphenols has produced a plethora of studies that have investigated their medical efficacy in vitro, in cell cultures, in model organisms and, to a lesser extent, in humans, together with the biochemical and biological modifications underlying their effects. Plant polyphenols, or their molecular scaffolds, can also be the starting point in developing new drugs especially designed to combat chronic inflammatory states, atherosclerosis and the risk of thrombosis related to CVDs [9], cancer [10], amyloid deposition associated with AD and T2DM, and age-associated neurodegeneration [1,11].
Here we review the results of the studies on the polyphenols found in the olive tree and in the EVOO and the most recent advances towards their possible clinical use, mainly concerning neurodegenerative diseases, atherosclerosis, cancer, T2DM and the metabolic syndrome.
Olive Tree Polyphenols
Natural phenolic substances are secondary plant metabolites, a major group of plant compounds (over 8000) chemically characterized by the presence of one or more aromatic rings with one or more hydroxyl substituents [1]. Plant polyphenols are elaborated as phytoalexins used to combat pests and bacterial infections. The olive tree (Olea europaea) produces its own battery of polyphenols that includes flavonols, lignans and glycosides. The latter belong to the class of iridoids, a type of monoterpenes composed of a cyclopentane ring fused to a six-atom oxygen heterocycle; the molecules containing a broken cyclopentane ring are known as secoiridoids.
Olive tree polyphenols are found in the lipid and water (as minute droplets), fractions of olive oil, and include the phenolic alcohols, HT (3,4-dihydroxyphenylethanol, 3,4-DHPEA) and tyrosol (p-hydroxyphenylethanol, p-HPEA) and their secoiridoid precursors. These include the HT ester of elenolic acid (known as oleuropein, OLE), the main responsible for the bitter taste of olive leaves and drupes; the dialdehydic derivative of decarboxymethyl elenolic acid bound to either HT (3,4-dihydroxyphenylethanol-elenolic acid dialdehyde, 3,4-DHPEA-EDA, also known as oleacein) or to tyrosol (p-hydroxyphenylethanol-elenolic acid dialdehyde, p-HPEA-EDA, also known as oleocanthal). The latter is the main responsible for the burning sensation that occurs in the back of the throat when consuming EVOO [12,13]. Olive tree polyphenols also include verbascoside, the caffeoylrhamnosylglucoside of HT, a phenolic acid derivative, the lignans 1-acetoxypinoresinol and pinoresinol, and other secoiridoids.
Olive tree polyphenols may be responsible for some of the properties of medical interest in this plant; these include anti-atherogenic, antihepatotoxic, hypoglycemic, anti-inflammatory, antitumor, antiviral and immunomodulator activities [14,15] that appear only in part related to the antioxidant power of these molecules. OLE, demethyloleuropein and ligstroside, together with their metabolic derivatives (elenolic acid, HT), are the most abundant phenolics in the EVOO [16].
Phenolic concentration in EVOO depends on several variables such as (i) the olive cultivar (Figure 1) and the ripening stage of the fruit [17]; (ii) environmental factors (altitude, cultivation practices, and amount of irrigation); (iii) extraction conditions (heating, added water and malaxation); (iv) the extraction systems used to separate oil from olive pastes (pressure, centrifugation systems); and (v) storage conditions and time, due to spontaneous oxidation and suspended particle deposition [18]. At best, the content of OLE in EVOO can reach levels exceeding 60 mg/100 g (Figure 1).
Biochemical Effects of Olive Polyphenols Considered as Caloric Restriction Mimickers
The high content of plant polyphenols, one of the main features underlying the beneficial effects of the MD, is provided mainly by the use of EVOO, the main source of dietary lipids. The wide and increasingly recognized beneficial properties of plant polyphenols have led to proposing them as nutraceuticals and the foods containing them as functional foods. The latter are defined as "Natural or processed foods that contain known or unknown biologically-active compounds; these foods, in defined, effective, and non-toxic amounts, provide a clinically proven and documented health benefit for the prevention, management, or treatment of chronic diseases" [19]. Extensive research, clinical trials, and epidemiological and observational studies have long associated the polyphenols of the MD and of the Asian diet with a number of physiological and metabolic effects [1,20-23]; the latter, most often, are similar to those associated with caloric restriction (CR) in humans [24,25], indicating that these substances are CR mimickers [26] (Table 1).
The effectiveness of CR in prolonging lifespan and reducing the risk of age-associated diseases is widely recognized [25]. However, a CR regime can hardly be sustained for long periods of time; this is why dietary integration with factors able to mimic the beneficial effects of a reduced caloric intake would be highly valuable. Plant polyphenols, including olive ones, induce CR-like effects in muscle, brain, fat tissue and kidney in several ways, particularly through the activation and increased levels of sirtuins (Sirt) [26,27].
Sirt are NAD-dependent class III deacetylases [28] whose activity is modulated by the metabolic state of the cell and is induced by CR in most tissues. More contradictory data on CR-induced Sirt1 changes have been reported in the liver, where decreased Sirt1 levels were found, with an ensuing decrease of hepatic fat synthesis and accumulation [29]. Sirt are involved in lifespan and metabolism regulation in various organisms [26,29]. Among the Sirt family, Sirt1, the most investigated, protects cells against oxidative stress and DNA damage. Many of the cellular effects of Sirt1 are mediated by gene regulation following its ability to control the acetylation/deacetylation state, and hence the activity, of several transcription factors, including p53, FOXOs, NFκB, Nrf2, PPARα/γ, PGC1α and LXR [30-35]. These factors are known to be involved in the control of apoptosis, autophagy, cell proliferation, oxidative stress, inflammation, protein synthesis, and carbohydrate and lipid metabolism. Sirt induction by CR, resveratrol or other plant polyphenols [36] results in many cellular outcomes and is considered responsible for the epigenetic effects of these molecules [5] (see below). In particular, it has been reported that Sirt induction (i) counteracts elevated inflammation and lowers cholesterol and triglyceride synthesis [37]; (ii) reduces oxidative damage markers and increases the expression of Nrf2-dependent genes that modulate antioxidant factors in mice fed a diet rich in olive oil phenolics [38]; (iii) activates Nrf2, thus attenuating oxidative stress with endothelium protection in mice fed resveratrol [39]; (iv) downregulates the pro-inflammatory agent NFκB, thus inhibiting the inflammatory response in rat hearts subjected to myocardial ischemia and reperfusion [40]; and (v) directly inhibits the transcriptional activity of PPARγ, with ensuing anti-adipogenic effects [41]. In addition, a cross-talk exists between Sirt1 and AMP-activated protein kinase (AMPK), a sensor of the energy state of the cell. The AMPK-Sirt1 relation results in mutual activation [42], with modulation of the response of the organism to limited nutrients or increased energy demand, and in autophagy activation. Actually, autophagy, in addition to being induced by CR [43], has been reported to be induced by Sirt1 and AMPK [44] which, in turn, can be activated by resveratrol [45] or by olive polyphenols [46-48].

Table 1. EVOO (extra virgin olive oil) polyphenols, similarly to resveratrol, act as caloric restriction (CR) mimickers.
Resveratrol: Sirt activation [25,27,30,36]; increased antioxidant defenses via Nrf2 induction [39]; reduced inflammation via NFκB downregulation [40]; autophagy induction via AMPK activation [45].
HT [41]: autophagy induction via AMPK activation or direct modulation of the insulin/IGF1/AKT and the mTOR pathways [46-48].

The data reviewed above indicate that most of the biochemical and physiological effects of plant polyphenols go well beyond their known antioxidant power and mechanistically support their apparent protection against a number of diseases (see below). In this respect, as pointed out above, EVOO polyphenols, notably OLE, have been shown to directly modulate the insulin/IGF1/AKT and the mTOR pathways, whose downregulation results in FOXO3 activation with ensuing transcription of homeostatic genes favoring longevity and reducing inflammatory states. mTOR, a master regulator of cell life, is one of the most potent upstream regulators of autophagy; activation of the latter appears to be one of the ways olive polyphenols can exert most of their beneficial effects against neurodegeneration [46,47]. Genetic inhibition of autophagy results in degenerative modifications in mammalian cells that compromise the longevity-promoting effects of CR and recall the aging-associated ones; conversely, normal or pathological aging is often associated with impaired autophagy [49]. A plethora of studies has clearly shown that plant polyphenols, including those in the olive tree, control the phosphorylation state of signaling molecules such as PI3K, Akt, eNOS, AMPK and STAT3; these are involved in the mechanism of ischemic preconditioning [50] and in autophagy promotion via Sirt1 activation and/or via a Ca2+ increase with ensuing stimulation of the calcium/calmodulin-dependent protein kinase kinase β (CaMKKβ)-AMPK-mTOR pathway [48]. In cancer cells, the same polyphenols appear to promote cell death by stimulation of apoptosis, with features that appear to depend on the cell type [51-53], as better specified below.
Overall, the data currently available support the idea that different plant polyphenols, including those from the olive tree, are able to mimic CR effects by affecting the same, or very similar, cellular targets and can therefore be taken into consideration for prevention and/or long-term treatment of aging-associated diseases resulting from chronic inflammation or transcriptional, redox or metabolic derangement.
Possible Uses of Olive Polyphenols in Disease Prevention and Therapy
The above conclusions are confirmed by an increasing body of studies carried out on cultured cells, model animals and humans (for the latter, see also Section 6). These studies provide compelling evidence that plant polyphenols, including olive polyphenols, are potential candidates for prevention and therapy of a number of diseases and pathological conditions, particularly cancer and several aging-associated degenerative diseases [54] (Figure 2). The following section summarizes recent studies on olive polyphenols providing support to this view.
Olive Polyphenols and Cancer
Information on the anticancer power of olive polyphenols is increasingly available. The data refer mainly to studies carried out on cultured mesothelioma, pancreatic, hepatoma, HeLa, prostate and particularly breast cancer cells, and also on tumors in animal models. These studies have highlighted the effects of OLE and HT on calcium dynamics: by acting on T-type Ca2+ ion channels, they lead to increased calcium concentrations and impaired cell proliferation [55,56]. A number of studies have highlighted the anti-proliferative and pro-apoptotic effects of olive polyphenols on cancer cells [57], leading to the conclusion that these effects stem from different mechanisms depending on the cell type (Table 2).
OLE and HT were shown to reduce angiogenesis via downregulation of cyclooxygenase-2 (COX-2) expression, prostanoid production and matrix metallopeptidase 9 (MMP-9) protein release, together with a reduction of intracellular ROS levels and NFκB activation [58]. Polyphenol-stimulated apoptosis has been reported to proceed (i) via caspase activation involving pro-apoptotic Bcl-2 family members and PI3K/AKT signaling in pancreatic cancer and hepatoma cells [51,56] and (ii) through the dose-dependent cytoplasmic increase of c-Jun N-terminal kinase (cJNK), p53, p21, Bax and cytochrome c in HeLa and cervix carcinoma cells [52]. Activation of the p53 or the G protein-coupled estrogen receptor 1/30 (GPER1/GPR30) pathway [53,59], as well as inhibition of the anti-apoptotic, pro-proliferative protein NFκB and of cyclin D1, its main oncogenic target, in breast cancer cells [60], have also been shown.
Olive phenols seem to be particularly effective in breast cancer cell models. In these cells, HT was shown to induce cell cycle arrest in the G0/G1 phase through a decrease in the cyclin D1 level [61], and OLE apparently prevents cancer metastasis by increasing the tissue inhibitors of metalloproteinases (TIMPs) and by suppressing MMP gene expression [62]. OLE has also been reported to inhibit aromatase, a cytochrome P450 family enzyme proposed as an important pharmacological target for breast cancer treatment [63], and to induce a complete recovery of sensitivity to trastuzumab (>1000-fold increase) in SKBR3/Tzb100 cells, a model of acquired resistance. The latter effect provides one of the first examples of how selected nutrients provided by an EVOO-enriched MD (particularly OLE aglycone) positively affect human epidermal growth factor receptor 2 (HER2)-driven breast cancer [64].
OLE is able to reduce cell proliferation through inhibition of fatty acid synthase (FAS) gene expression in certain colorectal cancer cells [65] and in prostate cancer cells. In the latter, this effect is accompanied by reduced cell viability and by the induction of thiol group modifications and of reactive oxygen species (ROS), as well as by increased expression of γ-glutamylcysteine synthetase, pAkt and heme oxygenase-1 [66]. OLE has also been reported to disrupt actin filaments, with ensuing disassembly of the cytoskeleton, in different cell lines [67] and to shut down, in MDCK cells, epithelial-mesenchymal transition, a key process in the progression of organ fibrosis toward organ failure and of cancer toward metastasis [68].
A recent review has summarized the data concerning the anti-cancer activity of olive polyphenols reported in the literature, proposing that the effect of EVOO secoiridoids is related to the activation of gene signatures associated with protection against cell aging and stress, including ER stress and the unfolded protein response, Sirt1 and Nrf2 signaling [69]. Moreover, EVOO polyphenols activate AMPK, promote apoptosis in cancer cells and suppress several genes related to the Warburg effect and to cancer stem cell renewal. Finally, EVOO polyphenols prevent the age-related changes in cell size, heterogeneity, arrangement and β-galactosidase staining (a marker of cell senescence) displayed by human fibroblasts at the end of their proliferative life period [69]. As such, EVOO polyphenols can be considered a new family of plant-produced gerosuppressants that molecularly "repair" the AMPK/mTOR-driven path, protecting against aging and age-related diseases, including cancer [69]. In conclusion, the data presently available support the idea that OLE and other olive polyphenols hold promise as potential chemotherapeutic agents for the treatment of malignant mesothelioma and of breast, pancreatic, prostate and other types of tumors. Nevertheless, these potential beneficial effects need full confirmation in animal models and extensive investigation in human subjects before these molecules can be proposed as new possible anti-cancer agents.
Olive Polyphenols and Cardiovascular Disease (CVD)
Cardiovascular endothelium is a main target of all major risk factors for heart disease (hypertension, hyperglycemia, hyperlipidemia, inflammation, aging), and its damage is one of the first steps in the development of CVD. Cardiomyopathy can also be caused by a number of drugs, including the antineoplastic antibiotic doxorubicin [70]. The key feature shared by the CVD risk factors is increased ROS; in turn, ROS production by endothelial mitochondria contributes significantly to heart disease [71,72]. Olive polyphenols (OLE, HT, oleacein, elenolic acid and tyrosol) display important beneficial properties against atherosclerosis [73] and CVD (reviewed in [74]) (Table 3). At nutraceutical doses, OLE was reported to protect against doxorubicin-induced cardiomyopathy in rat models by positive modulation of oxidative stress markers [75], AMPK activation and iNOS suppression [76]. The strong antioxidant properties of these substances, particularly HT and OLE [74], could explain, at least in part, these effects. In fact, OLE, HT and tyrosol are able to reduce the kinetics and the extent of lipid peroxidation [77] and to protect isolated rat hearts against ischemia/reperfusion-induced oxidative stress when provided at doses corresponding to the average intake in a normal MD [78]. Moreover, OLE has been reported to reduce the extent of infarcted tissue and total cholesterol and triglyceride levels in both normal and hypercholesterolemic rabbits, thus providing cardioprotection even before the ischemic event [79]. Taken together, these data warrant further studies to provide convincing support for the future use of olive polyphenols, or some of their derivatives, as cardioprotective agents.
Olive oil, and particularly its polar lipid extract, has been shown to be anti-atherogenic by reducing the level of platelet-activating factor, thus reducing platelet aggregation [80]. OLE appears to possess the highest anti-atherosclerotic power among all olive polyphenols, mostly resulting from cholesterol regulation. In particular, EVOO polyphenols have been shown to enhance the expression of genes related to cholesterol efflux from cells to HDL in humans [81] and also to promote HDL-dependent cholesterol efflux by increasing HDL size, stability and resistance against oxidation [82]. The anti-atherosclerotic effects of OLE in atherosclerotic rabbits have also been reported to involve a reduction of serum levels of total cholesterol, LDL, HDL, triglycerides, NFκB and several chemokines [83]. In this sense, a potent anti-atherosclerotic power was also reported for tyrosol through the activation of a molecular pathway starting with protein kinase B (PKB)/AKT phosphorylation, Sirt1 expression and deactivation of the transcription factor FOXO3, which upregulates pro-apoptotic genes and eNOS [84]. These data were confirmed by subsequent studies showing that olive oil polyphenols downregulate pro-atherogenic genes in healthy humans in the context of a traditional MD [85] and positively modulate post-prandial vascular function (arterial stiffness) and the inflammatory status (interleukin-8, IL-8, production), two factors of CVD risk in vivo [86]. The latter effects were elicited by repressing the expression of several pro-inflammatory genes, including those encoding the cytokines IL-6 and IL-8 [87], possibly through their redox activity. It must be noted that IL-8 and other pro-inflammatory cytokines have been proposed to play an important role in the development of atherosclerosis [88], and IL-8 circulating levels have been associated with future risk of CVD [89]. Finally, cardioprotection by olive polyphenols has been supported by studies showing their ability to enhance fat oxidation and to optimize cardiac energy metabolism in high-fat-diet rats, as well as to improve myocardial oxidative stress in standard-fed rats [90].
Over the course of the last decade, in addition to the already-quoted effects on the classical risk factors for CVD, several groups have studied the beneficial effects of olive oil and olive leaf extracts on thrombosis-associated factors (primary and secondary hemostasis, platelet aggregation, fibrinolysis), a pathophysiological condition closely related to CVD [91,92]. Finally, pre-treatment with OLE and oleacein of endothelial progenitor cells, which produce neovascularization of ischemic tissue and de novo formation of endothelium in injured arterial walls, increased cell survival and reduced senescent cells and intracellular ROS production, possibly following activation of the Nrf2/heme oxygenase pathway [93].
Overall, these and other data support the notion that olive phenolics can be beneficial for cardiovascular health, suggesting their importance in lowering the risk of CVD.

Table 3. Olive leaf extract: reduction of post-prandial inflammation [86].

Olive Polyphenols, Obesity, Type 2 Diabetes and the Metabolic Syndrome

EVOO polyphenols display a multifaceted activity against metabolic disorders (Table 4). Obesity, an increasingly widespread condition affecting millions of people, mainly in developed countries, is also associated, among others, with an increased risk of CVD. Several olive leaf components, including OLE, HT and others, have been shown to be effective against obesity by dose-dependently suppressing intracellular triglyceride accumulation and the expression of adipogenesis-stimulating factors during adipocyte differentiation [94-96]. Other studies have reported that olive polyphenols can be effective in reducing food intake and fat tissue accumulation by regulating the expression of molecules involved in adipocyte proliferation and thermogenesis at the mitochondrial level [97]. Finally, it should be noted that most of these effects are shared with other plant polyphenols such as resveratrol, epigallocatechin and curcumin. The latter have been reported to reduce body weight, fat mass and triglycerides by lowering adipocyte viability and preadipocyte proliferation, (i) by suppressing adipocyte differentiation and triglyceride accumulation; (ii) by stimulating lipolysis and fatty acid oxidation; and (iii) by reducing obesity-associated inflammation [98].
Diabetes, notably type 2 diabetes (T2DM), is a condition closely associated with obesity and CVD, and these states converge in the so-called metabolic syndrome; the latter also includes non-alcoholic fatty liver disease (NAFLD), a condition whose severity spans from simple triglyceride accumulation in the liver parenchyma (steatosis) to non-alcoholic steatohepatitis (NASH). T2DM and other pathological states associated with deregulation of carbohydrate and lipid metabolism can be positively targeted by olive oil polyphenols. The positive outcome of the administration of OLE and other olive polyphenols against derangement of carbohydrate metabolism is supported by many studies. The latest reports have highlighted that the molecular determinants of these effects are interwoven with those associated with the reduction of obesity and liver steatosis, as well as with cardioprotection (see above) (Figure 3).
The reported anti-diabetic effects comprise: (i) the inhibition of the tendency of amylin to aggregate into amyloid, whose toxic deposits in the pancreatic β-cells are considered to affect cell viability in T2DM [99]; (ii) the reduction of serum glucose and cholesterol levels with restoration of the antioxidant perturbations in rat [100,101] and rabbit [102] models of diabetes; (iii) the modification of the expression of genes implicated, among others, in lipogenesis, thermogenesis and insulin resistance in high-fat-diet mice [97]; (iv) the reduction of the digestion and intestinal absorption of dietary carbohydrates both at the mucosal and at the serosal sides of the intestine of diabetic rats [101], together with the improvement of glucose homeostasis with a reduction of glycated hemoglobin and fasting plasma insulin levels in humans [103,104]; (v) a significant rise in insulin sensitivity and pancreatic β-cell secretory capacity in middle-aged overweight men [104], with a measurable and rapid change in the expression of genes mechanistically related to insulin sensitivity and to the metabolic syndrome [105]; (vi) the reduction of the metabolic activity, cardioprotection and prevention of inflammation and of cytokine-induced oxidative damage of pancreatic β-cells [98,106]; (vii) the improvement of the antioxidant status in healthy elderly people [107] and the protection of insulin-secreting β-cells against H2O2 toxicity by modulating redox homeostasis, with protection of cell physiology against oxidative stress [108]; (viii) the prevention of the inflammatory response and of cytokine-mediated oxidative cell damage, with downregulation of the genes involved in adipocyte differentiation [93]; (ix) the downregulation of Wnt10b inhibitory genes and the upregulation of β-catenin protein expression as well as of key adipogenic and thermogenic genes in high-fat-diet mice [95,109], together with genes involved in galanin-mediated signaling; (x) the upregulation of genes involved in Wnt10b signaling in C57BL/6N mice [110]; and (xi) the increase of signal molecules active in fasting conditions (IL-6, insulin-like growth factor-binding proteins 1 and 2, IGFBP-1 and IGFBP-2), yet in the absence of significant effects on interleukin-8, TNF-α, CRP, lipid profile, liver function, ambulatory blood pressure and thickness of carotid wall layers [102].
Finally, two recent studies have reported that (i) the anti-oxidant and anti-inflammatory properties of the polyphenols in olive leaf extracts attenuate the metabolic, structural and functional modifications in the heart and liver of rats with diet-induced metabolic syndrome [111] and (ii) the administration of an OLE-enriched supplement to the Tsumura, Suzuki obese diabetes (TSOD) mouse model of T2DM reduced hyperglycemia and impaired glucose tolerance and, less evidently, oxidative stress, when the administration was extended in the long term; in this study, OLE supplementation was ineffective in reducing obesity [112]. In other cases, the anti-obesity and anti-steatosis effects of olive polyphenols have been associated with increased lipid metabolism and energy expenditure, as well as with the modulation of glucose homeostasis mentioned above [96].
Olive polyphenols display significant protection against liver disease resulting from altered lipid metabolism. In particular, OLE protects HepG2 and FL83B cells against free fatty acid (FFA)-induced hepatocellular steatosis via reduction of FFA-induced lipogenesis, following lowered extracellular signal-regulated kinase (ERK) activation, reduced expression of genes involved in adipocyte differentiation, and Wnt10b inhibition in hepatocytes [113]. OLE also appears to protect against oxidative stress-mediated liver damage by modulating the expression of genes involved in liver lipogenesis, oxidative stress and the inflammatory response [114], as well as by reversing, in visceral adipose tissue, the downregulation of thermogenic genes involved in uncoupled respiration and mitochondrial biogenesis induced by a high-fat diet [95]. Moreover, olive polyphenols have been reported to downregulate lipid synthesis in primary cultured rat hepatocytes through AMPK phosphorylation, suggesting that a decrease in hepatic lipid metabolism, particularly lipid synthesis, may represent a possible mechanism underlying the reported hypolipidemic effect of these substances [115,116]. Olive oil and its polyphenols, notably OLE, also prevent NAFLD (reviewed in [117]) and its progression to NASH and liver fibrosis in mouse models of NASH, presumably through anti-oxidant activity and reduced lipid accumulation, supporting their potential pharmacological use in NASH prevention and care [118-120]. Finally, other studies carried out in animal models and with cultured cells have shown that one of the mechanisms of cell protection by OLE is the aforementioned potent stimulation of autophagy. Presently, autophagy stimulation is considered an important potential goal in the search for effective therapies against neurodegenerative and dysmetabolic pathologies such as T2DM.
Olive Polyphenols and Amyloid Diseases
Amyloid diseases provide examples of the way plant polyphenols may interfere with specific pathologies (reviewed in [121]). Amyloidoses are a number of sporadic or familial degenerative conditions characterized by the misfolding and aggregation of specific peptides/proteins into intractable polymeric fibrillar assemblies (reviewed in [122]). These aggregates are found as intra- or extracellular deposits that are currently considered among the main factors affecting cell physiology and viability. Amyloid diseases comprise rare pathologies but also widespread diseases such as T2DM, AD and PD [123]. Many plant polyphenols, among them those found in EVOO (Table 5), have been shown to interfere in different ways with the aggregation path, reducing the aggregate load and its cytotoxic effects [123].
The observation that patients with diabetes have an increased risk of developing AD compared to healthy individuals has recently led to the proposal that AD may be associated with brain insulin resistance. Actually, many studies have shown that insulin resistance, increased inflammation and impaired metabolism are key pathological features of both AD and diabetes (reviewed in [124,125]). Emerging evidence underscores the importance for AD development of brain insulin resistance, a key alteration in pre-diabetes and diabetes mellitus. Such a relation has led some authors to describe some AD symptoms as a "type 3" brain diabetes (reviewed in [126]). Actually, insulin and insulin-like growth factors appear to regulate several biological processes at the basis of learning and memory, including energy metabolism, synaptic plasticity and axonal growth. It has also been reported that a hyperinsulinemia-induced condition of insulin resistance results in the activation of glycogen synthase kinase 3β, a key factor in cognitive decline, with ensuing brain injury. Hence, the endogenous impairment of insulin signaling in the brain accounts for important AD abnormalities.
Other factors link diabetes to neurodegeneration, including a shared role of amylin, found aggregated both in the AD brain and in T2DM pancreatic β-cells [127]. OLE and oleocanthal have been shown to interfere with the amyloid aggregation of Aβ, amylin and tau. The latter is a microtubule-associated protein found aggregated in several tauopathies, including AD. The reported data indicate that OLE [128,129] and oleocanthal [130,131] interfere with the aggregation path of these peptides/proteins upon binding to the aggregating molecules, preventing the appearance of toxic species and favoring the formation of non-toxic disordered aggregates. Interestingly, the use of the aggregating Aβ peptide has made it possible to distinguish two different mechanisms by which polyphenols and their glycosides interfere with the amyloid aggregation of this peptide. In fact, amyloid oligomers are remodeled by the aglycones through rapid conversion into large off-pathway aggregates, whereas they are rapidly dissociated into soluble, disaggregated peptide molecules by the glycones [132]. The binding site of OLE on Aβ has also been described [133]. OLE also activates autophagy and reduces the inflammatory response elicited by the accumulation of amyloid aggregates of Aβ and its pyroglutamylated 3-42 derivative in the affected brain areas of mouse models. As a result, the administration of OLE aglycone to TgCRND8 mice, a transgenic model of Aβ deposition, results in reduced plaque load and astrocyte reaction, with a strong improvement of memory and behavioral performance to the levels recorded in wild-type mice, with respect to untreated littermates [47,48]. These data confirm previous findings obtained with an Aβ peptide-expressing transgenic C. elegans model displaying Aβ aggregates in the body-wall muscle cells, which disappeared in the OLE-fed worms [134]. Finally, other studies indicate that oleocanthal enhances amyloid-β clearance from the brain [135] and that OLE reduces Aβ production by promoting the non-amyloidogenic pathway through increased α-secretase cleavage of the amyloid precursor protein [136].
Taken together, these data provide molecular and biological insights strongly supporting the protection by olive polyphenols against age- and lifestyle-associated neurodegeneration (reviewed in [137]), even though data on protection by olive polyphenols against neurodegeneration in humans are lacking.

Table 5. Olive polyphenols and prevention of amyloid diseases.
Epigenetic Effects
Epigenetics is defined as the complex of heritable changes in gene expression that do not result from changes in the genomic base sequence but are associated with post-transcriptional gene regulation by non-coding RNAs and with histone and DNA chemical modifications; the latter include DNA methylation and histone methylation, acetylation and phosphorylation [138]. Non-coding RNAs include microRNAs (miRNAs), small non-coding RNAs that post-transcriptionally modulate gene expression. miRNAs contribute to the control of the expression of both DNA methyltransferases (DNMTs) and histone-modifying enzymes and influence many cellular processes, including the survival of neuronal cells (reviewed in [139]). Epigenetic modifications regulate gene expression in a synergistic and cooperative way by changing chromatin arrangement and DNA accessibility, thus switching on/off a number of genes associated with important physiological and pathological processes (aging and age-related pathologies, including cancer and neurodegeneration; reviewed in [140]). These changes are acquired throughout life, including embryonic and fetal development, and depend on environmental cues such as diet, lifestyle and exposure to toxins. When epigenetic modifications are inherited following cell division, they result in the enduring maintenance of the acquired phenotype; however, they can also occur at any time in the course of life and, as such, may remarkably influence phenotypic outcomes in terms of health, disease or risk of disease [5].
Although numerous compounds have been developed to specifically alter the function of chromatin-modifying enzymes, for example histone deacetylase (HDAC) inhibitors, we are only beginning to understand the epigenetic effects of dietary compounds. Well-known examples of dietary chromatin-modifying compounds include curcumin, the active constituent of turmeric, which has been shown to be an HDAC inhibitor, as well as the red wine phenol resveratrol, which activates Sirt1 [141,142]. Actually, the investigation of epigenome modifications following the administration of plant polyphenols dates back less than one decade. Since then, an increasing number of studies has clearly shown that plant polyphenols, like other nutrients, directly regulate both transcriptional and translational processes by modulating the activity and expression levels of enzymes involved in the chemical modification of histones and DNA (reviewed in [5]). A growing body of evidence suggests that epigenetic changes triggered by dietary nutrients, including plant polyphenols, contribute to preventing some diseases, notably cancer. In particular, plant polyphenols can counteract aging as well as many of its pathological consequences resulting from aberrant epigenetic mechanisms [52,139,143-145]. In the context of neurodegenerative diseases, such epigenetic modifications have been shown to induce effects similar to those provided by CR [146]. Accordingly, the epigenetic mechanisms targeted by dietary polyphenols have become an attractive approach for disease prevention and intervention and provide a rationale for most of the anti-cancer and anti-neurodegeneration power shown by these substances. Epigenetic therapy is an expanding field and is providing clues useful for discovering new drugs, some of which are undergoing clinical investigation, mostly as anti-cancer drugs [147,148]. The latter include plant polyphenols such as genistein and quercetin, exploited for their activity as HDAC and DNMT modulators.
The studies reported in the last decade have been carried out mostly on cancer cells, and the modifications of gene and protein expression profiles underlying many of the effects listed above can be ascribed to recently demonstrated epigenetic modifications elicited by many plant polyphenols; these include resveratrol, curcumin, epigallocatechins, genistein, quercetin and others in the anti-diabetic, anti-aging and anti-neurodegeneration fields [140]. Information about the role of olive polyphenols as epigenome modulators is quite scarce compared with other polyphenols. Recent data show that OLE aglycone, given orally for eight weeks to TgCRND8 mice, a model of Aβ deposition, downregulates HDAC2 [48], an enzyme known to be upregulated in AD [149]. In these mice, the downregulation of HDAC2 resulted in a significant increase in the level of histone acetylation, in particular of H3 at K9 and of H4 at K5 [48]. Histone acetylation has been reported to improve cognitive deficits in animal models of AD, and its induction is considered a promising novel therapeutic strategy against AD [150]. A recent in silico molecular modeling study, combined with known experimental affinities for controls, has identified potential chromatin-modifying compounds from Olea europaea; in particular, HT was highlighted as a potential inhibitor of HDAC6 and of lysine-specific histone demethylase 1 (LSD1), owing to its high affinity for the active site of various chromatin-modifying enzymes [151]. Other recent studies have shown that an olive oil-enriched diet increases global DNA methylation in the mammary gland and in a murine model of breast cancer induced by dimethylbenz(a)anthracene (DMBA) [152]. Finally, EVOO or its phenolic compounds have been reported to modulate the expression of the CNR1 gene, encoding the type 1 cannabinoid receptor, via epigenetic mechanisms, both in rats and in human Caco-2 colon cancer cells [153].
In spite of the limited information presently available on the epigenetic effects of olive polyphenols, the similarities between many effects of different plant polyphenols at the molecular level hold promise that the well-documented epigenetic effects reported for many other plant polyphenols could be largely retrieved also for olive polyphenols. In conclusion, the modulation of epigenetic flaws by natural polyphenols appears to be a promising subject for the discovery of new compounds effective against chronic diseases, even though the present clinical studies have been devoted predominantly to deciphering polyphenol effects in cancer treatment. HDACs and DNMTs are promising targets for the control of human pathologies; their activity is modulated by plant polyphenols, some of which are found in significant amounts in widely used foods that characterize the MD and the Asian diet. However, the data on the epigenetic effects of olive polyphenols are still scarce, and further research is needed to gather the information necessary to propose the possible use of these substances as epigenome modulators in humans.
Epidemiological Studies and Clinical Trials with Olive Oil and Its Polyphenols
During the last decade, the beneficial properties of EVOO and EVOO polyphenols for human health have been assessed in epidemiological studies and clinical trials. A number of these studies, often carried out on a limited number of patients, have been cited in the preceding sections [83,84,87,103,104,107]. In this section, the studies carried out on large cohorts of participants will be reviewed (Table 6).
One of the most cited surveys on protection against neurodegeneration and cognitive decline by olive oil is the "Three-City Study" that enrolled around 7000 elderly subjects. The results showed lower odds of cognitive deficit for those subjects who used olive oil moderately (just for cooking or dressing) or intensively (for cooking and dressing) as opposed to those who never used it [154]. In the same cohort, a lower incidence of stroke in people using higher amounts of olive oil was also observed [155]. The results of this study were confirmed by two multicenter, randomized, controlled trials, the so-called PREvención con DIeta MEDiterránea (PREDIMED) and PREDIMED-NAVARRA studies carried out in Spain on people at high cardiovascular risk. The PREDIMED study was a primary prevention trial originally designed to test the long-term effects of the MD on the incidence of CVD in people with high cardiovascular risk. The cohort was also evaluated for cognitive performance, after adjustment for several potentially interfering factors. It emerged that an intervention with a MD enriched in EVOO (better than with wine or nuts) significantly improved cognition, and that EVOO phenolic content was the main factor responsible for this result [21,156]. The main goal of the study was assessing CVD prevention; in this sense, the trial suggested that EVOO consumption is associated with a reduced risk of CVD and mortality [157,158].
The role played by polyphenols in cardiovascular protection was further confirmed by two sub-studies on patients subjected to a MD high in polyphenols (from nuts or EVOO). Total polyphenol excretion by people adopting a MD rich either in nuts or in EVOO was positively correlated with changes in plasma levels of triglycerides, glucose and nitric oxide (NO); moreover, the statistically significant increase in plasma NO levels was associated with a reduction in systolic and diastolic blood pressure [159]. Accordingly, data from a subsample (n = 990) of the PREDIMED study have shown that the MD significantly decreases LDL oxidation only when it is enriched in EVOO with a medium-high phenolic content [160].
Overall, as outlined by a recent survey, the PREDIMED study showed that, in patients on a MD rich in phenols (from nuts or EVOO), there was a significant improvement of classical and emerging CVD risk factors, including inflammation, oxidative stress, blood pressure, carotid atherosclerosis, insulin resistance, lipoproteins and lipid profiles [161]. Breast cancer incidence was also investigated in the PREDIMED cohort. Thirty-five out of the 4282 women enrolled in the survey displayed confirmed cases of invasive breast cancer, whose risk was reduced by 68% in the EVOO group as compared with the low-fat group, even after accounting for factors such as age, body mass index, exercise and drinking habits. The risk of being affected by invasive breast cancer was highest for women who were instructed to eat less fat (2.9 cases per 1000 person-years). This value compares with a diagnosis rate of 1.8 cases per 1000 person-years for women on the MD supplemented with nuts and a rate of 1.1 cases per 1000 person-years for women on the MD with increased EVOO, suggesting a better protection against breast cancer of EVOO over nuts [162].
More recently, a cross-sectional population sampling study was carried out on the Spanish population. The study selected, in 100 health centers, 4572 individuals aged >18 years, representative of the Spanish population. Clinical, demographic and lifestyle parameters were considered, together with physiological parameters (body weight and height, body mass index, waist and hip measurements, blood pressure and oral glucose tolerance). The participants were analyzed according to whether they were consumers of olive oil or of sunflower oil. The main outcome of the study showed that the consumption of olive oil was associated with significant beneficial effects on several cardiovascular risk factors, particularly in the presence of obesity and a sedentary lifestyle, and with a significant improvement of impaired glucose tolerance and insulin resistance [163]. Another study, aimed at examining the association between olive oil intake and the incidence of T2DM, was carried out in the USA by following 59,930 women aged 35-65 years from the Nurses' Health Study (NHS) and 85,157 women aged 26-45 years from the NHS II, free of diabetes, CVD and cancer at baseline. The diet was monitored and validated by food-frequency questionnaires. Incident T2DM cases were identified and confirmed by questionnaires. At the end of the 22-year follow-up, 5738 and 3914 cases of T2DM were documented in the NHS and NHS II, respectively. The outcome compared people taking at least one tablespoon of olive oil daily with those who never consumed olive oil. The results suggested that a higher olive oil intake was associated with a modestly lower risk of T2DM; conversely, the risk was raised in people who substituted olive oil with other lipids [164]. Finally, a very recent study was conducted with 25 healthy subjects randomly allocated, in a cross-over design, to a Mediterranean-type meal supplemented or not with 10 g EVOO/day, or with the same amount of corn oil. The lipid profile and glycemic parameters related to glucose tolerance, determined two hours after the meal, showed that EVOO improves post-prandial glucose and LDL-cholesterol, confirming the anti-atherosclerotic power of the MD [165].
Overall, these surveys support the notion that natural EVOO phenols might counteract age-associated cognitive decline, CVD and cancer, particularly breast cancer. Accordingly, many clinical trials have been conducted employing olive oil or polyphenol-enriched olive extracts, but the results are still scarce, even from those studies that were completed, except for trials aimed at investigating the effect of EVOO and its polyphenols against several conditions associated with CVD (oxidative stress, inflammation, haemostasis, endothelial function and blood pressure). In a small crossover trial, carried out in the context of the EUROLIVE (Effect of Olive Oil Consumption on Oxidative Damage in European Populations) study (trial number: ISRCTN09220811), 200 participants were subjected to three rounds of daily administration of 25 mL of three olive oils with a different phenolic content (low, 2.7 mg/kg of olive oil; medium, 164 mg/kg; medium-high, 366 mg/kg) for three weeks, preceded by a two-week washout period. The results showed a linear decrease of the total cholesterol/HDL-cholesterol ratio and of oxidative stress markers with the increase of the phenolic content of the olive oil [166]. Protection against atherosclerosis was confirmed by a sub-study of the same trial with 25 healthy volunteers fed for three weeks with 25 mL/day of uncooked olive oil with a medium-high polyphenol content (366 mg/kg) or a low polyphenol content (2.7 mg/kg) [167].
The results provided first-level evidence that the phenolic content is the main factor responsible for the health benefits of EVOO. This also correlates with the results coming from a randomized, controlled clinical trial involving pre-hypertensive patients fed with 30 mL of two similar olive oils, a functional EVOO enriched with its own phenolic compounds (961 mg/kg) or a medium-polyphenol-content EVOO (289 mg/kg). Data from this study support a significant upregulation of genes regulating cell-to-HDL cholesterol efflux [81]. That olive polyphenols increase human HDL functionality has recently been confirmed by a subsample of the EUROLIVE study, in which it was shown that the consumption of olive oil with a high phenolic content increased the cholesterol efflux from macrophages mediated by HDL [82].
Most of the above studies were aimed at assessing the effects of EVOO and its polyphenols after a relatively long-term ingestion; studies were also carried out to determine the effects of an acute administration of olive oil, particularly regarding protection against post-prandial hyperlipidaemia and the associated inflammation. A study conducted on 20 obese subjects who were given muffins made with different oils previously subjected to 20 heating cycles showed that oils rich in phenols, whether natural (EVOO) or artificially added, reduced post-prandial inflammation; this outcome was determined by the activation of nuclear NFκB, the cytosolic levels of IκB-α, an NFκB inhibitor, the mRNA levels of p65, IKKβ and IKKα (NFκB subunits and activators), and the levels of lipopolysaccharide (LPS) and other pro-inflammatory molecules (TNF-α, IL-1β, IL-6, migration inhibiting factor (MIF), JNK); seed oil (sunflower) failed to produce similar results [168]. The protection against post-prandial oxidative stress was confirmed by other randomized, cross-over, controlled human studies showing that the serum antioxidant capacity was increased after EVOO ingestion at the same single doses (40 or 50 mL) at which oxidative stress normally occurs if the ingested oil is not EVOO [169]; these studies also reported a lower lipid oxidative damage in subjects fed with an olive oil with a high, rather than low, phenolic content [170,171]. Finally, a comprehensive review synthetically reports the results of randomized, controlled trials showing the efficacy of EVOO in lowering numerous inflammation markers relevant in the context of several pathologies, including CVD, such as thromboxane 2 (TBX2), leukotriene B4 (LTB4), intercellular adhesion molecule 1 (ICAM-1), vascular cell adhesion molecule 1 (VCAM-1), TNF-α, IL-1β, IL-6, MIF, JNK, LPS, NFκB and its activators, high-sensitivity C-reactive protein (hs-CRP) and asymmetric dimethylarginine (ADMA) [172].

Table 6. EVOO polyphenols: reduction of triglyceride and glucose plasma levels and increase of nitric oxide, with lowered blood pressure [158,159]; reduced oxidation of LDL [160]; general reduction of CVD risk factors [161]; decrease of the total cholesterol/HDL-cholesterol ratio and of oxidative stress markers, and protection against atherosclerosis [81,82,165,167]; reduction of post-prandial inflammation and oxidative stress [168-171].
Bioavailability of Olive Polyphenols
A problem associated with the use of olive and other plant polyphenols is their reduced bioavailability, due both to incomplete intestinal absorption and to rapid biotransformation favoring urinary excretion. Moreover, in the case of the brain, orally ingested polyphenols must cross an additional barrier, the blood-brain barrier, besides that represented by the enterocytes. With few exceptions, only polyphenol aglycones can be absorbed in the small intestine [173], and deglycosylation by β-glucosidase in small intestinal epithelial cells has been singled out as a crucial step in the absorption and ensuing metabolism of dietary polyphenols, notably the glycosylated forms [174]. Once released from the enterocyte into the lymph and therefrom into the blood stream, most polyphenols undergo substantial biotransformations, including methylation, glucuronidation, sulphation and thiol conjugation [175], that alter their chemical properties, favor their excretion and, possibly, provide them with new biological activities [176]. Moreover, recent research has highlighted the importance for polyphenol bioavailability of the colonic microflora, which can extensively metabolize and chemically modify polyphenols [177]. However, recent studies on this theme, carried out both in rats and in humans, have shown that these compounds are indeed absorbed in discrete amounts from the intestine and rapidly distributed through the blood flow to the whole organism, including the brain. In particular, recent data clearly indicate that, similarly to other polyphenols, the glycosylated and (preferentially) the aglycone forms of OLE are indeed absorbed and found in the plasma after ingestion, both in rats [178,179] and in humans [179-183]. From the plasma they are, at least in part, distributed to different organs and tissues, including the brain, where they, or some of their derivatives, have been found [179]. Finally, a recent metabolite-profiling study with cultured breast cancer cells treated with an olive leaf extract has shown that OLE is the main polyphenol found inside the cells, suggesting its ability to cross the plasma membrane in this cell line [184]. As a confirmation, a very recent study has shown that OLE interacts with synthetic phospholipid membranes; the extent of the interaction depends on the membrane lipid composition and is favored by the presence of anionic lipids, suggesting specific interactions [185]. These data agree with those reported in a recent study showing that a number of polyphenols (those from the olive tree were not included) are able to protect the mitochondrial membrane from permeabilization by amyloid oligomers, suggesting some interference with the formation of the oligomer-membrane complex [186]. Finally, in a recent study we have shown that the OLE metabolite HT, arising mainly from acid hydrolysis in the stomach, is found in the brain of TgCRND8 mice fed with OLE for eight weeks [48]; this finding agrees with previous data reported in rats [178], supporting the ability of some OLE derivatives, including HT, to cross the blood-brain barrier, even though its generation from OLE after the latter has crossed the blood-brain barrier cannot be excluded.
Accurate studies on the effective daily dose of olive polyphenols to be administered to humans to obtain significant protection are still lacking. What must be taken into account is that, apparently, the amount of OLE and other plant polyphenols present in foods is not adequate to ensure daily doses able to elicit short-term acute effects. However, clinical and experimental evidence suggests that the continuous consumption of foods containing moderate amounts of these molecules can be effective in the long term, also due to their possible accumulation as lipophilic molecules, providing a continuous low-intensity stimulation of cell defenses against oxidative stress, amyloid deposition and other alterations underlying age-associated pathologies. Nevertheless, the low daily intake of olive oil polyphenols with a typical MD suggests the value of supplementation with polyphenol-enriched olive leaf extracts, which can intensify, in the short term, the beneficial effects of these molecules.
Conclusions
At the end of our itinerary through the nutraceutical properties of EVOO and its polyphenols, we should recognize that research in this field is proliferating actively; hence, it is becoming mandatory to accommodate the plethora of biochemical, cellular and physiological effects of EVOO polyphenols in a coherent picture. In fact, looking at the multitude of cellular effects elicited by these compounds, which often look similar to those elicited by other plant polyphenols, one could have the uncomfortable sensation that their action is rather non-specific. Moreover, in some cases, contradictory results have been reported; these can possibly result, among other causes, from differences in the study design and administered doses of olive polyphenols in the case of clinical trials, the type of animal (strain, model, wild-type or transgenic) in the case of animal studies, or the type of investigated cells and cell culture conditions. However, we do believe that some common traits are emerging that will pave the way to mechanistically defining the nutraceutical activity of EVOO and olive polyphenols. These traits can be summarized as follows.
First, plant, and notably olive, polyphenols act mainly as signaling molecules. Although the direct target of interaction must still be identified, a calcium-mediated activation of AMPK via CaMKKβ has been reported for the OLE aglycone [46]; oleocanthal was also found to activate AMPK [187]. Aging can be considered a phenomenon driven by over-activation of the nutrient-sensor mTOR gerogene following a decline of, or a lack of responsiveness to, the activation signals of AMPK, an energy-sensing protein and a critical mTOR gerosuppressor [69]. AMPK is a sort of node in the intricate signaling network of the cell, where several physiological and pathological pathways (autophagy, anabolism/catabolism, apoptosis, cell proliferation, inflammation, neurodegeneration) intersect [188][189][190][191][192]. Accordingly, AMPK activation by olive polyphenols is a possible explanation for most of the pleiotropic activities of these substances. In general, specific receptor-polyphenol interactions have not been clearly identified; rather, it is believed that these compounds interact freely with the cell membrane bilayer [185,186], modifying its permeability properties, notably to calcium [46]. However, a receptor, the TRPA1 receptor ion channel, spatially restricted to the throat, has recently been identified as possibly responsible for the pungency of oleocanthal sensed in the throat [193].
Second, EVOO polyphenols directly participate in the redox balance of the cell. They perform this role not simply as antioxidants but, in certain circumstances, also as mild pro-oxidants, by up-regulating the antioxidant defenses of the cell, thus acting as hormetic factors [69]. This has been shown both for HT, which, in the presence of peroxidases, can undergo a redox cycling that generates superoxide, an inducer of Mn-SOD expression [194], and for tyrosol, which increases C. elegans lifespan also by activating the heat shock response [195]. The role of EVOO phenols as "xenohormetic agents" has been analyzed by Menendez and coworkers in their transcriptome analysis of cells exposed to crude EVOO polyphenol extracts highly enriched in the secoiridoids OLE and decarboxymethyl OLE; their data confirm the involvement, among others, of the activation of anti-aging/cellular stress-like genes, including those for ER stress, the unfolded protein response, Sirt1 and Nrf2 signaling [69].
Third, it is generally believed that plant polyphenols can directly interact with other molecules or molecular complexes through aromatic stacking, hydrophobic interaction, or chemical crosslinking. This mechanism seems to underlie the inhibition of toxic amyloid aggregation by EVOO phenols. Oleocanthal usually promotes the conversion of monomers and oligomers of amyloidogenic proteins/peptides (such as Tau and Aβ) into high molecular weight aggregates by chemical crosslinking, thanks to its pair of aldehyde groups [136,196,197]. OLE aglycone differs from oleocanthal by the absence of aldehyde groups and the presence of a methoxycarbonyl group; therefore, OLE is not a chemical crosslinker, and its anti-amyloidogenic activity must occur through a different mechanism [130]. Structure comparison analysis suggests that the number of phenolic rings is a key determinant of a polyphenol's efficacy in remodeling amyloid oligomers; in particular, two aromatic rings, at least one with a hydroxyl group, appear to be the minimal structural requirement needed by phenolic aglycones to remodel Aβ oligomers by interfering with aromatic group stacking [132]. Actually, the main EVOO polyphenols do not fit this description, since they have just one phenolic ring. Nonetheless, it has been clearly shown that OLE physically interacts with the Aβ peptide [131] and that the Phe4-Glu11 sequence, together with the Leu17-Lys28 hydrophobic region, of Aβ40/42 is responsible for the non-covalent interaction that occurs in the 17-21 region of the peptide (Leu-Val-Phe-Phe-Ala), which includes two Phe residues [197,198]. Interestingly, the Aβ sequence critical for amyloid fibrillization overlaps these OLE-binding regions. This finding supports the hypothesis that aromatic stacking or, more generally, hydrophobic interaction is the molecular mechanism underlying the inhibition of amyloid aggregation by this polyphenol [197]. Hydrophobic interaction is probably also involved in the incorporation of olive polyphenols into LDL [199].
Fourth, in spite of the reduced bioavailability due to incomplete intestinal absorption, microbiota metabolism and biotransformation in tissues, plant, and notably olive, polyphenols are indeed distributed throughout the organism and have been found in tissues, including the brain, further supporting their ability to positively interfere with pathological states and/or their prodromal conditions. Further efforts are needed to mechanistically define the biochemical and biological activities of EVOO and olive polyphenols, as well as the pharmacokinetics and pharmacodynamics underlying their effective doses in humans and the dose-dependence of their effects. The structure-activity relationship of olive polyphenols must still be deciphered as well; the latter could be the basis for engineering new drugs starting from the molecular scaffolds of these substances. Finally, more clinical trials are needed to overcome the limits of those currently reported and some of their conflicting results. However, we do believe that the four points that we have extracted from the large body of scientific literature on OLE, HT and oleocanthal could be useful landmarks with which to orient future research and organize present and future data into a coherent frame.
"year": 2016,
"sha1": "d34c25f2b148320ed58ae919c017f066a7141f71",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/6/843/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d34c25f2b148320ed58ae919c017f066a7141f71",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Amnioinfusion versus Usual Care in Women with Prelabor Rupture of Membranes in Midtrimester: A Systematic Review and Meta-Analysis of Short- and Long-Term Outcomes
Introduction: Midtrimester prelabor rupture of membranes (PROM) between 16 and 24 weeks of gestational age is a major obstetric complication with high rates of perinatal morbidity and mortality. Amnioinfusion has been proposed in women with midtrimester PROM to target oligohydramnios and subsequently enhance pulmonary development and perinatal outcomes. Material and Methods: The purpose of this study was to perform a systematic review and meta-analysis including all randomized clinical trials investigating amnioinfusion versus no intervention in women with PROM between 16+0 and 24+0 weeks of gestational age. The databases Central, Embase, Medline, and ClinicalTrials.gov, as well as the references of identified articles, were searched from database inception to December 2021. The primary outcome was perinatal mortality. Secondary outcomes included neonatal, maternal, and long-term developmental outcomes as defined in the core outcome set for preterm birth studies. Summary measures were reported as pooled relative risk (RR) or mean difference with corresponding 95% confidence interval (CI). Results: Two studies (112 patients, 56 in the amnioinfusion group and 56 in the no intervention group) were included in this review. Pooled perinatal mortality was 66.1% (37/56) in the amnioinfusion group compared with 71.4% (40/56) in the no intervention group (RR 0.92, 95% CI: 0.72–1.19). Other neonatal and maternal core outcomes were similar in both groups, although, due to the relatively small number of events and wide CIs, there is a possibility that amnioinfusion is associated with clinically important benefits or harms. Long-term healthy survival was seen in 35.7% (10/28) of children assessed for follow-up and treated with amnioinfusion versus 28.6% (8/28) after no intervention (RR 1.30, 95% CI: 0.47–3.60, “best case scenario”). Conclusions: Based on these findings, the benefits of amnioinfusion for midtrimester PROM <24 weeks of gestational age are unproven, and the potential harms remain undetermined.
Introduction
Midtrimester prelabor rupture of membranes (PROM, between 16+0 and 24+0 weeks of gestational age) complicates 0.4-0.7% of all pregnancies. After midtrimester PROM, an immature or extremely premature delivery can follow. Ongoing pregnancies are challenged by oligohydramnios (amniotic fluid single deepest pool <2 cm) and intrauterine infections, with subsequent maternal, fetal, or neonatal complications. Live-born neonates are at risk of pulmonary hypoplasia as a result of underdevelopment of the alveolar system due to (prolonged) PROM [1][2][3]. One of the proposed interventions for this pregnancy complication is serial transabdominal amnioinfusion. Amnioinfusion could restore residual amniotic fluid and therefore might positively contribute to pulmonary development and reduce the rate of pulmonary hypoplasia and other pulmonary morbidities [4]. Furthermore, it may prevent compression of the umbilical cord, prevent skeletal deformities, and increase time to delivery [4]. Recently, two randomized controlled trials (RCTs) investigated the effectiveness of amnioinfusion compared to no intervention in pregnancies complicated by PROM in the midtrimester period [5,6]. No differences were seen in perinatal mortality or in other neonatal or maternal outcomes. These trials additionally assessed long-term neurodevelopmental outcomes and respiratory function as part of study follow-up [6,7]. They showed that amnioinfusion, compared to no intervention, did not improve long-term outcomes up to 5 years of corrected age. However, both trials concluded that they were underpowered to evaluate a smaller, yet clinically relevant, difference in outcomes after amnioinfusion compared to no intervention, and that larger trials are needed. The aim of this study was to perform a systematic review and meta-analysis to evaluate perinatal mortality and other neonatal, maternal, and long-term outcomes in all RCTs investigating amnioinfusion versus no intervention in women with PROM <24 weeks of gestational age.
Study Selection
The reporting of this systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement [8]. The review was registered with the PROSPERO International Prospective Register of Systematic Reviews (#CRD42018107802). An electronic search was performed, assisted by a medical librarian, in Central, Embase, and Medline from inception to November 2020 to identify published RCTs eligible for inclusion. To identify ongoing studies, clinical trial registries were searched. Searches contained the following keywords: preterm prelabor rupture of membranes, midtrimester rupture of membranes, second-trimester rupture of membranes, amnioinfusion, and randomized clinical trial (online suppl. Table 1; see www.karger.com/doi/10.1159/000526020 for all online suppl. material). No restrictions on language, date of publication, or geographic location were applied. Two review authors (A.d.R. and S.B.) independently assessed all potentially eligible studies. Any disagreements were resolved by discussion with a third author (E.P.). Bibliographies of eligible studies and identified review articles were also searched to identify additional publications.
Eligibility Criteria
RCTs were included if they randomized women with confirmed midtrimester PROM (i.e., between 16+0 and 23+6 weeks) to transabdominal amnioinfusion (i.e., the intervention group) or to no intervention or care as usual (i.e., the control group). Amnioinfusion was defined as serial transabdominal infusion of fluid. In the control group, no intervention was defined as expectant management or care as usual, including the option to administer antibiotics, tocolysis, or steroids as defined per local protocol. To reduce the possibility of bias, quasi-random study designs were excluded. Trials including women with signs of intrauterine infection at onset of midtrimester or premature PROM, pregnancy complications (e.g., hypertension, preeclampsia, or HELLP syndrome), an obstetric indication for immediate delivery (signs of fetal distress, abruption, cord prolapse, or advanced labor), or evidence of a major, confirmed fetal abnormality were excluded.
Risk of Bias and Quality Assessment
The risk of bias in the included RCTs was assessed using the Cochrane Collaboration's tool for assessing risk of bias (Fig. 2; online suppl. Table 3) [9]. Review authors' judgments were categorized as "low risk," "high risk," or "unclear risk" of bias. The risk of bias assessment was done by two different reviewers (A.d.R. and S.B.). All conflicts were resolved through discussion or consultation with a third author (E.P.).
Outcomes
The prespecified primary outcome was perinatal mortality (defined as stillbirth, intrapartum death, or neonatal death within 28 days postpartum). Neonatal and maternal outcomes were prespecified and consistent with the core outcome set for preterm birth studies [10]. Additional neonatal and maternal outcomes included time of latency (time from PROM to birth), being a "short-term healthy survivor" (as defined by trials), placental abruption, antepartum hemorrhage, reason for delivery, mode of delivery, and umbilical cord prolapse. Definitions of outcomes were as reported by each trial. Furthermore, long-term outcomes were assessed, including long-term respiratory and neurodevelopmental outcome, and being an overall "long-term healthy survivor" (as defined by trials: long-term development with no respiratory problems and no neurodevelopmental delay).

Statistical Analysis

Pooled relative risks (RRs) or mean differences with corresponding 95% confidence intervals (CIs) were calculated and reported. Meta-analysis was performed using a fixed-effect model, considering the low number of included studies and assuming that the included studies have comparable trial protocols and estimate the same underlying treatment effect (the effect of amnioinfusion on perinatal outcomes). Heterogeneity was measured using I-squared (Higgins I²). If substantial statistical heterogeneity was detected (defined as >80%), data were not combined in meta-analysis but reported separately. Potential publication bias was planned to be assessed with Begg's and Egger's tests; reporting bias (publication bias) was planned to be investigated if there were >10 studies in the meta-analysis. A p value <0.05 was considered statistically significant.
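As an illustration of these summary measures, the short sketch below (ours, not from the original paper) computes a relative risk with its 95% CI from the standard error of log(RR), together with Higgins I². Applied to the pooled perinatal mortality counts reported in the Results (37/56 vs. 40/56), it reproduces the reported RR 0.92 (95% CI: 0.72-1.19); note that this collapses both trials into a single stratum, whereas the actual fixed-effect meta-analysis pools per-trial estimates and can differ slightly.

```python
from math import exp, log, sqrt

def relative_risk(events_t, n_t, events_c, n_c, z=1.96):
    """Relative risk and 95% CI from the standard error of log(RR)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

def i_squared(q, df):
    """Higgins I^2 (%) from Cochran's Q and its degrees of freedom."""
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Perinatal mortality, amnioinfusion vs. no intervention, both trials pooled
print(relative_risk(37, 56, 40, 56))  # -> (0.925, 0.720, 1.188)
```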
GRADE Assessment
The quality of evidence was assessed using the GRADE approach for the following outcomes: perinatal mortality and the overall chance of being a long-term "healthy survivor." The GRADEpro Guideline Development Tool was used to create a Summary of Findings table. The quality of evidence could be graded from "high quality" to "moderate quality" or "low or very low quality," depending on assessment of the trials' risks of bias, the indirectness of evidence, heterogeneity or inconsistency, imprecision of pooled effect estimates, and potential publication bias.

Results

Figure 1 shows the flow diagram (Preferred Reporting Items for Systematic Reviews and Meta-Analyses template). The literature search identified 84 unique articles, of which three titles met the inclusion criteria (shown in Fig. 1). One study was excluded because the trial was never executed [11], leaving two studies eligible for inclusion in this systematic review and meta-analysis (online suppl. Table 1 shows the search strategy; online suppl. Table 2 shows characteristics of excluded studies).
Characteristics of Included Trials
Included trials randomized patients between 16+0 and 23+6 weeks of gestational age and were carried out in The Netherlands and the UK between 2002 and 2016 [5,6]. Details of included trials are listed in Table 1 (the study flow diagram is shown in online suppl. material, Fig. 8); the key protocol elements of Table 1 are summarized below.

Clinical management. PPROMEXIL-III: both groups received a single course of oral erythromycin (250 mg four times per day for 10 days); administration of antenatal corticosteroids could be considered from 23+5 weeks. If no delivery had occurred after 2 weeks, a second course of corticosteroids was allowed when signs of preterm birth were apparent, according to local protocol. Hospital admission for rest was recommended after 24 weeks of gestation, but was not mandatory. AMIPROM: both groups received a single course of oral erythromycin (250 mg four times per day for 10 days) and antenatal corticosteroids at 26+0 weeks as a matter of routine prophylaxis; earlier antenatal corticosteroids (between 23+0 and 25+6 weeks) were given at clinicians' discretion. Hospital admission for rest was recommended between 26+0 and 30+0 weeks of gestation but was not mandatory. In both trials, tocolysis was not required for amnioinfusion.

Timing of randomization. PPROMEXIL-III: after ≥3 and <21 days of PROM, with oligohydramnios. AMIPROM: after ≥10 days of PROM (e).

Timing of procedure. PPROMEXIL-III: as soon as possible after randomization, and not later than 1 week after randomization. AMIPROM: as soon as possible after randomization.

Methods of procedure. PPROMEXIL-III: manual injection of Ringer's lactate under continuous ultrasound monitoring; the volume injected was calculated by multiplying the gestational age in weeks by 10 mL. Participants were seen twice weekly, and amnioinfusion was repeated weekly if the single deepest pocket of amniotic fluid measured <20 mm; this procedure was repeated until 28 weeks of gestation. AMIPROM: manual injection of Hartmann's solution or normal saline under continuous ultrasound monitoring; the volume injected was calculated by multiplying the gestational age in weeks by 10 mL. Participants were seen weekly, and amnioinfusion was repeated if the single deepest pocket of amniotic fluid measured <20 mm; this procedure was repeated until 34 weeks of gestation.

Long-term follow-up (AMIPROM). Assessment of respiratory problems at 6, 12, and 18 months using a validated respiratory questionnaire and whole body plethysmography; the outcome "healthy survivor" was defined as surviving without long-term respiratory problems or neurodevelopmental delay.

Table 1 footnotes. Study data are presented as number in the amnioinfusion group versus number in the expectant management group. PROM, prelabor rupture of membranes; mL, milliliters; mm, millimeters; cm, centimeters. (a) Two women were excluded after randomization because of termination of pregnancy for a lethal anomaly. (b) Trial conducted in six tertiary centers with neonatal intensive care unit (NICU) facilities in The Netherlands. (c) Trial conducted in four UK fetal medicine units: Liverpool Women's NHS Trust; St. Mary's Hospital, Manchester; Birmingham Women's NHS Foundation Trust; Wirral University Hospitals Trust. (d) Randomization was stratified for pregnancies in which the membranes ruptured between 16+0 and 19+6 weeks of gestation and those in which rupture occurred between 20+0 and 23+6 weeks, to avoid selection bias. (e) Women were eligible for randomization irrespective of the maximum pool depth of amniotic fluid measured by ultrasound. (f) Including obstetric and fetal complications necessitating termination of the pregnancy (hypertension, preeclampsia, or HELLP syndrome) and major fetal structural anomalies visible on ultrasound that were thought to compromise perinatal survival. (g) Including obstetric indications to terminate pregnancy (fetal bradycardia, abruption, cord prolapse, or advanced labor >5 cm dilatation) or a confirmed fetal abnormality.

The PPROMEXIL-III trial (Preterm Prelabor Rupture of Membranes: Expectant management or Amnioinfusion) randomized women with midtrimester PROM, between three and 21 days after diagnosis, and oligohydramnios (single deepest pocket <20 mm). The AMIPROM trial (Amnioinfusion in Preterm Premature Rupture of Membranes) randomized women with midtrimester PROM at least 10 days after the diagnosis of PROM and regardless of amniotic fluid level. In both studies, patients received a single course of antibiotics at hospital admission. Administration of antenatal corticosteroids was considered according to local protocol. Administration of tocolysis was not required for amnioinfusion. After evaluating short-term outcomes, both trials also assessed long-term neurodevelopmental and respiratory outcomes in survivors. The PPROMEXIL-III trial assessed neurodevelopment between 2 and 5 years of age with the Bayley Scales, third edition (Bayley-III), or the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III), and respiratory problems using respiratory and general health questionnaires. The AMIPROM trial assessed development at 2 years of corrected gestational age using the Bayley Scales of Infant Development, second edition (Bayley-II), and respiratory function using validated respiratory questionnaires and whole body plethysmography at the age of 18 months. Baseline characteristics of participants in the included trials are shown in online suppl. material Table 4. Mean gestational age at PROM was 19.0 weeks in the amnioinfusion group versus 18.9 weeks in the no intervention group; gestational age at randomization was 20.9 weeks versus 20.5 weeks, respectively. A positive vaginal culture for group B Streptococcus was seen in 12.5% of women randomized to amnioinfusion and in 8% randomized to expectant management. Nearly 90% of women in both treatment groups received antenatal maternal antibiotics (90.9% vs. 89.3%), and antenatal corticosteroids were administered in almost 40% (40% in the amnioinfusion group vs. 41.1% in the no intervention group).

Risk of Bias

Figure 2 and online suppl. material, Table 3 show the risk of bias. None of the studies were double blinded; thus, all studies were judged to be at high risk of performance bias as assessed by the Cochrane Collaboration's tool. Publication bias was not assessed since fewer than 10 publications with the primary outcome were included.
Primary Outcome
No differences between groups were seen in the prespecified primary outcome of perinatal mortality: 37/56 (66.1%) pregnancies in the amnioinfusion group versus 40/56 (71.4%) pregnancies in the no intervention group (RR 0.92, 95% CI: 0.72-1.19; Table 2; Fig. 3; intention-to-treat analysis).

Table 2 footnotes. (e) Outcome measured in all live-born neonates; PPROMEXIL-III trial: amnioinfusion n = 15, expectant management n = 13. (f) Outcome measured with fetal deaths omitted; AMIPROM trial: amnioinfusion n = 23, expectant management n = 17. (g) Outcome measured in all neonates alive >7 days postpartum. (h) Outcome measured in all neonates alive >28 days postpartum. (i) Diagnostic criteria for pulmonary hypoplasia were unspecified; in the PPROMEXIL-III trial, numbers are neonatal respiratory morbidity associated with pulmonary hypoplasia. (j) In the PPROMEXIL-III trial, pulmonary hypoplasia was not reported. (k) In the AMIPROM trial, pulmonary hypoplasia was diagnosed in 5/14 (35.7%) cases of neonatal death in the amnioinfusion group versus 2/8 (20.0%) cases in the no intervention group. (l) Periventricular leukomalacia > grade 1 (classified by De Vries et al.). (m) Periventricular leukomalacia at any grade. (n) In PPROMEXIL-III, defined as the presence of contractures. (o) Healthy short-term survivor in the PPROMEXIL-III trial was defined as a neonate surviving without composite neonatal morbidity, such as PPHN, pneumothorax, CLD, NEC, PVL, IVH, and/or neonatal sepsis; in AMIPROM, it was defined as a neonate surviving without pneumothorax, CLD, NEC, PVL, IVH, neonatal sepsis, treated seizures, treated retinopathy, and/or shunt. (p) Amnioinfusion was considered successful if the single deepest pocket remained >20 mm for ≥48 h after the procedure. (q) Outcome measured in all neonatal deaths.
Secondary Outcomes
In the PPROMEXIL-III trial, birth of the live-born neonates occurred at a median gestational age of 27.0 weeks in the amnioinfusion group and 27.4 weeks in the expectant management group. In the AMIPROM trial, women delivered at a mean of 28.5 weeks and 29.8 weeks, respectively. For all prespecified secondary neonatal outcomes, no differences were seen between the treatment groups (Table 2). Pulmonary hypoplasia was diagnosed in the AMIPROM trial in 5/14 (35.7%) cases of neonatal death in the amnioinfusion group versus 2/8 (20.0%) cases in the no intervention group (online suppl. material Fig. 3) [12]. Pneumothorax was observed in 6/38 (15.8%) live-born neonates in the amnioinfusion group versus 9/30 (30.0%) live-born neonates in the no intervention group (RR: 0.54, 95% CI: 0.22-1.34; Table 2; online suppl. Fig. 4). Persistent pulmonary hypertension of the neonate (PPHN) was measured in the PPROMEXIL-III trial, showing PPHN in 40% of live-born neonates following amnioinfusion and in 69.2% of live-born neonates following no intervention [5]. The outcome "short-term healthy survivor" occurred slightly more often in the amnioinfusion group, although no difference was seen between the groups (20.9% vs. 12.2% of children, RR: 1.69, 95% CI: 0.62-4.62; Table 2; online suppl. Fig. 5). For maternal outcomes, no differences were seen between the treatment groups. Substantial statistical heterogeneity was measured between studies for onset of labor and vaginal mode of delivery; the high level of heterogeneity observed for mode of delivery may be due to the low number of cesarean sections performed in the PPROMEXIL-III trial (Fig. 6). In both groups, one case of maternal sepsis was seen, and no maternal deaths occurred (Table 2). As for the safety of the intervention, the PPROMEXIL-III trial reported six minor maternal complications after the procedure of amnioinfusion (6 complications in 81 procedures [7%]; the reported complications were pain during or after the procedure, vaginal bleeding post-intervention, and a small amount of fluid injected into the myometrium). Aggregate data for unreported outcomes were not collected; only published data were included. Outcomes are complete-case analyses for which no data are missing, as no validated methods are available to handle missing data.

Long-Term Child Outcomes

A breakdown of the percentages of long-term neurodevelopmental delay is shown in the online supplementary material, Table 5. Long-term respiratory problems assessed by parental questionnaires showed no differences between treatment groups (Table 3; online suppl. Table 5). Both the Follow-up PPROMEXIL-III and the AMIPROM trial reported on the outcome of long-term survival without neurodevelopmental delay or respiratory problems (defined as "long-term healthy survivor"). In total, ten children treated with amnioinfusion could be classified as healthy survivors compared to eight children in the no intervention group, showing a pooled RR of 1.30 (95% CI: 0.47-3.60) (reported best case scenario, defined as "all children lost to follow-up were healthy"; Table 3; online suppl. Table 5 and Fig. 7, 8).

Table 3 footnotes. Study data are presented as number in the amnioinfusion group versus number in the expectant management group with percentages, as mean (SD), or median [IQR], unless stated otherwise. SD, standard deviation; IQR, interquartile range; RR, risk ratio. (a) Respiratory questionnaires of the Follow-up PPROMEXIL-III study were obtained at 2 years of corrected age and those of AMIPROM at 18 months of corrected age. (b) Respiratory symptoms include respiratory symptoms occurring at least once a week and interfering with daily activities (i.e., not able to attend school or not able to play) in the past 4 weeks. (c) In the respiratory questionnaire of the AMIPROM trial, the median (IQR) daytime symptoms score was 6.5 (2-17) versus 6 (4-21). (d) Total of all children assessed for long-term follow-up examinations; percentages represent number of children/number of children assessed for follow-up. (e) Defined as visits to a pediatric pulmonologist from birth until current age. (f) Respiratory questionnaires of three children were not returned and were thus excluded from analysis. (g) Defined as attendance at hospital clinics for chest problems. (h) Antiasthmatic medication taken at least once per week from birth until current age. (i) Medicines taken as treatment for chest symptoms for up to 1 week at any one time; all medicines were inhalers for asthma. (j) Neurodevelopmental delay in the Follow-up PPROMEXIL-III study was assessed by the BSID-III, with outcomes in two subscales (CCS and MCS), at a corrected age of <42 months, and by the WPPSI-III, with outcomes in three subscales (PIQ, VIQ, FSIQ), at a corrected age of >42 months (mean score of 100 with SD of 15 points for both tests); neurodevelopmental delay in the AMIPROM trial was assessed by the BSID-II, with outcomes in two subscales (MDI and PDI), at a corrected age of 2 years (mean score of 100 with SD of 15 points). (k) Mild neurodevelopmental delay in the Follow-up PPROMEXIL-III study was defined as a score of 70-85 (−1 SD) in any or both of the two subscales of the BSID-III, or in any or all of the three subscales of the WPPSI-III-NL; mild neurodevelopmental delay in the AMIPROM trial was defined as a score of 70-84 (−1 SD) in any or both of the two subscales of the BSID-II. (l) For two children, a number of subscales of the WPPSI-III-NL were not performed due to delayed performance or the child's limited understanding of tasks; a team consisting of a neuropsychologist and a neonatologist classified the missing subscales as −1 SD index scores based on the neurodevelopmental reports. (m) Severe neurodevelopmental delay in the PPROMEXIL-III was defined as a score of ≤70 (−2 SD) in any or both of the two subscales of the BSID-III, or in any or all of the subscales of the WPPSI-III-NL; severe neurodevelopmental delay in the AMIPROM trial was defined as a score of 50-69 (−2 SD) in any or both of the two subscales of the BSID-II. (n) One or both subscales (MDI and PDI) of the BSID-II were impossible in two children due to significantly delayed performance; in sensitivity analysis, a score of 50 was given, counting as severe delay. (o) Defined as no long-term neurodevelopmental delay or respiratory problems; worst case scenario defined as "all children lost to follow-up are unhealthy." (p) Defined as no long-term neurodevelopmental delay or respiratory problems; best case scenario defined as "all children lost to follow-up are healthy."

GRADE Assessment

GRADE decisions for the prespecified primary and secondary outcomes (perinatal mortality, short-term healthy survivor, and long-term healthy survivor) are shown in the Summary of Findings table (online suppl. material, Table 6). Forest plots for all GRADE outcomes are given in Figure 3 and online supplementary material Figure 5. According to the GRADE methodology, these outcomes were graded as moderate quality (defined as "further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate").
Main Findings
This systematic review and meta-analysis showed no difference in perinatal mortality in women treated with amnioinfusion as compared to expectant management for midtrimester PROM. No differences between the two treatment arms were detected for any other core neonatal or maternal outcome. This review also investigated the long-term effects of amnioinfusion or no intervention on neurodevelopmental outcome and respiratory function in survivors aged 18 months to 5 years; again, no differences were shown between treatment groups for long-term outcomes. However, looking at the overall pooled estimates for all perinatal, neonatal, and maternal outcomes, the measures of effect seem to be slightly in favor of amnioinfusion. Further research is necessary to investigate the effectiveness of this treatment.
Quality of Evidence
The quality of evidence was graded using the GRADE tool and was classified as moderate due to the limited number of RCTs and small sample sizes. Furthermore, included individual trials had no noticeable risk of bias, apart from lack of blinding which was due to the nature of the intervention.
Strength and Limitations
One of the strengths of this systematic review is that it provides a comprehensive overview of available RCTs with a strict definition of very early (midtrimester) PROM (between 16 and 24 weeks of gestation). Furthermore, the two well-conducted RCTs in this meta-analysis showed only minor protocol differences and comparable inclusion and exclusion criteria, making it possible to aggregate data with low levels of heterogeneity. Even though solely including RCTs is one of the main strengths, it might also be a limitation: when only RCTs are included to evaluate the effectiveness of an intervention, an accurate estimate for the population included in the trials will be reported, but this will not always yield relevant information about the effects in a particular target population [13]. A second limitation of this meta-analysis is the small number of included RCTs and, consequently, the small total sample size. As previously published by our study group, further research and a much larger sample size ("a sample of 1,352 women per arm is needed to detect a decrease of 5% in perinatal mortality [from 71% to 66%], with an alpha of 0.05 and a power of 80%") are required to effectively assess the effect of amnioinfusion in this population [7].
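As a rough, independent check of the quoted power calculation, a standard two-proportion sample-size formula (pooled-variance normal approximation; this sketch is ours, not the authors' calculation) gives approximately 1,350 women per arm for p1 = 0.71, p2 = 0.66, α = 0.05 and 80% power, consistent with the reported 1,352 up to rounding and the exact formula variant used.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm for comparing two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.71, 0.66))  # -> 1354 per arm
```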
Comparison with Existing Literature
Previously, one other review included RCTs evaluating amnioinfusion versus no intervention in midtrimester PROM. This Cochrane review, performed in 2013, included two trials that were ongoing at the time, the AMIPROM trial and an RCT by Locatelli et al. [5,6,11,14]. The study by Locatelli et al. [11] was never executed (online suppl. Table 2). In 2012, Porat et al. [15] performed a systematic review and meta-analysis including both comparative observational cohort studies and RCTs in which serial transabdominal amnioinfusion was compared with conventional treatment (or no intervention). They concluded that amnioinfusion for early PROM reduced perinatal mortality in observational studies and showed a trend toward reduced mortality in RCTs. However, they included studies with PROM up to 33+6 weeks of gestational age. None of the included studies restricted inclusion to women with PROM up to a maximum of 24 weeks of gestational age, and three studies included women with PROM between 25 and 34 weeks of gestational age [16][17][18]. Results were not stratified for these differences in gestational age. When including pregnancies with PROM >24 weeks of gestation, it is thought that these pregnancies would show better outcomes, as the critical time for lung development (the pseudoglandular or canalicular period) is mostly between 16+0 and 24+0 weeks of gestation. Lack of adequate volumes of amniotic fluid in that time period will lead to sustained breathing movements, interruption of lung development and thus underdevelopment of the alveolar system, pulmonary hypoplasia and its (often fatal) consequences.
Comparability and Differences between Included Studies
Our review reflects the results of the two individual trials included in this review (i.e., the PPROMEXIL-III and AMIPROM trials), as both trials concluded that no reduction was found in perinatal mortality or other secondary outcomes. However, the included trials have some minor protocol differences. First of all, levels of amniotic fluid prior to randomization differed. The PPROMEXIL-III only included women with oligohydramnios, while the AMIPROM specified no maximum amniotic fluid level (i.e., no requirement of a pool depth <2 cm) for inclusion. Therefore, the AMIPROM potentially included more women with a favorable outcome, since a higher level of amniotic fluid is correlated with better perinatal outcomes [19,20]. Furthermore, both studies required a deepest pool level of <2 cm for the procedure of amnioinfusion; thus, in the AMIPROM study, four women randomized to this intervention never received treatment because they maintained a deepest pool of amniotic fluid >2 cm throughout the duration of their participation. In the PPROMEXIL-III trial, five women in the amnioinfusion group did not receive the treatment they were allocated to due to other reasons: onset of labor, detection of a lethal anomaly after the 20-week anomaly scan, maternal sepsis, or technical problems during the procedure. When comparing the primary outcome in a per-protocol analysis in this review, comparable perinatal mortality rates for both treatment arms were seen. In addition, the PPROMEXIL-III included women with an ongoing pregnancy 3 days after midtrimester PROM, while the AMIPROM included women 10 days after PROM. It has been observed that most patients with an (active) infection after PROM will deliver within 72 h, and at least half of the patients with midtrimester PROM deliver an immature neonate within the first 7 days after rupture of membranes [21]. Inclusion in the AMIPROM trial after this time period of 7 days could lead to a better a priori prognosis of these women. Another difference is the diagnosis of pulmonary hypoplasia between studies. Pulmonary hypoplasia is a crucial outcome measure for evaluating the effect of amnioinfusion. In the AMIPROM trial, lethal pulmonary hypoplasia was based on autopsy data. This differed from the PPROMEXIL-III trial, in which pulmonary hypoplasia was diagnosed based on respiratory symptoms (pneumothorax, PPHN) associated with pulmonary hypoplasia, therefore also including neonates that survived. The clinical definition of pulmonary hypoplasia is inconsistent [22]; a uniform diagnosis is needed.
The intervention of amnioinfusion is an invasive procedure. The two RCTs in this review showed some fetal complications: one fetal demise occurred 30 min after amnioinfusion, and mild fetal trauma was reported in three cases. These three fetuses were punctured by the amnioinfusion needle, but no trauma with postnatal complications occurred. In both RCTs, PPROMEXIL-III and AMIPROM, the procedure did not seem to be associated with any severe maternal complications. Still, the presence of oligohydramnios makes the procedure of amnioinfusion more technically demanding than routine diagnostic amniocentesis, and it should therefore only be performed by experienced fetal medicine specialists.
Conclusion
At present, the benefits of amnioinfusion for midtrimester PROM are unclear, and the potential harms remain unknown. No differences in perinatal mortality rates were shown in women treated with amnioinfusion as compared to no intervention for midtrimester PROM. However, it must be noted that patient numbers in this meta-analysis are small; therefore, results and conclusions should be interpreted with care. The results of this review justify the need for additional research, and especially for adequately powered RCTs able to demonstrate a smaller but clinically relevant effect, before this therapy can be considered for routine clinical use. Performing an international, multicenter study may be the only way to achieve this large sample size.
Statement of Ethics
An ethics statement is not applicable because this study is based exclusively on published literature.
"year": 2022,
"sha1": "7fe25eb6b626e9f19eecd36669251f69b96f6ec1",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/526020",
"oa_status": "HYBRID",
"pdf_src": "Karger",
"pdf_hash": "79b270b0e2c377f33fd20d7d808e3293f366736a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Transplant Trial Watch
Citation: Knight SR (2022) Transplant Trial Watch. Transpl Int 35:10307. doi: 10.3389/ti.2022.10307

To keep the transplantation community informed about recently published level 1 evidence in organ transplantation, ESOT and the Centre for Evidence in Transplantation have developed the Transplant Trial Watch. The Transplant Trial Watch is a monthly overview of 10 new randomised controlled trials (RCTs) and systematic reviews. This page of Transplant International offers commentaries on methodological issues and clinical implications for two articles of particular interest from the CET Transplant Trial Watch monthly selection. For all high quality evidence in solid organ transplantation, visit the Transplant Library: www.transplantlibrary.com.
Participants

333 de novo liver transplant recipients.
Outcomes
The primary outcome was renal function. The secondary outcomes included death, graft loss, acute rejection (AR), treated AR or treated biopsy-proven acute rejection (tBPAR), assessed as composite or individual components at 12 months posttransplant.
CET Conclusion
The HEPHAISTOS superiority trial compared everolimus plus reduced-exposure tacrolimus versus everolimus with standard-exposure tacrolimus in de novo liver transplant recipients. The multicentre German study randomised recipients 7-21 days posttransplant using a validated system that automates random assignment. The power analysis indicated that 105 patients in each group were needed, which was adjusted to 165 patients per group to allow for dropouts. The study randomised 333 patients, and the primary full-analysis set, which included all randomised patients who received at least one dose of the study drug, found no statistically significant difference in eGFR at 12 months between groups. A statistically significant difference between groups in eGFR was found for the per-protocol and on-treatment analyses. The composite efficacy endpoint of graft loss, death or treated BPAR was similar between groups. Treatment-emergent (serious) adverse events were similar between groups, but there were more adverse events leading to study drug interruption or adjustment in the reduced-exposure tacrolimus group.
Data Analysis
Intention-to-treat analysis.
Allocation Concealment
Yes.
Funding Source
Industry funded.
Aims
The aim of this study was to investigate whether rituximab in addition to rabbit anti-thymocyte globulin induction was effective in reducing the development of de novo donor-specific human leukocyte antigen antibodies (DSA) and in improving outcomes in paediatric lung transplant recipients.
Interventions
Participants were randomly assigned to either the rituximab group or the placebo group.
Participants

27 paediatric lung transplant patients.
Outcomes
The primary outcome was a composite of chronic allograft dysfunction, listing for re-transplant or death. The secondary outcomes were the incidence of primary graft dysfunction, antibody-mediated rejection and acute cellular rejection.
CET Conclusions
This is a good quality randomised controlled trial in paediatric lung transplantation. The study was double-blinded and conducted in multiple centres. Patients were randomised to either standard immune induction with ATG (plus placebo) or ATG plus rituximab. The primary outcome was composite graft dysfunction, death or re-listing. Unfortunately, only 11 subjects met criteria for the composite primary outcome, so the study was underpowered to demonstrate all but the most drastic of differences between the study arms. Whilst there was no significant difference in the primary outcome, there was significantly lower generation of de novo DSA in the rituximab arm (21% vs. 73%). There was no significant difference in adverse event rates. A much larger study, with longer follow-up, is required.
Data Analysis
Intention-to-treat analysis.
Allocation Concealment
Yes.
Funding Source
Non-Industry funded.
CLINICAL IMPACT SUMMARY
Most current induction immunosuppression strategies focus on T-cell inactivation or depletion. B-cell activation and donor-specific antibody production also play an important role in allograft damage, which has led to interest in the use of B-cell-depleting therapies such as rituximab as induction agents following solid organ transplantation. In a recent publication in the American Journal of Transplantation, Sweet et al. report a multicentre randomised controlled trial using rituximab as induction therapy in paediatric lung transplant recipients (1). The study is well designed, with double blinding and allocation concealment ensured by use of placebo and centralised web-based randomisation. Unfortunately, the study failed to recruit the required target sample within the funding time-frame, resulting in a loss of power and shorter follow-up than initially planned. Perhaps as a result, no difference in the primary clinical endpoint [a composite of death, bronchiolitis obliterans syndrome (BOS) and relisting] was seen. However, there was a significantly lower incidence of de novo donor-specific antibodies (DSA) in the rituximab-treated group, leading the authors to cautiously claim some evidence of benefit.
Whilst it is difficult to draw firm conclusions from an underpowered study, the suggestion of benefit seen in this study is at odds with previous studies in renal and cardiac transplantation. A systematic review of studies in renal transplantation from our own group in 2014 found no clear evidence of benefit from rituximab induction across a small number of studies (2). The authors of the current study postulate that this may be due to a lack of T-cell-depleting induction in these earlier studies. Rituximab also depletes regulatory B-cells, and this loss of regulation in the presence of donor-reactive T-cells may increase the risk of T-cell-mediated rejection. Combining B- and T-cell depletion is proposed to overcome this.
One specific area of concern, perhaps not apparent in the current paediatric study, is the impact of rituximab therapy on the risk of cardiovascular disease. Previous studies in both renal transplantation and cardiac transplantation have suggested increased risk of cardiovascular mortality and graft vessel disease, possibly related to the role of B-regulatory cells in atheroprotection (3,4). Any future studies, especially in adult populations, would need to collect these outcomes and ensure long-enough follow-up to adequately assess the impact on cardiac disease.
Overall, the study does provide some interesting data suggestive of a potential role of B-cell depletion in conjunction with T-cell depleting induction in the reduction of DSA formation and subsequent chronic allograft damage. Further, well-powered studies in adult populations will need to focus on the long-term safety of such a strategy.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
"year": 2022,
"sha1": "1e4c48f5663e7a19ec5defc08a40b8b64e1c1e87",
"oa_license": "CCBY",
"oa_url": "https://www.frontierspartnerships.org/articles/10.3389/ti.2022.10307/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "1e4c48f5663e7a19ec5defc08a40b8b64e1c1e87",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CRISPR–Cas9-targeted fragmentation and selective sequencing enable massively parallel microsatellite analysis
Microsatellites are multi-allelic loci composed of short tandem repeats (STRs), with individual motifs ranging from mononucleotides and dinucleotides up to hexamers. Next-generation sequencing approaches and other STR assays rely on a limited number of PCR amplicons, typically in the tens. Here, we demonstrate STR-Seq, a next-generation sequencing technology that analyses over 2,000 STRs in parallel and provides accurate genotyping of microsatellites. STR-Seq employs in vitro CRISPR–Cas9-targeted fragmentation to produce specific DNA molecules covering the complete microsatellite sequence. Amplification-free library preparation provides single-molecule sequences without unique molecular barcodes. STR-selective primers enable massively parallel, targeted sequencing of large STR sets. Overall, STR-Seq has higher throughput, improved accuracy and provides a greater number of informative haplotypes compared with other microsatellite analysis approaches. With these new features, STR-Seq can identify a 0.1% minor genome fraction in a DNA mixture composed of different, unrelated samples.
Microsatellites, otherwise called short tandem repeats (STRs), have multiple alleles that are defined by variation in the number of motif unit repeats. Given their multi-allelic characteristics, they have greater heterozygosity than single nucleotide polymorphisms (SNPs) 1. STR polymorphisms are the result of motif insertions or deletions (indels), arising from slippage errors during DNA replication 2 or recombination events 3. The diversity of microsatellite alleles is attributable to STR mutation rates (10−2 events per generation) that are significantly higher than the mutation rate for SNPs 4,5, which is reported to be 10−8 events per generation 6,7. Due to their multi-allelic characteristics, STR genotyping has proven useful for the genetic characterization of individuals, subpopulations and populations 8. Moreover, genotyping with ~20 STRs can identify an individual with high confidence 9, enabling its universal application for genetic identification in forensics.
When STRs reside in coding regions, the genetic variation in these sequences has a significant functional impact [10][11][12]. Studies using model organisms suggested that STR variations lead to a diverse range of phenotypes. For example, in Saccharomyces cerevisiae, there is evidence pointing to the enrichment of intragenic STRs in genes encoding cell wall proteins; phenotypes such as adhesion and biofilm formation were shown to have a strong correlation with the STR variations 3. Repeat variation in circadian clock genes of Arabidopsis thaliana and Drosophila can create altered phenotypes such as variable periods 13,14. Variation in STRs has human disease implications. Many monogenic diseases are linked to specific STR expansions, particularly among some neurological disorders such as Huntington's chorea, fragile X syndrome, spinocerebellar ataxias and amyotrophic lateral sclerosis 15.
Despite their importance in genetics and biology, the analysis of STRs is challenging regardless of the method used. The repetitive motifs of STRs are prone to accumulating errors during any polymerase amplification process 16. This phenomenon is most pronounced for motifs that are smaller than four bases. Therefore, tetranucleotide repeats are preferred for applications where accurate genotyping is required 17. For example, the 13 STRs used for the Combined DNA Index System (CODIS), an important set of microsatellites used in forensic genetics, are all tetranucleotide repeats. However, the analysis of mono-, di- and trinucleotide repeats is of significant utility in a broad number of applications. For example, STRs composed of mononucleotide repeats have among the highest mutation rates, as observed in embryonic development 18 and tumour progression 19. Thus, a process to accurately genotype STRs with smaller motifs would be highly useful for many research applications.
STR genotyping relies on multiplexed PCR amplification of microsatellite loci followed by analysis based on size discrimination with capillary electrophoresis (CE) 20 . For example, forensic genetics employs the CE-based method for nearly all DNA identification cases. However, this approach has many limitations. First, CE genotyping assays are restricted to 30 STR amplicons or less because of the inherent challenges of multiplexing PCR reactions 20 . Second, CE has low analytical throughput, typically in the tens of markers. Third, as already described, PCR amplification of microsatellites introduces artifactual indels, also known as 'stutter', that can obscure true genotypes, particularly when alleles are close in size 16 . Finally, current STR genotyping methods have difficulty resolving alleles in DNA mixtures that are composed of multiple individual genomes 21 . In forensic genetic analysis, it is nearly impossible to distinguish a specific individual DNA sample amongst multiple contributors, particularly when a specific component exists at a low ratio.
Next-generation sequencing (NGS) assays have been developed for the analysis of STRs. These include whole-genome sequencing (WGS) 17,[22][23][24], targeted sequencing using bait-hybridization capture oligonucleotides 25,26 and multiplexed amplicon sequencing methods [27][28][29][30][31][32] that include molecular inversion probes (MIPs). Regardless of the approach, current NGS methods for STR analysis have significant limitations. STRs' repetitive motifs complicate traditional alignment methods and lead to mapping errors 22,23. Sequence reads that span an entire STR locus are the most informative for accurate genotyping. However, many NGS approaches produce reads that truncate the STR sequence, resulting in ambiguous genotypes. Although one can generate very long reads from newer single-molecule sequencers (for example, Pacific Biosciences and Oxford Nanopore systems), these technologies have very high error rates and limits on the number of STR loci that can be analysed 33.
STR genotypes can be determined from WGS data derived from Illumina sequencers [22][23][24]. However, the read coverage of an intact STR locus varies greatly at standard WGS coverage (for example, 30× to 60×), which reduces the number of reads with intact microsatellites. Lower coverage translates into decreased sensitivity and specificity for detecting microsatellite genotypes. Consequently, accurate STR genotyping requires much higher sequencing coverage than is practical with WGS, particularly in cases of genetic mixtures composed of different genomic DNA samples in varying ratios.
Targeted sequencing can improve STR coverage, but current methods have limitations. For example, enrichment of microsatellite targets with bait-hybridization requires randomly fragmented genomic DNA; random fragmentation reduces the overall fraction of informative reads containing a complete microsatellite to <6% (ref. 26). Furthermore, enrichment for STR loci is complicated by repetitive sequences with potential off-target hybridization 25. Sequencing library amplification or PCR-dependent multiplexed amplicons lead to a significant increase in stutter errors 31.
Addressing all of these limitations, we present STR-Seq, a massively parallel sequencing approach that generates microsatellite-spanning sequence reads with high coverage and accurate genotypes. STR-Seq uses a targeted DNA fragmentation process with CRISPR-Cas9 to increase the number of sequenced molecules with an intact STR. We use amplification-free library method to reduce amplification artifacts. Finally, a novel bioinformatics pipeline is used for quantifying STR motifs and associated SNPs in phase with the STR, thus generating haplotypes. We demonstrate that STR-Seq is highly accurate using a ground truth set of previously genotyped samples, has high efficiency in assay design and genotyping when compared to other methods such as CE, provides phased STR-SNP haplotypes and can resolve individual-specific haplotypes at minor allelic fractions of 0.1% in genetic mixtures.
Results
Overview of STR-Seq. Sequencing libraries for STR-Seq assays are generated from either random or targeted DNA fragmentation. In the latter case, we designed and synthesized CRISPR-Cas9 guide RNAs (gRNAs) to selectively cut genomic DNA sites flanking target STR loci (Fig. 1a). Afterwards, we generate a single-adapter library. STR-Seq uses 40-mer sequences, called primer probes, that mediate STR targeting and are directly incorporated into the Illumina flow cell 34,35. As the next step, the sequencing library is introduced into the modified flow cell. The primer probes anneal to target DNA fragments for a given STR locus (Supplementary Fig. 1), and primer extension incorporates the microsatellite sequence. Sequencing produces paired-end reads, referred to as Reads 1 and 2.
STR-Seq utilizes an indexing process with the paired sequences where Read 2 includes the targeting primer sequence (that is, STR index) and Read 1 spans an entire STR region. To genotype STRs while avoiding alignment artifacts such as soft clips that arbitrarily truncate the microsatellite sequence, we used the synthetic primer probe sequence in Read 2 to generate a STR index tag ('Methods' section; Fig. 1b). Using this process, STR-indexed read counts per sample ranged from 0.6 to 58 million reads depending on the experiment and degree of sample multiplexing (Supplementary Table 1).
Microsatellite genotypes are quantitative and reported as the number of motif repeats for each allele. After assigning a STR index tag to each paired-end read, the Read 1 sequence was evaluated for the presence of the expected STR ('Methods' section; Fig. 1b). STR allele sizes were calculated by dividing the microsatellite length by the number of bases in the individual motif. Subsequently, we applied a statistical model threshold to identify valid genotypes ('Methods' section). For STR-SNP haplotypes, we used FreeBayes 36 for SNP calling on the remaining Read 2 sequence not containing the primer probe. Because every Read 2 starts with a targeting primer sequence, coverage for SNP regions is high and ensures accurate genotypes. Haplotypes were generated by combining the STR genotype originating from Read 1, with the SNPs from the Read 2 sequences (Fig. 1c).
Designing and generating STR-Seq assays. The locations of over 740,000 tandem repeats were obtained from the UCSC Genome Browser ('Methods' section). We identified known STRs with documented polymorphisms and candidate STRs not previously reported to be polymorphic. We limited our selection of STRs to those that could be covered in their entirety within a 150 bp read produced by an Illumina HiSeq sequencer. To increase the number of potential STR-SNP haplotypes, we identified tandem repeats that were within 100 bp of a SNP with a high genotype frequency among different populations ('Methods' section). Our analysis identified a total of 10,090 tandem repeat loci that fulfilled our targeting criteria and were in proximity to a SNP position. Afterwards, candidate primers were identified based on their uniqueness in the human genome reference, requiring at least two edited bases to align in any other location 34 . Targeting primers were positioned on opposing strands (Supplementary Fig. 2); this opposing-strand coverage was particularly useful because a true STR variant should be the same for both the forward and reverse strand reads 27,37 .

Figure 1. Single primer targeting. (a) Guide RNAs and primer probes were designed to target STRs and proximal SNPs. We target both plus and minus strands; only the plus-strand targeting is illustrated. In the first step, the Cas9 enzyme cleaves upstream of the STR. The DNA libraries including the STR and SNP are target sequenced. (b) After initial alignment of Read 2 from any given paired-end set, we use the primer probe sequence derived from Read 2 as an index tag to link the Read 1 microsatellite internal motif and flanking sequences. If the primer probe sequence aligns within 2 bp of the expected primer probe start position, the paired Read 1 is assigned to its specific STR index tag. Based on the human genome reference, we identified the flanking genomic sequences that mark the complete STR segment and then determined the composition (that is, mononucleotide, dinucleotide and so on) and overall length of the repeat motif structure. Read 1 sequences that contained both the 5′ and 3′ flanking sequences with the internal microsatellite were used for genotyping. STR genotypes are called from Read 1. SNPs are phased with the STR genotype to generate haplotypes. (c) As an example of STR-Seq haplotyping, paired-end alignments to the reference genome are shown for a STR target (trf747130) for sample NA12878. After the STR genotyping process, 114 and 133 read pairs were identified to have 11 and 8 repeats of a tetranucleotide motif (ATGA) in their Read 1s, respectively. Within each read pair group, all the base calls at the SNP position were identical, being either C (reference) or G (alternative). The site targeted by CRISPR-Cas9 is indicated with a red arrow, and the two haplotypes are illustrated at the bottom.
We developed two STR-Seq assays ('Methods' section; Supplementary Table 2). Assay 1 was designed to sequence 700 STRs that included 470 microsatellites with CE genotypes from a set of well characterized DNA samples 38 . These samples and their CE-based genotypes provided a ground truth data set to assess the accuracy of STR-Seq's genotyping. Assay 2 targeted 2,370 loci, of which 964 STRs fulfilled the criteria for microsatellites per Willems et al. 17 ('Methods' section), while the remaining 1,406 were candidate STRs or homopolymers. Each assay had a number of control non-microsatellite targets. A subset of primer probes targeted 2,191 STRs with reported SNP positions within 100 bp of the probe. Given that thousands of primer probes were required, array-synthesized oligonucleotides were used for the preparation of Assay 2 ('Methods' section; Supplementary Fig. 3). When preparing 5,000 primer probes, array synthesis requires less than a tenth of the cost of column-based synthesis.
Validating STR-Seq genotypes. To validate STR-Seq's genotyping accuracy, we used Assay 1 to sequence nine genomic DNA samples with 470 CE-based genotypes 38 . These samples also had STR genotypes derived from WGS with the programme lobSTR 17 . To compare genotypes among the different methods, we used a dosage value that is derived from the number of base pairs remaining after subtracting the reference allele 17 . For example, a STR locus with a reference size of 18 bp and heterozygous STR alleles of 16 bp and 24 bp would have a STR dosage of −2 + 6 = 4. Given that CE genotyping measures differences in amplicon size versus NGS-based genotyping that counts the number of motifs directly from a sequence read, the dosage value provides a standardized method for comparing the two 17 .
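The dosage arithmetic can be made concrete with a short Python sketch; the function name and input layout below are ours for illustration and are not part of the published STR-Seq code.

```python
def str_dosage(allele_sizes_bp, reference_bp):
    """Sum of base-pair differences of each called allele relative to
    the reference allele length, following the dosage definition above."""
    return sum(size - reference_bp for size in allele_sizes_bp)

# Worked example from the text: reference 18 bp, heterozygous 16 bp and
# 24 bp alleles -> (-2) + (+6) = 4
assert str_dosage([16, 24], 18) == 4
```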
Among the nine samples, STR-Seq analysis produced 439-464 STR calls (P < 2.2e−16 by linear regression t-test). These discordant STR genotypes arose from microsatellites that exceeded the sequence read length or originated from STRs with indels in the flanking sequences.
We compared the genotype concordance among the subset of STRs called by all three methods (CE, STR-Seq and WGS-lobSTR). This ranged from 266 to 293 STRs per sample. The lower number of STRs was a result of the WGS method identifying only a fraction of the CE genotypes (up to 464 STRs), thus representing a category of WGS false negatives. On this overlapping subset, STR-Seq genotypes were 97.83% concordant with CE while WGS-lobSTR genotypes were 94.00% concordant with CE (Table 1). STR-Seq genotypes were equally accurate whether they were heterozygous or homozygous. STR-Seq and CE genotypes showed a higher concordance for heterozygotes whose alleles had a greater difference in repeat number. WGS-lobSTR genotypes had a lower CE concordance for homozygous alleles compared to STR-Seq.
As another method for determining genotype accuracy, we analysed samples from a family trio (NA12878-female child, NA12891-father and NA12892-mother) 39 . Specifically, we determined whether the paternal and maternal alleles were identified in the child per parental inheritance. We identified 679 STRs from Assay 1 and 1,617 STRs from Assay 2 where genotypes were available from all three family members. When evaluating the child's STRs with Assay 1, 98.50% of the genotypes were concordant with paternal and maternal inheritance (Supplementary Table 3). With Assay 2, the child's genotypes demonstrated 96.29% concordance in terms of paternal and maternal inheritance.
With this family trio, we verified the accuracy of SNPs called from STR-Seq. With Assay 1, we identified a total of 143 SNPs present among all three family members (Supplementary Table 3). From these SNPs, 97.90% of the child SNP genotypes were concordant with parental inheritance. In addition, 139 of the SNPs matched those genotypes previously reported from WGS analysis of this trio ('Methods' section). For the remaining SNPs not reported from WGS, four showed Mendelian inheritance from the parents, and two were reported in dbSNP. It is likely that these non-reported SNPs were false negatives from the original WGS analysis.
Assay 2 generated 2,430 SNPs, of which 95.80% of the child SNP genotypes were concordant with parental inheritance. From this set, 1,994 SNPs were previously reported per WGS analysis. Among the remaining 436 SNPs that were not reported, 382 demonstrated specific maternal and paternal inheritance to the child and 387 were reported in dbSNP. Many of these SNPs represent potential false negatives from the original WGS analysis. SNP concordance for both Assays 1 and 2 was lower than STR concordance as a result of the following factors: (i) STR genotyping has additional quality filtering that eliminates artifacts; for example, our analysis only uses sequence reads with the correct flanking sequences; and (ii) unlike SNP genotypes, STR genotypes are generally supported by reads sequenced from both the forward and reverse strands, whereas SNP genotyping is typically limited to only one strand.
To determine the accuracy of STR-SNP haplotypes, we used our results from the family trio sequencing and determined haplotypes by phasing those SNPs with STR genotypes. For Assay 1, we identified 128 informative haplotypes among all three family members. For the child's STR-SNP haplotypes, 97.66% were concordant with parental inheritance. For Assay 2, we identified 1,324 haplotypes in the family trio. For the child's STR-SNP haplotypes, 93.88% demonstrated parental inheritance. The majority of the STR-SNP haplotypes not concordant with paternal or maternal segregation originated from STRs located in highly repetitive segments of the genome. These highly repetitive regions are difficult to target, and this factor likely caused the discordant genotypes as a result of off-target sequencing.
Amplification-free STR-Seq reduces sequence artifacts. To reduce PCR artifacts in microsatellites, we developed a PCR-free method for library preparation. NA12878 was sequenced with Assay 1 using either PCR-amplified or PCR-free sequencing libraries, and genotyping results were compared among 686 STRs (Supplementary Table 4). As an example of the effects of amplification-free library preparation, we examined the microsatellite BAT26, which is composed of 26 mononucleotide (A) repeats (Supplementary Fig. 4). From the PCR-amplified libraries, STR-Seq analysis generated BAT26 motif repeats ranging from 19 to 30; all of these variations were attributable to stutter artifacts (Fig. 2b). With the PCR-free method, the true BAT26 allelotype was apparent without significant stutter.
Comparing the data from the amplification-free versus PCR-amplified libraries, we examined the STR-containing reads with complete microsatellite sequences. For all of the targeted STRs, the median fraction of stutter decreased significantly, from 3.2 to 0.9% (Fig. 2c). For example, the amplification-free STR-Seq analysis identified homozygous alleles for six STRs that were called as heterozygotes using PCR-amplified libraries (Supplementary Table 5). In these cases, stutter led to false heterozygous allele calls.
When comparing across all the sequenced samples (Supplementary Table 4), a significant decrease in stutter was also observed between PCR and PCR-free libraries (from 2.7 to 2.1%, P = 6.7e−08 by Wilcoxon rank sum test). Some of the variation is related to the different assays that were designed for this study. In particular, Assay 2 includes a higher proportion of STRs with mononucleotide and dinucleotide repeats; these short motifs are significantly more prone to stutter artifacts compared to larger STR motifs. Accounting for these differences in the types of STRs included, Assay 2 has a baseline stutter error rate comparable to Assay 1. In addition, a degree of stutter is likely to be a result of polymerase errors during primer extension and the cluster generation steps.
Targeted fragmentation improves complete STR read coverage. As a solution for truncated microsatellite sequences resulting from random DNA fragmentation, we developed an in vitro CRISPR-Cas9-targeted fragmentation process. As an initial step before library preparation, the gRNAs bind to the complementary DNA target site and in combination with Cas9, produce a blunt-ended, double-strand break ( Supplementary Fig. 5).
We designed a set of gRNAs to fragment DNA either upstream or downstream of the STRs targeted by Assays 1 and 2 (Supplementary Data 1). Three criteria were used to select the gRNA target sequences (Supplementary Fig. 6): (i) the fragmentation site included the entire repeat within a 100-base read length; (ii) the binding region sequence was uniquely represented in the human genome and (iii) the gRNA sequence did not overlap more than 6 bp with the STR repeat. Overall, we identified 8,343 gRNAs targeting 2,103 repeat regions. The gRNA reagents were generated with array-synthesized oligonucleotides incorporating a T7 promoter ('Methods' section). The oligonucleotides were amplified and gRNA was produced in vitro. Genomic DNA was treated with the CRISPR-Cas9 enzyme and the synthesized gRNAs.
After targeted fragmentation, NA12878 was analysed with Assay 1. After sequencing, the exact position of the fragment's cleavage site was determined from Read 1 (Fig. 3a). Sequence reads in which the flanking sequence was within 4 bases of the expected gRNA fragmentation position were classified as on-target and counted. Overall, 56% of the reads showed the specific CRISPR fragment position, compared with 8.7% for random fragmentation (Fig. 3b). Compared with random fragmentation, the CRISPR-Cas9 procedure showed a significant increase, from 5.3 to 17.1%, in the median fraction of STR-spanning reads for the gRNA-targeted STRs (Supplementary Fig. 7a,b). Furthermore, across all the sequenced samples used in this study, we observed a two-fold increase, from 6.5 to 15.1%, in the median STR-spanning read fraction (Supplementary Table 1; P = 1.7e−13 by Wilcoxon rank sum test). For the comparison among all of the sequenced samples, all the STR targets were included regardless of gRNA targeting, which is why a smaller increase was observed than in the NA12878 pairs. From our analysis with Assay 1, 642 STR genotypes were identified with CRISPR targeted fragmentation compared with 625 STR genotypes with random fragmentation (Supplementary Table 4). We examined the allelic fraction of each STR genotype, as measured by counting reads with one genotype versus the other (Fig. 3c). Assuming the sequencing assay perfectly reflects the variants in a diploid sample, for a heterozygous STR allele we would observe 50% of the reads, a direct reflection of the allele fraction, having one allele and the remaining 50% having the other. Without CRISPR targeting, we observed a wide distribution of allele fractions (s.d. = 0.13) across the heterozygous STRs. With CRISPR targeting, the distribution of allelic fractions (s.d. = 0.08) was significantly narrower. There was no significant change for those STRs not targeted by gRNAs. This result confirms that CRISPR targeting improves the quantitative assessment of allelic fraction with better precision. This quantitative accuracy benefits the analysis of DNA mixtures, as we describe later.
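As a rough sketch of the on-target classification described above (assuming the expected cleavage positions are known per chromosome; all names are hypothetical):

```python
def is_on_target(chrom, fragment_start, expected_cuts, tolerance=4):
    """Classify a read as a CRISPR-Cas9 on-target fragment if its
    flanking end falls within `tolerance` bases of any expected gRNA
    cleavage position on the same chromosome."""
    return any(abs(fragment_start - cut) <= tolerance
               for cut in expected_cuts.get(chrom, ()))

# e.g. a fragment starting at 1,204,563 matches an expected cut at
# 1,204,560 (distance 3 <= 4) and is counted as on-target
```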
Haplotypes distinguish the minor components in DNA mixtures. Identifying a specific individual DNA sample in a mixture composed of many individuals is one of the most pressing issues in forensic genetics and a significant challenge when a specific component DNA is represented at a low fraction. We evaluated STR-Seq's sensitivity in detecting a specific genomic DNA sample among a series of DNA mixtures (Table 2) generated by combining samples in varying ratios. We used two unrelated DNA samples (HGDP00924 and HGDP00925) where HGDP00924 represented the minor component of the mixture. DNA from HGDP00924 was added in decreasing ratios from 25 to 0.1%. First, we determined haplotypes for the two samples individually. With Assay 1, STR-Seq was used to analyse HGDP00924 alone and haplotypes were compared with HGDP00925. We identified 29 unique haplotypes present in HGDP00924 and not present in HGDP00925. We evaluated these 29 haplotypes and determined whether read counting provided an accurate quantitative measurement of the minor component's contribution to the mixture. Overall, the HGDP00924 fraction as observed by the sequence reads showed a strong correlation with the known mixture ratio (Fig. 4a; R² = 0.61, P < 2.2e−16 by linear regression t-test). Even with the minor component ratio of 0.1%, 11 of the HGDP00924 haplotypes were detected (Table 2).
For the next experiment, we generated a six-component mixture. Five DNA samples from unrelated individuals were combined in equimolar ratio and then a minor component DNA (HGDP00924) was added in decreasing ratios (Supplementary Fig. 8a). Five of the HGDP00924-informative haplotypes were still detectable even at a ratio of 0.1% (Table 2). For additional validation, we generated a different two-component mixture (NA12892 and NA12891). Mixture ratios ranged from a 40 to 1% fraction with NA12892 being the minor component. This STR-Seq analysis was conducted with both CRISPR targeted fragmentation and PCR-free library preparation. Using Assay 2, we analysed the two sample DNAs separately, and identified 122 haplotypes unique to NA12892. These haplotypes demonstrated an allelic fraction that was highly correlated with the minor component ratio (Supplementary Fig. 8b; R² = 0.66, P < 2.2e−16 by linear regression t-test). We observed that the goodness-of-fit value (R²) improved with CRISPR targeted fragmentation.
For the 1% fraction, STR-Seq called 12 haplotypes specific to the NA12892 minor component. Four informative loci had coverage >150, and the allele fraction of these haplotype-specific reads matched the mixture ratio (that is, ~0.5% or 1% for each haplotype per locus, depending on zygosity). The remaining eight haplotypes had lower coverage with less precision in their allelic fraction, at 1.5% or greater (Supplementary Table 6). Higher coverage sequencing will further improve the precision of this analysis.
Improving the targeting efficiency of STR-Seq. Depending on the hybridization conditions, a significant fraction of reads (50-80%; Supplementary Table 1) resulted from off-target priming, which enabled the extension of off-target fragments that did not contain a STR primer index sequence. To maximize the absolute yield of on-target, STR-indexed reads, we modified the stringency of primer hybridization just before the extension of the genomic target ('Methods' section). Using a higher stringency wash step (0.2× hybridization buffer), most of the off-target reads were eliminated. We demonstrated this improvement using 10 samples that were sequenced with the hybridization modification; 80% of total raw reads were indexed to the appropriate STR target (Supplementary Table 1). Regardless of wash stringency conditions, the absolute numbers of STR-indexed reads correlated strongly with the concentration of library loaded onto the sequencing flow cell (R² = 0.96, P = 3.6e−04 by linear regression t-test; Supplementary Fig. 9, Supplementary Table 7). This result explains why the lower stringency protocol results in variable on-target rates, and strongly suggests that the high stringency wash can selectively detach extendable off-target hybridizations.
We compared CRISPR-Cas9 versus random fragmentation using the same high stringency wash conditions as well as all other conditions. With this rigorous comparison, we observed a two-fold increase in the fraction of STR-spanning reads (Supplementary Table 1), which was consistent with what we observed with the lower stringency wash. Three samples (HGDP01341, HGDP00811 and HGDP01292) were used for a direct comparison between CRISPR targeting versus random fragmentation strategies. Because a very large effect size was expected based on the previous result with the lower stringency method, the minimum required number of samples was predicted to be <3. We used the same amount of input genomic DNA, and the difference in the total number of reads per sample was not significant (P = 0.32 by paired t-test). Compared with random fragmentation, the CRISPR-Cas9 procedure showed a significant increase, from 9.8 to 22.1%, in the median STR-spanning read fraction (P = 5.3e−04 by paired t-test). Thus, it is clear that the CRISPR-Cas9 process generated more informative target reads compared with random fragmentation.
We also observed significant improvements in genetic mixture analysis when comparing CRISPR-Cas9 versus random fragmentation under the high stringency wash. Using a mixture of two individuals (NA12878 and NA12877) with NA12878 being the minor component (1%), we performed a comparison between random and CRISPR-Cas9 fragmentation procedures (Supplementary Table 1). We analysed the two sample DNAs separately, and identified 249 haplotypes unique to NA12878. Among the informative haplotypes, the random and CRISPR-Cas9 procedures detected 45 and 58 haplotypes, respectively, of which 26 were shared between the two (Supplementary Fig. 10a). The most noticeable improvement was observed in quantitative accuracy and precision for allelic fraction (Supplementary Fig. 10b). The CRISPR-Cas9 procedure determined allelic fractions closer to 1% and the variance was significantly smaller (P = 3.2e−03 by Levene's test), consistent with the observations described earlier.
When compared with two other STR genotyping methods that rely on Illumina sequencing (Table 3), STR-Seq is the most efficient in generating STR genotypes, both with and without the CRISPR-Cas9 procedure. While MIPSTR has similar efficiency (0.9× that of STR-Seq with CRISPR-Cas9), the assay targets only 100 STRs. Considering the amount of input DNA sample required for both methods (750 ng for MIPSTR and 1 μg for STR-Seq), STR-Seq has a general yield per amount of DNA that is 25 times higher. Moreover, STR-Seq shows a higher success rate for STR genotyping (~80%) compared with the other methods. Notably, even without CRISPR-Cas9, STR-Seq has improved efficiency, suggesting a significant contribution of on-flow cell capture coupled with PCR-free library preparation. However, when considering our rigorous comparison experiment, the fraction of informative reads is doubled with CRISPR-Cas9 targeting, which further improves the accuracy and precision of genotyping as well as the efficiency.
Discussion
STR-Seq technology provides a solution for highly parallel analysis across thousands of microsatellites with a genotyping accuracy that is comparable to the traditional CE method. The scale of STR-Seq is 100 times higher than that of the traditional CE method. When compared with the other NGS methods, the efficiencies of assay design and of the sequencing itself are superior. The analysis of thousands of microsatellites in parallel is particularly useful for STR-SNP haplotype applications. STR-Seq accurately called informative STR-SNP haplotypes that increase the polymorphic context when examining genotypes. For example, an uninformative homozygous variant, once phased with an adjacent heterozygous variant, yields an informative haplotype. As we demonstrate, haplotype detection is a very powerful feature in the analysis of DNA mixtures and improves STR-Seq's sensitivity to identify a minor component DNA sample at a 0.1% ratio (Fig. 4b). STR-SNP haplotypes that are closely linked in a short interval are rare. In our analysis, only 10% of the microsatellites have informative haplotypes. Therefore, the analysis of more than 1,000 microsatellites enables: (i) discovery of multiple informative haplotypes and (ii) haplotype-based identification of a specific DNA sample that occurs as a low fraction of a multi-sample DNA mixture.
STR-Seq can be run as a PCR amplification-free assay that enables one to link each sequence read to a single DNA molecule without the use of unique molecular indices (UMIs). Other targeted sequencing methods require a post-capture PCR step that increases the frequency of amplification errors. To overcome this issue, some STR sequencing assays, such as those using MIP, have UMIs composed of random sequences 31 . There are examples where the amplification error is as frequently represented as the genotype among the target reads; a UMI-based approach may not be able to distinguish between these cases. Citing an example, in the study of Carlson et al. 31 , some target STR loci generated as many as six different genotypes, all of which were supported by at least one molecular index. In this case, only the reliability of the measurement, not the true genotype, was provided. As a result, such targets were excluded from analysis of somatic STR variation. In the case of the MIP approach, the genomic DNA insert size is limited to 200 bp, which restricts its application for identifying some categories of STR-SNP haplotypes.
A recent report has shown the usefulness of target-specific fragmentation with CRISPR-Cas9 in an NGS assay where removal of unwanted high-abundance species was desired (for example, mitochondrial ribosomal RNA in RNA sequencing) 40 . In this study, we showed that not only the depletion of non-target molecules but also the selection of the target itself enables the sequencing of DNA molecules containing intact microsatellites. More importantly, off-target fragmentation in STR-Seq is not as influential as in other applications of CRISPR-Cas9, because the downstream capture step selects only the fragmentation events occurring near the probe target region. Therefore, to improve performance, we saturated the cleavage activity by using a high concentration of enzyme-gRNA complex and an extended incubation time. Moreover, multiple gRNAs, if available, were designed per target. The depletion method, on the other hand, requires very careful gRNA design, by which off-target depletion should be minimized. Incorporation of the targeted fragmentation into sequencing library preparation improves STR-Seq's overall performance, and this targeted fragmentation process has potential for many applications beyond targeted sequencing. Thus, we demonstrate that there are critical advantages to maintaining an intact target DNA molecule, particularly for highly repetitive segments of the genome. By eliminating PCR amplification artifacts with CRISPR targeted fragmentation, allelic ambiguity is significantly reduced.
Overall, STR-Seq has a wide spectrum of applications for forensics and genetics. For future studies, we will continue making improvements to the performance and conduct large population studies.
Methods
Primer probe design for STRs. The locations of 962,714 tandem repeats were obtained from a file called 'simpleRepeat.txt.gz' at the UCSC Genome Browser (http://hgdownload.soe.ucsc.edu/goldenPath/hg19/database). As an additional quality control, we selected 950,265 repeats located on canonical chromosomes. We limited our candidate STR loci to short repeats (≤100 bp), to enable a single Illumina sequencing read to cover the entire STR. Based on this size criterion, we identified 743,796 STRs from the human genome reference (hg19).
We used additional design criteria to increase the probability of an informative SNP being located in close proximity to the STR locus. For this purpose, we used NCBI dbSNP Build 138, which was downloaded from the UCSC Genome Browser (http://hgdownload.soe.ucsc.edu/goldenPath/hg19/database). This data set comprised a total of 14,017,609 SNPs that were validated by at least one of the following: the 1000 Genomes Project, the HapMap Project or the submitter. Among these validated SNPs, 13,737,549 SNPs were located on canonical chromosomes.
Of the identified short repeats that totalled 743,796, we identified 512,612 that had at least one validated SNP within 100 bp. We designed probes for a total of 10,090 of these STRs. To determine the STRs with the highest probability of having an informative SNP allele, we selected SNPs that had high population allele frequencies across different populations; if the additive genotype frequency was >1.0, the SNP was included. These population-specific genotype frequencies were ascertained from dbSNP Build 138. Using this approach, we identified 2,191 STRs that were proximal to a reported SNP position.
Among the 2,191 STRs, 964 fulfilled the criteria described by Willems et al. 17 : repeat unit sizes of 2-5 bp, an 80% probability of matching, a 10% probability of an indel, and minimum alignment scores determined for each repeat unit size (22, 28, 28, 32 and 34 for unit sizes of 2, 3, 4, 5 and 6, respectively). All the information was determined by Tandem Repeat Finder 41 and downloaded from the UCSC Genome Browser.
Generating primer probe oligonucleotides. Primer probe pools were prepared either from column or array synthesis (Supplementary Table 2). Oligonucleotides for Assays 1 and 2 are described in Supplementary Data 2. For Assay 1, primer probes were column-synthesized at the Stanford Genome Technology Center (Palo Alto, CA) and combined to generate an equimolar pool where each oligonucleotide was at the same individual concentration. We designed 1,365 primer probes to analyse 491 STR loci that had been previously genotyped and pooled these with 424 primer-probes targeting other STR loci, as well as 466 primer probes for exons (Assay 1; Supplementary Table 2). Primer-probe oligonucleotides targeting exons were included as a subset to provide more sequence diversity and improve the base calling.
For Assay 2, we used array-synthesized oligonucleotides (CustomArray, Bothell, WA) that were amplified and then processed to generate single-stranded DNA for flow cell modification. Supplementary Fig. 3 shows the preparation of primer probe pools from array-synthesized oligonucleotides. We used three steps that included amplification using modified primers and two enzymatic reactions to obtain the single-stranded final product (Supplementary Fig. 3a). The modified primers were synthesized with polyacrylamide gel electrophoresis purification (Integrated DNA Technologies, Coralville, IA). The forward primer (5′-A*A*T*G*A*T*ACGGCGACGGATCAAGU-3′) had a uracil base at the 3′ end and six phosphorothioate bonds (indicated by *) at the 5′ end. The reverse primer (5′-/5Phos/CAAGCAGAAGACGGCATACGAGAT-3′) had a 5′ phosphate. Two nanograms of the original oligonucleotide pool were amplified in a 50-μl reaction mixture including 25 U AmpliTaq Gold DNA polymerase, 1× Buffer I with 1.5 mM MgCl2 (Thermo Fisher Scientific), 1 μM of each primer and 0.2 mM dNTP mixture (New England Biolabs, Ipswich, MA). Initially, the reaction was denatured at 95°C for 10 min, followed by 35 cycles of 15 s at 95°C, 30 s at 65°C and 30 s at 72°C. The final steps for amplification involved an incubation at 72°C for 1 min and cooling to 4°C. The amplified product was purified with AMPure XP beads (Beckman Coulter, Brea, CA) at a bead solution to sample ratio of 1.8, and then used for the next steps. The purified 40-μl dsDNA amplicon was mixed with a 10-μl reaction mixture containing 12.5 U λ exonuclease and 1× reaction buffer (New England Biolabs), and incubated at 37°C for 2 h for digestion of strands extended from the reverse primer. The reaction was stopped by heat inactivation at 80°C for 20 min. A total of 2.7 U of USER enzyme (New England Biolabs) in 1× λ exonuclease reaction buffer was added to the single-stranded product, followed by incubation at 37°C overnight. The final product was mixed with 3× volume of AMPure XP bead solution and 1× volume of isopropanol. Afterwards, the beads were washed twice with 90% ethanol and eluted in 20 μl of 10 mM Tris buffer. We used a Qubit ssDNA assay kit (Thermo Fisher Scientific) to quantify the purified product. Denaturing gel electrophoresis was performed using a Novex 15% TBE-Urea gel (Thermo Fisher Scientific) to confirm the size of the final product (Supplementary Fig. 3b).
In vitro guide RNA preparation. A pool of 8,336 gRNAs targeting 2,098 STRs was prepared from an array-synthesized oligonucleotide pool (Supplementary Data 3). The synthesized oligonucleotides consisted of four components: adapter, T7 promoter, target-specific and trans-activating CRISPR RNA (tracrRNA) regions. Because two separate pools targeting upstream or downstream regions of STRs were required, we added two different adapters according to their target orientation. Forward primers (5′-GAGCTTCGGTTCACGCAATG-3′ and 5′-CAAGCAGAAGACGGCATACGAGAT-3′) matching the adapter sequences and a reverse primer (5′-AAAGCACCGACTCGGTGCCACTTTTTCAAGTTGATAACGGACTAGCCTTATTTTAACTTGCTATTTCTAGCTCTAAAAC-3′) complementary to the tracrRNA sequence were synthesized by Integrated DNA Technologies and used for initial amplification. Supplementary Fig. 11 summarizes the preparation process for the gRNA pool from array-synthesized oligonucleotides. Two ng of input oligonucleotide pool was amplified in a 25-μl reaction mixture including 1× Kapa HiFi Hot Start Mastermix (KapaBiosystems, Woburn, MA) and 1 μM of each primer. The reaction was initially denatured at 95°C for 2 min, followed by 25 cycles of 20 s at 98°C, 15 s at 65°C and 15 s at 72°C. The final steps for amplification involved an incubation at 72°C for 1 min and cooling to 4°C. The amplified product was purified with AMPure XP beads at a bead solution to sample ratio of 1.8, and then used for the next steps. Two hundred ng of the purified products was used as a template for in vitro transcription using the MEGAscript T7 transcription kit (Thermo Fisher Scientific). After the transcription reaction completed, RNA products were purified using RNAClean XP beads (Beckman Coulter) at a bead solution to sample ratio of 3.0. The final gRNAs were quantified by the Qubit RNA High Sensitivity kit (Thermo Fisher Scientific). The RNA reagent kit on a LabChip GX (Perkin-Elmer) was used to confirm the product size per the manufacturer's protocol.
Targeted fragmentation and sequencing library preparation. For each library, 500 ng or 1 μg gDNA was incubated in a 25-μl reaction mixture including 100 nM Cas9 nuclease, 1× reaction buffer (New England Biolabs) and 100 nM gRNA pool. The reaction was incubated at 37°C overnight, and then heat-inactivated at 70°C for 10 min. The fragmented DNA was purified using AMPure XP beads at a bead solution to sample ratio of 1.8 and used for the next step. The KAPA HyperPlus library preparation kit (KapaBiosystems) was used for the following steps. The gRNA-cleaved DNA was subjected to random fragmentation with the KAPA enzyme mix; the incubation was at 37°C for 9 min, directly followed by incubation on ice. A-tailing enzyme mix was added to the final fragmentation products and the fragmented library was A-tailed with incubation at 65°C for 30 min. Because the random fragmentation creates blunt-ended breaks, the end-repair step was omitted. The DNA ligase mix, including 75 pmol annealed adapter, was added to the A-tailed library. The reaction volume was incubated at 20°C for 15 min. Afterwards, the library products were purified with AMPure XP beads at a bead solution to sample ratio of 0.8. For the amplification-free preparation, the purified library was used directly for STR-Seq with no additional steps.
For those samples where we used PCR amplification of the sequencing libraries, several additional steps were included. We prepared 50-μl reactions for PCR amplification. The reaction mixture contained 25% volume of the adapter annealing step product, 1 μM amplification primer and 1× Kapa HiFi Hot Start Mastermix (KapaBiosystems, Woburn, MA). The amplification primer is the top strand of the singleplex adapter (Supplementary Table 8).

Oligonucleotides and the sequencing library were heat denatured for 15 min at 95°C followed by incubation on ice. Afterwards, we diluted both components with ice-cold 4× Hybridization buffer (20× SSC, 0.2% Tween-20) to a final total concentration of 50-100 nM for the primer probes and 150 ng ml−1 for the sequencing library. Denatured primer probes (100 μl) and libraries (30 μl) were loaded in separate eight-tube strips. As described previously 34 , we created a custom cBot reagent plate, containing hybridization buffer 1 (pos.1: HT1 or 5× SSC, 0.05% Tween-20), Extension mix (pos.2: 20 U ml−1 Phusion (Thermo Scientific); 0.2 mM dNTP; 1× Phusion HF buffer), Wash buffer (pos.7: HT2 or 10 mM Tris buffer) and freshly prepared 0.1 N NaOH (pos.10).
The reagent plate and eight-tube strips containing the denatured primer probes were loaded onto the Illumina cBot. We set the 'Wash before Run' and 'Wash after Run' settings (that is, Configure in Menu) to Optional. In the RunConfig.xml file, we increased the number of cycles to 42 (that is, Amplification MaxNumCycles). Two different cBot programmes were used for the subsequent steps 34 . The first cBot programme (P1) automates the hybridization and extension of the primer probes to a subset of the P7 primers on the flow cell surface, followed by denaturation and removal of the original primer probe oligonucleotides. Finally, the denatured sequencing library is hybridized to the generated primer probe capture lawn on the flow cell in an overnight hybridization at 65°C.
After the completion of the P1 programme, the second cBot programme (P2) is started. When HiSeq High Output runs are performed, the standard Illumina cBot clustering reagent plate is used for this process. The P2 programme for the High Output mode performs a stringency wash of the hybridized library, followed by the standard Illumina extension and clustering protocol. For HiSeq Rapid Run mode, another custom cBot reagent plate was created. The plate contains hybridization buffer 1 (pos. -20). The P2 programme for the Rapid Run mode performs a stringency wash of the hybridized library (hybridization buffer 1 or high stringency buffer at 65°C), followed by extension and the initial five cycles of amplification. For runs performed using High Output mode, we used cBot clustering reagents and sequencing reagents (V3, Illumina) for 101-cycle paired-end reads. For runs performed using Rapid Run mode, we used v1 or v2 reagents for cBot sample loading, clustering and sequencing (Illumina) for 2 × 150-cycle or 2 × 250-cycle paired-end reads. For all the HiSeq experiments, image analysis and base calling were performed using the HCS 2.2.58 and RTA 1.18.64 software (Illumina). All sequence data have been deposited in the NCBI Sequence Read Archive (SRP071335).

STR genotyping. We developed an automated bioinformatics pipeline for STR-Seq. An overview of the STR genotyping process is illustrated in Supplementary Fig. 12.
The following five data files describing the STRs and associated STR-Seq probes are required as input to the processing steps: (i) str_probes.txt: containing the STR-Seq probe number, genomic coordinates for probe alignment, name of the targeted STR, and probe plus or minus orientation; (ii) str_info.txt: containing the STR name, repeat motif, STR genomic coordinates, minimum number of motif repeats required to consider the STR present in the region, and the 5′ and 3′ STR flanking sequences; (iii) 5prflank.bed: containing the STR name and 5′ flanking sequence coordinates in .bed format; (iv) 3prflank.bed: containing the STR name and 3′ flanking sequence coordinates in .bed format; (v) noSTR_plus5b.bed: target .bed coordinates for variant calling (excludes any STR motif regions). Selected STR metadata from these files is provided in Supplementary Data 4. The complete files are available for download at https://github.com/sgtc-stanford/STRSeq in the Resources folder.
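A minimal loader for the str_info.txt metadata might look like the sketch below; the exact column order is our assumption based on the description above, not a documented file specification.

```python
import csv

def load_str_info(path):
    """Read str_info.txt (tab-delimited) into a dict keyed by STR name.
    Column order is assumed from the description in the text."""
    info = {}
    with open(path) as fh:
        for row in csv.reader(fh, delimiter='\t'):
            name, motif, chrom, start, end, min_reps, flank5, flank3 = row[:8]
            info[name] = {'motif': motif,
                          'region': (chrom, int(start), int(end)),
                          'min_repeats': int(min_reps),
                          'flank5': flank5, 'flank3': flank3}
    return info
```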
Single-end alignment to the NCBI v37 reference genome was performed on the sequencing reads using bwa-mem 42 v.0.7.4 with default parameters. For the paired-end sequences, Read 1 is designated as R1 and Read 2 is designated as R2. Although it is not necessary to align Read 1 to the genome, subsequent processing is facilitated by having both Read 1 and Read 2 sequencing reads in bam format. We developed an indexing process to analyse the R2 sam-format alignment records and add a STR index tag. This involves adding a custom sam tag (ZP) to each read that aligns within 2 bases of an expected probe position. For example, if the R2 read matched an expected alignment position for probe number 123, the tag 'ZP:i:123' would be added to the sequence read. The alignment position, rather than the actual probe sequence, is used in this step for determining the probe match, thus delegating the mismatch tolerance to the alignment algorithm. R2 reads that do not match any expected probe position are discarded. The R1 mates of the remaining R2 reads are tagged with the same probe number as R2. This indexing method does not require R1 sequences to align to the genome; both aligned and unaligned reads are tagged based on alignment of their R2 mate to a designated primer probe sequence.
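A simplified version of this ZP-tagging step, written with pysam, is sketched below. The probe lookup table, strand handling and the propagation of tags to R1 mates are all simplified or omitted; only the within-2-bases matching logic follows the text.

```python
import pysam

def tag_r2_with_probe(r2_bam_path, probe_at, out_bam_path, tolerance=2):
    """Add a ZP tag to each R2 alignment whose start lies within
    `tolerance` bases of an expected probe position; drop the rest.
    `probe_at` maps (chrom, position) -> probe number."""
    with pysam.AlignmentFile(r2_bam_path) as bam, \
         pysam.AlignmentFile(out_bam_path, 'wb', template=bam) as out:
        for read in bam:
            if read.is_unmapped:
                continue
            for shift in range(-tolerance, tolerance + 1):
                probe = probe_at.get((read.reference_name,
                                      read.reference_start + shift))
                if probe is not None:
                    read.set_tag('ZP', probe, value_type='i')
                    out.write(read)   # e.g. adds 'ZP:i:123'
                    break
```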
The first step in evaluating reads for the presence of a STR is to determine whether both the expected 5′ and 3′ STR flanking sequences are present in R1. The exact expected flanking sequences are available in the str_info.txt file. To allow for mismatches in the flanking sequences, FreeBayes 36 and vcftools 43 were used to determine variant flanking sequences as follows: (i) variants were called using FreeBayes v0.9.21-19 with the --no-indels parameter; (ii) the bedtools intersectBed method was used to extract only the variants occurring in the 5′ and 3′ flanking regions described by the genomic coordinates in the 5prflank.bed and 3prflank.bed files; (iii) a simple custom python script (str_flank_alleles.py) was used to exclude any complex variants and to reformat the variant file for further processing.
As a result of the STR-indexing, each R1 sequence read is tagged with the probe number to which its R2 mate aligned. Each probe number is associated with a targeted STR in the str_probes.txt file, and the str_info.txt file provides the expected 5′ and 3′ flanking sequences for each STR. Using this information, as well as any flanking sequence variants called by FreeBayes and bedtools, a custom python script (str_lengths_R1ref.py) is used to identify R1 reads that include the complete 5′ and 3′ flanking sequences and can therefore be expected to encompass the entire STR.
The next step in this process is to determine whether the expected STR motif repeat is present between the flanking sequences. The str_info.txt file specifies the expected motif, as well as a minimum number of STR motif repeats that should be present between the flanking sequences to consider the STR present. Thus, for R1 reads identified as having an intact STR present, the read will comprise a 15-base 5′ flanking sequence, followed by a variable-length region containing at least a minimum number of STR motif repeats, followed by a 15-base 3′ flanking region. For these reads, the STR motif repeat count is calculated by dividing the number of bases in the variable-length region by the length of the STR motif. For example, if the variable-length region is 28 bases and the STR motif is GATA (tetramer), then the STR motif repeat count is 7.
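The flank search and repeat-count calculation can be expressed as a short function; this is a simplified sketch (exact flank matching only, whereas the pipeline also tolerates FreeBayes-called flank variants), and all names are ours.

```python
def str_repeat_count(read_seq, flank5, flank3, motif, min_repeats):
    """Return the motif repeat count for a read spanning an intact STR,
    or None if either flank or the minimum motif run is absent. The
    count is the variable-region length divided by the motif length."""
    i = read_seq.find(flank5)
    if i < 0:
        return None
    j = read_seq.find(flank3, i + len(flank5))
    if j < 0:
        return None
    variable = read_seq[i + len(flank5):j]
    if motif * min_repeats not in variable:
        return None
    return len(variable) // len(motif)

# Worked example from the text: a 28-base GATA region -> 7 repeats
read = 'AAGGT' + 'GATA' * 7 + 'CCTTA'
assert str_repeat_count(read, 'AAGGT', 'CCTTA', 'GATA', 3) == 7
```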
R1 reads encompassing entire STRs are counted and summarized by motif repeat count to provide a basis for determining heterozygous versus homozygous STR alleles. For example, if all of the reads for a given STR have a motif repeat count of seven, then the STR allele is clearly homozygous. However, there are often stutter artifacts introduced during the PCR amplification process that result in a percentage of reads with STR motif repeat counts bracketing the true allele. The distribution of repeat counts and the relative percentage of reads for each repeat count were used to differentiate heterozygous or homozygous STR alleles from stutter artifacts. The major STR allele is determined by counting the sequence reads with a specific STR motif repeat. Other STR motif repeats are evaluated based on their repeat count distance from the major allele. For example, if the major STR allele has a motif repeat count of 10, and another allele has a repeat count of 8, the distance from the major allele is −2. Depending on the distance from the major allele, a candidate secondary allele must pass a read threshold for the STR to be considered heterozygous. The read thresholds, as a fraction of the major allele reads, are: 0.35, 0.15, 0.45 and 0.02, corresponding to allelic distances of −1, +1, <−1 and >+1, respectively. Details of how the thresholds were determined are outlined below.
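In code, the distance-dependent thresholding could look like the following sketch; the handling of multiple candidate secondary alleles is our simplification.

```python
def call_str_alleles(read_counts):
    """Call [allele, allele] from {motif_repeat_count: n_reads} using the
    distance-dependent read thresholds quoted above (fractions of the
    major-allele read count)."""
    major = max(read_counts, key=read_counts.get)

    def threshold(distance):
        if distance == -1:
            return 0.35
        if distance == 1:
            return 0.15
        return 0.45 if distance < -1 else 0.02   # <-1 or >+1

    passing = [a for a, n in read_counts.items() if a != major
               and n >= threshold(a - major) * read_counts[major]]
    if not passing:
        return [major, major]                     # homozygous
    second = max(passing, key=read_counts.get)    # best-supported secondary
    return sorted([major, second])

# e.g. {10: 200, 8: 95}: allele 8 is at distance -2 and needs
# >= 0.45 * 200 = 90 reads, so the call is heterozygous [8, 10]
assert call_str_alleles({10: 200, 8: 95}) == [8, 10]
```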
Determination of threshold for secondary STR allele. Using the STR-Seq data from HGDP individuals that had also been genotyped by CE, thresholds for four different allelic distances relative to the major allele (−1, +1, <−1 and >+1) were determined to maximize sensitivity of detection of the secondary allele while maintaining the type II error below 0.01. Supplementary Fig. 13 shows receiver operating characteristic curves for all the categories, in which the determined thresholds are indicated. The thresholds (0.35, 0.15, 0.45 and 0.02) reflect the finding that PCR amplification-induced stutter is more likely to occur as a deletion of a motif than as an insertion, and additionally that longer motif repeats will more often be impacted by the sequencing read length being insufficient to capture the entire STR region plus flanking sequences. To test the null hypothesis (no secondary allele detection; that is, a homozygous call), a subset of the data having homozygous CE calls was used as controls. The distribution of the number of reads having the same allelic distance from the major allele generally showed a good separation between the cases and controls (Supplementary Fig. 14).
Comparison with CE microsatellite genotypes. When comparing STR-Seq with CE, many STRs demonstrated a consistent offset of one or more repeat units. This is due to annotation differences 17 . First, the start and end positions of STRs can vary because we adjusted those to ensure the flanking sequences were unique and free of high frequency SNPs in each targeted region. Second, some CE annotations include multiple STRs separated by non-repetitive sequences, for which STR-Seq targeted only the longest. Therefore, before comparing genotypes, the median of all the offsets for every locus was calculated and used to compare CE versus STR-Seq calls.
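A sketch of the per-locus offset correction follows; the data layout is assumed, with `ce_calls` and `strseq_calls` mapping (locus, sample) keys to allele sizes in base pairs.

```python
from collections import defaultdict
from statistics import median

def correct_annotation_offsets(ce_calls, strseq_calls):
    """Subtract the per-locus median offset between STR-Seq and CE allele
    sizes before comparing the two sets of genotype calls."""
    offsets = defaultdict(list)
    for key, strseq_size in strseq_calls.items():
        if key in ce_calls:
            offsets[key[0]].append(strseq_size - ce_calls[key])
    shift = {locus: median(vals) for locus, vals in offsets.items()}
    return {key: size - shift.get(key[0], 0)
            for key, size in strseq_calls.items()}
```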
Comparison with WGS-lobSTR genotypes. The lobSTR calls for all the HGDP samples were downloaded from: http://lobstr.teamerlich.org/validation-sets.html. A tab-delimited file (marshfield_cap_vs_lobstr_calls.tab) included all the genotype calls, STR-spanning coverages, and scores. We used the data for the comparisons after filtering for calls with a minimum coverage of 5× and a minimum lobSTR quality score of 0.9, which were shown to be in good correlation with CE calls (93% concordance and R² = 0.95). Notably, the genotyping accuracy for homozygous STRs (88.71%) is worse than that of heterozygous STRs (94.00%) when applying the filters (Table 1). Although the difference is smaller (83.43% versus 86.40%) without the filtering, we still decided to use the filters because of the poor overall genotype accuracy (85.68%).
STR-SNP haplotypes. Leveraging the target design process, a subset of the primer probes targeted regions in which there were SNPs proximal to STRs. An overview of STR-SNP haplotyping is illustrated in Supplementary Fig. 15.
The bamUtil (http://genome.sph.umich.edu/wiki/BamUtil) v0.1.13 trimBam method was used to mask the first 40 bases of R2 reads in the forward orientation, and the last 40 bases of R2 reads in the reverse orientation. This masking is performed so that the synthetic probe DNA, which by design matches the reference sequence, does not influence the variant discovery. FreeBayes v0.9.21-19 with quality and coverage filters was used to call R2 variants. The parameters used are: --pvar 0.05, --no-mnps, --no-complex, --min-mapping-quality 25, --min-base-quality 15, --min-coverage 3, --min-supporting-mapping-qsum 90, --min-supporting-allele-qsum 60. The coverage, mapping and base quality parameters were chosen to minimize type I errors when comparing our NA12878 variant calls to the Illumina platinum genomes (http://www.illumina.com/platinumgenomes) calls for the same sample (see 'Methods' section, SNP validation). Vcftools 43 v0.1.11 is then used to exclude variant calls in any locus that encompasses a STR repeat. This step is necessary because some STRs are in close proximity to each other and, especially with longer read lengths, the R2 read targeting one STR could include all or part of a repeat region for a different STR. Due to the inherent variability in these regions relative to the genome reference, it is not informative to consider these variants in STR-SNP phasing. This filtering is accomplished by providing a .bed file (noSTR_plus5b.bed) that excludes these STR repeat regions to the vcftools step. Additionally, in the vcftools filtering step, any SNPs that are within 6 bp of each other are removed, as are indels or variants that do not have a status of 'PASS' from FreeBayes. Parameters used are: --thin 6, --remove-indels, --remove-filtered-all and --bed. As a final quality filtering step, vcffilter (https://github.com/vcflib/vcflib#vcflib) is used to include only those reads with an average alternate base quality >8 (QUAL / AO > 8).
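These variant-calling and filtering steps can be scripted; the sketch below assembles the commands using exactly the parameters quoted above, with placeholder file names (the reference FASTA and input/output paths are illustrative, not from the published pipeline).

```python
import subprocess

freebayes = ['freebayes', '--pvar', '0.05', '--no-mnps', '--no-complex',
             '--min-mapping-quality', '25', '--min-base-quality', '15',
             '--min-coverage', '3', '--min-supporting-mapping-qsum', '90',
             '--min-supporting-allele-qsum', '60',
             '-f', 'hs37.fa', 'r2_masked.bam']          # placeholder files
vcftools = ['vcftools', '--vcf', 'r2_raw.vcf', '--thin', '6',
            '--remove-indels', '--remove-filtered-all',
            '--bed', 'noSTR_plus5b.bed', '--recode', '--out', 'r2_filtered']
vcffilter = ['vcffilter', '-f', 'QUAL / AO > 8', 'r2_filtered.recode.vcf']

with open('r2_raw.vcf', 'w') as raw:
    subprocess.run(freebayes, stdout=raw, check=True)
subprocess.run(vcftools, check=True)
with open('r2_final.vcf', 'w') as final:
    subprocess.run(vcffilter, stdout=final, check=True)
```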
Picard (http://broadinstitute.github.io/picard/) v1.97 FilterSamReads method with the FILTER -includeReadList parameter was used to select only R2 alignment sequences that paired with R1 sequences having intact microsatellites. Of those R2 alignment sequences, only the ones that cover one or more of the SNP positions determined in the previous section are extracted using a python script (pstr_extract_R2SNP.py). In this step, additional filtering is also performed to exclude any R2 reads for which the base at the SNP position is neither the reference nor the alternate allele as reported by FreeBayes, or if FreeBayes reports the allele frequency as 0. For example, if the reference base frequency is 0 and the alternate base frequency is 1, only the reads with the alternate base will continue to the next step. The resulting R2 sequences are merged with the STR metadata derived from the R1 mate sequence (pstr_merge_str_snv.py). Subsequently, a python script (pstr_genotyping.py) summarizes the read counts in the merged file by STR, SNP allele and STR motif repeat count. Finally, the script (pstr_haplotype_cts.py) is used to make the haplotype calls. For homozygous SNPs, the STR-SNP haplotypes are determined by evaluating allelic difference and read count thresholds as in the STR genotyping. If no STR allele passes the threshold test, the STR-SNP haplotype will be homozygous (for example, A-11); otherwise it will be heterozygous (for example, A-11, A-13). For heterozygous SNPs, the STR-SNP haplotype will be heterozygous, formed by associating each SNP base with its major STR repeat allele, simply by majority counting (for example, A-11, C-13).
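The majority-counting step for heterozygous SNPs reduces to a small function; the input layout (one (SNP base, repeat count) pair per informative read pair) is our assumption.

```python
from collections import defaultdict

def phase_snp_with_str(read_pairs):
    """For a heterozygous SNP, associate each SNP base with its majority
    STR repeat allele across read pairs covering both positions."""
    counts = defaultdict(lambda: defaultdict(int))
    for base, repeats in read_pairs:
        counts[base][repeats] += 1
    return {base: max(reps, key=reps.get) for base, reps in counts.items()}

# Example mirroring Fig. 1c: 114 C-bearing pairs with 11 repeats and
# 133 G-bearing pairs with 8 repeats -> haplotypes C-11 and G-8
pairs = [('C', 11)] * 114 + [('G', 8)] * 133
assert phase_snp_with_str(pairs) == {'C': 11, 'G': 8}
```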
SNP analysis and validation. To confirm the validity of our SNP calls, we used SNPs derived from the high coverage WGS of the HapMap sample NA12878 as a ground truth set. This sample was subject to Illumina sequencing at an average coverage of 200× on a HiSeq 2000 system, using an amplification-free library. The platinum genomes vcf file was downloaded from Illumina and filtered with vcftools using the following filters: --thin 6 --remove-filtered-all --remove-indels --recode --recode-INFO-all, and with --bed file filtering using the noSTR_plus5b.bed file for either Assay 1 or Assay 2, depending on the comparison being performed. The same filters were applied to the NA12878 vcf files generated by Assay 1 and Assay 2. Vcftools was then run with the --diff and --diff-sites parameters to compare the two vcf files. The STR-Seq vcf calls were tested with a combination of parameters: min-coverage = 3, 5, 8 or 10; min-base-quality = 10, 15 or 20; min-mapping-quality = 25 or 30. The parameters determined to minimize false positive SNP calls were at the lower to mid end of the parameters tested: min-coverage = 3, min-base-quality = 15, min-mapping-quality = 25. Additionally, to require slightly higher base and mapping quality for low coverage STRs, the following parameters were also used: min-supporting-mapping-qsum = 30 × min-coverage = 90, and min-supporting-allele-qsum = 20 × min-coverage = 60. This further reduced the putative false positive calls to 0 of 135 SNP calls for Assay 1, and 212 of 1535 SNP calls for Assay 2.
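The parameter search amounts to a small grid sweep; the evaluation function below is a hypothetical stand-in for re-running the vcftools comparison at each setting.

```python
from itertools import product

def false_positive_count(min_cov, min_bq, min_mq):
    """Hypothetical: re-filter the STR-Seq vcf with these thresholds and
    count calls absent from the platinum-genomes truth set."""
    raise NotImplementedError

grid = list(product([3, 5, 8, 10],    # min-coverage
                    [10, 15, 20],     # min-base-quality
                    [25, 30]))        # min-mapping-quality
# best = min(grid, key=lambda params: false_positive_count(*params))
```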
Validation of haplotypes. To determine the accuracy of phased STR-SNP haplotypes, we evaluated the Mendelian inheritance patterns of a family trio (NA12878-daughter, NA12891-father and NA12892-mother). The standard STR-Seq genotyping and haplotyping pipeline was first run for all three members of the trio. Next, the parents were assessed for the presence of variants found in the child. The process documented in the Phasing STRs with SNPs method section (pstr_extract_R2snv.py, pstr_merge_str_snv.py, pstr_genotyping.py, pstr_haplotype_cts.py) is rerun, using the variant calls for the child in place of the parent variant calls. The parent is considered heterozygous for the reference and variant if the secondary allele comprises at least 15% of the reads at that position. Though a heterozygous allele should theoretically account for 50% of the reads, if the SNP is phased with a longer STR allele, there will be a greater number of reads that truncate the STR region. Stutter in the simpler repeat motifs will distribute the read counts over a greater number of phased haplotypes. Once the parental haplotypes are called, the parent and child haplotype files are merged and compared to determine if the child haplotype can be explained by Mendelian inheritance of one phased allele from each parent. Final concordance percentages are based on coverage of at least 10 reads at a given SNP position for each member of the trio.
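The trio concordance test itself is simple set logic, sketched below with haplotypes encoded as (SNP base, repeat count) tuples; the data structures are ours.

```python
def mendelian_consistent(child_pair, father_haps, mother_haps):
    """True if the child's two phased STR-SNP haplotypes can be explained
    by inheriting one haplotype from each parent."""
    a, b = child_pair
    return ((a in father_haps and b in mother_haps) or
            (a in mother_haps and b in father_haps))

# e.g. child A-11 / C-13 with A-11 present in the father and C-13 in
# the mother is concordant with Mendelian inheritance
assert mendelian_consistent((('A', 11), ('C', 13)),
                            {('A', 11)}, {('C', 13), ('C', 11)})
```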
Statistics. Normality of distributions was tested by the Shapiro-Wilk test. According to the normality, we chose either non-parametric tests (Wilcoxon rank sum and signed rank tests) or t-tests; the tests were two-sided in both cases. Levene's test for homogeneity of variances was used (i) to check the equal variance assumption in independent two-sample tests (for example, the Wilcoxon rank sum test); and (ii) to simply compare two variances. P values <0.05 were considered statistically significant, and either the P value itself or an asterisk was used to indicate significance.
"year": 2017,
"sha1": "f316a504933907158ac91209e8c0c2980de0dfe9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms14291.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f316a504933907158ac91209e8c0c2980de0dfe9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Dissecting super-enhancer driven transcriptional dependencies reveals novel therapeutic strategies and targets for group 3 subtype medulloblastoma
Background: Medulloblastoma is the most common malignant pediatric brain tumor and group 3 subtype medulloblastoma (G3-MB) exhibits the worst prognosis. Super enhancers (SEs) are large clusters of enhancers that play important roles in cancer through transcriptional control of cell identity genes, oncogenes and tumor-dependent genes. Dissecting SE-driven transcriptional dependencies of cancer leads to identification of novel oncogenic mechanisms, therapeutic strategies and targets.

Methods: Integrative SE analyses of primary tissues and patient-derived tumor cell lines of G3-MB were performed to extract the conserved SE-associated gene signatures, and their oncogenic potentials were evaluated by gene expression, tumor-dependency and patient prognosis analyses. SE-associated subtype-specific upregulated tumor-dependent genes, which were revealed as members of the SE-driven core transcriptional regulatory network of G3-MB, were then subjected to functional validation and mechanistic investigation. SE-associated therapeutic potential was further explored by genetic or pharmaceutical targeting of SE complex components or SE-associated subtype-specific upregulated tumor-dependent genes, individually or in combination, and the underlying therapeutic mechanisms were also examined.

Results: The identified conserved SE-associated transcripts of G3-MB tissues and cell lines were enriched for subtype-specifically upregulated tumor-dependent genes, and MB patients harboring enrichment of those transcripts exhibited worse prognosis. Fourteen such conserved SE-associated G3-MB-specific upregulated tumor-dependent genes were identified to be members of the SE-driven core transcriptional regulatory network of G3-MB, including three well-recognized TFs (MYC, OTX2 and CRX) and eleven newly identified downstream effector genes (ARL4D, AUTS2, BMF, IGF2BP3, KIF21B, KLHL29, LRP8, MARS1, PSMB5, SDK2 and SSBP3). An OTX2-SE-ARL4D regulatory axis was further revealed to represent a subtype-specific tumor dependency and therapeutic target of G3-MB via contributing to maintaining cell cycle progression and inhibiting neural differentiation of tumor cells. Moreover, BET inhibition with CDK7 inhibition or proteasome inhibition, two combinatory strategies of targeting SE complex components (BRD4, CDK7) or the SE-associated effector gene (PSMB5), were shown to exhibit synergistic therapeutic effects against G3-MB via stronger suppression of SE-associated transcription or higher induction of ER stress, respectively.

Conclusions: Our study verifies the oncogenic role and therapeutic potential of SE-driven transcriptional dependencies of G3-MB, resulting in better understanding of its tumor biology and identification of novel SE-associated therapeutic strategies and targets.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13046-022-02506-y.
Background
Medulloblastoma (MB) is the most common malignant pediatric brain tumor and one of the leading causes of brain-tumor-related death in children. Current MB treatment includes surgical resection followed by radiation and intensive chemotherapy. The establishment of a consensus molecular subtyping standard is a milestone in developing targeted MB therapy [1]. There are four major subtypes of MB: WNT, SHH, Group 3 and Group 4, which carry distinct gene expression profiles, epigenetic landscapes, genetic mutations and clinical outcomes [2]. Among the four subtypes, group 3 subtype MB (G3-MB) exhibits the worst prognosis, as these tumors tend to carry amplification of MYC, to metastasize and to relapse following therapy [3]. Therefore, patients with G3-MB are in the most urgent need of more effective therapy.
Super-enhancers (SEs) are large proximal clusters of enhancers with extraordinary enrichment of H3K27Ac, transcription factors (TFs) and coactivators [4,5]. They exert oncogenic functions via driving transcription of cell identity genes, oncogenes and tumor-dependent genes in cancer cells [4,5]. Those genes can be categorized into upstream TFs and downstream effector genes, which together comprise the SE-driven core transcriptional regulatory network [5,6]. SE-associated TFs often self-regulate and mutually regulate the others, thus forming cross-regulated feed-forward loops called SE-driven core regulatory circuitry [5,6]. Dissecting SE-driven transcriptional dependency not only helps better understand the cellular origin and oncogenic mechanisms of cancer, but also facilitates identification of novel therapeutic strategies or targets. Targeting BRD4, a crucial component of the SE complex, with BET inhibitors (BETi) has been shown to effectively suppress SE-associated transcription and growth of many cancers in preclinical tests [7]. Moreover, SE-associated malignancy genes are often found to be more vulnerable to CDK7 inhibition, which targets the general transcription factor TFIIH, an integral component of the RNA polymerase II pre-initiation complex. CDK7 inhibitors (CDK7i) exhibit selective suppression of cancer cells via preferentially targeting SE-driven transcriptional addiction [7]. More importantly, BETi and CDK7i drugs have already entered human clinical trials for cancer therapy. Targeting the SE complex suppresses transcription of members of the SE-associated core transcriptional regulatory network preferentially and effectively [8,9]. This is extremely helpful for treating tumor types highly addicted to oncogenic master TFs, which are often difficult to target directly with small-molecule inhibitors. Alternatively, some SE-associated downstream tumor-dependent effector genes could serve as promising drug targets for developing novel cancer therapies [10,11].
There has been some progress in unveiling SE's oncogenic functions and the underlying molecular mechanisms in G3-MB. A study has reported the SE landscape of all four subtypes of MB based on epigenetic profiling of human tumor tissues, which reinforces the inter-subgroup tumor heterogeneity of MB via analyzing SE-driven core regulatory circuitry [12]. As expected, MYC and OTX2, the two well-established oncogenic driver TFs of G3-MB, are revealed as subtype-specific SE-associated oncogenes of G3-MB tumor tissues. Moreover, another study has reported CRX and NRL as two additional SE-associated subtype-specific tumor-dependent TFs. They are shown to be master regulators of the photoreceptor transcriptional program that represents a G3-MB-specific tumor dependency [13]. Furthermore, both BETi and CDK7i have been reported to effectively treat pre-clinical models of G3-MB [14][15][16][17], but their impacts on SE's oncogenic functions have not been evaluated yet. Notably, it has been shown that the enhancer landscape of primary tissues of G3-MB exhibits poor overlap and correlation with those of tumor cell lines [12]; therefore, whether the commonly used patient-derived primary G3-MB lines could serve as proper models for further investigating the oncogenic functions and therapeutic potential of SE-associated transcription remains to be determined. In this study, we aimed to perform integrative SE analyses of primary tissues and patient-derived tumor cell lines of G3-MB to verify the oncogenic role of SE-driven transcriptional dependencies and further explore their therapeutic potential in preclinical models of G3-MB.
Methods
Cell culture
293T cell line was obtained from Cell Bank of Chinese Academy of Science (Shanghai, China). D425, MB002, HD-MB03 and UW228 cell lines were kindly provided by Prof. Yoon-jae Cho (Oregon Health & Science University). D425, UW228 and 293T were cultured in DMEM (BI-01-052-1ACS, Biological Industries) supplemented with 10% FBS (F2442, Sigma). MB002 and HD-MB03 were cultured in Tumor Stem Media (TSM) as previously described [16]. Drosophila S2 cell line was cultured in Schneider's Insect Medium (S0146, Sigma) supplemented with 10% heat-inactivated FBS (S711-001S, Lonsera) in humidified air at 37 °C (Forma Reach-In CO2 Incubator, Model 3951, Thermo Fisher Scientific).
Lentivirus was generated by co-transfection of 293T cells with the above-mentioned plasmids and the packaging plasmids pMD2.G (Addgene plasmid # 12259) and psPAX2 (Addgene plasmid # 12260). Lentiviral particles were concentrated via the PEG method and resuspended in PBS for infection.
Cells were infected with the indicated lentivirus at a multiplicity of infection (MOI) of 1-5 for two days and subjected to puromycin selection for another three days. Then the cells were harvested and subjected to FACS analyses of cell proliferation, cell apoptosis and cell cycle, or seeded into 96-well plates in triplicate (5000 cells per well) for cell viability tests.
All shRNA and cas13d-sgRNA sequences were listed in supplementary Table 1.
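To make the infection step above concrete, the following is a minimal sketch of the arithmetic behind choosing a virus volume for a target MOI; the titer and cell numbers are hypothetical placeholders, not values from this study.

def virus_volume_ul(n_cells: int, moi: float, titer_tu_per_ml: float) -> float:
    # Volume (uL) of lentiviral stock delivering `moi` transducing units per cell.
    tu_needed = n_cells * moi  # total transducing units required
    return tu_needed / titer_tu_per_ml * 1000.0  # convert mL to uL

# Hypothetical example: infect 1e6 cells at MOI 5 with a 1e8 TU/mL stock.
print(f"{virus_volume_ul(1_000_000, 5, 1e8):.1f} uL")  # 50.0 uL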
Immunoblot assay
Whole cell lysates were obtained by lysing cells with RIPA buffer supplemented with Protease Inhibitor Cocktail Set III (539134, Calbiochem) and Phosphatase Inhibitor Cocktail 3 (P0044, Sigma). Protein concentration was determined with the Pierce BCA Protein Assay (23225, Thermo Fisher Scientific). Equal amounts of protein were loaded for immunoblot analysis. Antibodies used for immunostaining are listed in supplementary Table 2.
RNA extraction, reverse transcription and quantitative real-time PCR (RT-qPCR)
Total RNA was extracted using TRI Reagent (TR118, MRC) according to the manufacturer's instructions. Reverse transcription (RT) was performed with the High Capacity cDNA Reverse Transcription Kit (4368813, Thermo Fisher Scientific). Quantitative real-time PCR (qPCR) analysis was performed with the Fast Real-time PCR System (ABI, 7900HT) using FastStart Universal SYBR Green Master (ROX) (04913850001, Roche). Total cDNA of Drosophila S2 cells, serving as a spike-in reagent, was added to total cDNA at a mass ratio of 1:10. RT-qPCR assays were performed in triplicate and the data are presented as mean ± SD (standard deviation). The qPCR primers are listed in supplementary Table 3.
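For readers unfamiliar with how such spike-in-normalized RT-qPCR data are typically quantified, the sketch below implements the standard 2^-ddCt calculation with the Drosophila spike-in serving as the internal reference; the Ct values are hypothetical, and the authors' exact quantification scheme may differ.

from statistics import mean, stdev

def rel_expression(ct_target, ct_spikein, ct_target_ref, ct_spikein_ref):
    # Normalize the target Ct to the spike-in, then compare to the reference sample.
    d_ct = ct_target - ct_spikein
    d_ct_ref = ct_target_ref - ct_spikein_ref
    return 2 ** -(d_ct - d_ct_ref)

# Hypothetical triplicate Ct values for a treated sample vs. a control sample.
treated = [rel_expression(24.1, 18.0, 22.5, 18.1),
           rel_expression(24.3, 18.1, 22.5, 18.1),
           rel_expression(24.0, 17.9, 22.5, 18.1)]
print(f"mean = {mean(treated):.2f} +/- {stdev(treated):.2f} (SD)")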
Cell viability, CI, proliferation, apoptosis, and cell cycle assays
For cell viability measurement, cells were seeded into 96-well plates (5000 cells per well) and exposed to drug treatment or not. The viabilities of the seeded wells were then measured by CellTiter-Glo (G9243, Promega). Cell viability assays were performed in triplicate and the data are presented as means ± SD. For synergy investigation, the combination index (CI) was calculated with CompuSyn software (ComboSyn, Inc.). FACS analyses of cell proliferation, cell apoptosis and cell cycle were performed with the Click-iT EdU Alexa Fluor 647 Flow Cytometry Assay Kit (C10640, Invitrogen), Annexin V-FITC Apoptosis Detection Kit (556547, BD Biosciences), and Cell Cycle Staining Kit (CCS012, Multi Science), respectively. FACS data were acquired on BD Fortessa (BD Biosciences) or CytoFLEX (Beckman Coulter) FACS instruments and analyzed with FlowJo software (FlowJo, LLC).
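Since CompuSyn's output is referenced throughout the results, a minimal sketch of the underlying Chou-Talalay combination index may help; the median-effect parameters (Dm, m) below are hypothetical, and CompuSyn itself fits them from dose-response data.

def dose_for_effect(fa: float, dm: float, m: float) -> float:
    # Median-effect equation solved for dose: D = Dm * (fa / (1 - fa)) ** (1 / m).
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, dm1, m1, d2, dm2, m2):
    # CI = d1/Dx1 + d2/Dx2; CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism.
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# Hypothetical pair reaching 50% inhibition at 0.2 uM drug A plus 0.03 uM drug B.
ci = combination_index(0.5, d1=0.2, dm1=0.8, m1=1.2, d2=0.03, dm2=0.10, m2=1.0)
print(f"CI = {ci:.2f}")  # 0.55 here, i.e., synergy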
ChIP-qPCR
Chromatin immunoprecipitation (ChIP) coupled with qPCR (ChIP-qPCR) was performed as described previously [18]. Briefly, cells were fixed with 1% formaldehyde for 8 min at room temperature (RT) with rotation and quenched with 0.125 M glycine. The cells were digested with MNase (NEB, M0247S) and then sonicated for 5 cycles (20 s on/30 s off per cycle). The chromatin was then incubated with the indicated primary antibodies (H3K27Ac, Active Motif #39133, or OTX2, ProteinTech #13497-1-AP) with rotation overnight at 4 °C. The antibody-chromatin complex was immunoprecipitated with magnetic beads (26162, Thermo Fisher Scientific) with rotation at 4 °C for 4 h. The immunoprecipitated DNA was then extracted and analyzed by qPCR. ChIP-qPCR results for the indicated primary antibodies were calculated by normalization to ChIP input. ChIP-qPCR assays were performed in triplicate and the data are presented as mean ± SD. The ChIP-qPCR primers are listed in Supplementary table 4.
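As an aside, normalization to ChIP input is conventionally expressed as "percent of input"; the sketch below shows that calculation under the assumption of a 1% input fraction, with purely illustrative Ct values.

import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    # Adjust the input Ct for its dilution, then express the IP as % of input.
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical H3K27Ac ChIP at a putative SE region vs. a negative-control region.
print(f"SE region:      {percent_input(ct_ip=24.0, ct_input=26.0):.2f}% of input")
print(f"control region: {percent_input(ct_ip=29.5, ct_input=26.0):.2f}% of input")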
3C-PCR
Promoter-located constant and SE-located test 3C-PCR primers were designed for detecting DNA loop structures in the gene loci of ARL4D and PSMB5. Primers were named after their location and the initial of the gene symbol (for HindIII digestion), with the restriction enzyme also indicated in the case of MboI.
The 3C-PCR primers were listed in Supplementary table 5.
G3-MB tumor xenograft
All in vivo experimental procedures were approved by the Animal Care and Use Committee of Shanghai Jiao Tong University School of Medicine and performed according to its guidelines. For orthotopic inoculation, 8-10-week-old female nude mice (BALB/c nu/nu) (Lingchang, Shanghai) were each injected with 7.5 × 10^4 MB002 cells stably expressing GFP and firefly luciferase proteins (MB002-GFP-luc) (suspended in 3 μl PBS). Cells were stereotactically injected into the cerebellum of each nude mouse, 2.1 mm below the dura, at a location 2 mm right of the midline and 2 mm posterior to the bregma. The tumor burden of the mice was then monitored by an in vivo imaging system (IVIS). The mice were intraperitoneally injected with D-luciferin (75 mg/kg, P1043, Promega) and imaged with the Xenogen IVIS200 Imaging System (Perkin-Elmer). The total bioluminescence flux intensity (p/s) for each xenografted nude mouse was collected to represent tumor burden. The IVIS signal data are presented as mean ± SEM.
In vivo drug treatment
The orthotopic xenograft models were randomly divided into 4 groups and treated with vehicle, Marizomib (150 μg/kg, tail vein injection, once a week), JQ1 (50 mg/kg, intraperitoneal injection, twice a week) or their combination, respectively.
RNA-seq and ChIP-seq
D425 cells were treated with 0.1 μM THZ1 for 6 h or 1 μM JQ1 for 24 h, lysed in Trizol and sent to the company (Smartquerier Biomedicine, Shanghai, China) for RNA sequencing. For ChIP sequencing, D425, MB002 and HD-MB03 cells were harvested, fixed with 1% formaldehyde, snap-frozen and sent to the company (Romics, Shanghai, China) together with the H3K27Ac antibody (AM39133, Active Motif).
RNA-seq data processing
RNA-seq data were mapped to the cDNA sequences of GRCh38 by Salmon [20]. Mapped read counts were normalized using DESeq2 [21] followed by differential gene expression analysis.
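For orientation, the normalization step DESeq2 performs is the median-of-ratios method; the following is a minimal sketch of that idea on a toy count matrix, not a substitute for the DESeq2 implementation actually used here.

import numpy as np

def size_factors(counts: np.ndarray) -> np.ndarray:
    # counts: genes x samples; returns one size factor per sample.
    log_counts = np.log(counts.astype(float))
    log_counts[np.isinf(log_counts)] = np.nan        # ignore zero-count genes
    log_geo_means = np.nanmean(log_counts, axis=1)   # per-gene log geometric mean
    ratios = log_counts - log_geo_means[:, None]
    return np.exp(np.nanmedian(ratios, axis=0))      # per-sample median ratio

counts = np.array([[100, 210, 95],
                   [ 50, 105, 55],
                   [ 20,  40, 18],
                   [500, 980, 470]])
sf = size_factors(counts)
print(sf)            # roughly [0.81, 1.62, 0.76]
print(counts / sf)   # normalized counts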
ChIP-seq data processing
All ChIP-seq data sets were aligned to the human genome (build version: GRCh38/hg38) using Bowtie 2 (version 2.3.0) [22]. SAM files generated by Bowtie 2 were then converted to BAM files with samtools (version 1.9) [23]. Multi-mappers and duplicates were filtered out by sambamba (version 0.7.1) [24]. ChIP-seq peaks over the input sample were identified using a peak-finding algorithm, MACS2 (version 2.2.6) [25]. A q value of 0.05 was set as the enrichment threshold for all data sets. Active enhancers were defined as regions of ChIP-seq enrichment for the enhancer-associated histone modification H3K27Ac outside of promoters (excluding the ± 2.5 kb region flanking the promoter). In order to accurately capture dense clusters of enhancers, a stitching distance of 12.5 kb was allowed between separate H3K27Ac regions. Super-enhancers were identified and analyzed as described previously [26].
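The two SE-specific steps described above, stitching nearby H3K27Ac regions and separating super-enhancers from typical enhancers, can be illustrated with a minimal sketch; this mimics the ROSE-style cutoff (the point where the scaled rank-signal curve's slope passes 1) but is a simplification of the published procedure [26], and the peak coordinates are hypothetical.

import numpy as np

def stitch(peaks, max_gap=12_500):
    # peaks: (start, end, signal) tuples on one chromosome; merge gaps <= max_gap.
    stitched = []
    for start, end, sig in sorted(peaks):
        if stitched and start - stitched[-1][1] <= max_gap:
            s, e, total = stitched[-1]
            stitched[-1] = (s, max(e, end), total + sig)
        else:
            stitched.append((start, end, sig))
    return stitched

def super_enhancer_cutoff(signals):
    # Rank signals ascending, scale both axes to [0, 1], and return the signal
    # value where the curve's slope first exceeds 1; regions above are SEs.
    y = np.sort(np.asarray(signals, dtype=float))
    x = np.arange(len(y)) / (len(y) - 1)
    slope = np.gradient(y / y.max(), x)
    return y[np.argmax(slope > 1.0)]

peaks = [(1_000, 2_000, 5.0), (9_000, 12_000, 8.0), (40_000, 42_000, 3.0)]
print(stitch(peaks))  # first two peaks merge (7 kb gap <= 12.5 kb)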
Gene Set Variation Analysis (GSVA) and Gene Set Enrichment Analysis (GSEA)
Gene Set Variation Analysis (GSVA) [27] was performed on the data from the indicated public databases using the GSVA package in R. Gene Set Enrichment Analysis (GSEA) was performed according to the instructions on the website (http://www.broadinstitute.org/gsea/index.jsp) as previously described [28].
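GSEA's core statistic is a weighted running-sum enrichment score over a ranked gene list; the short sketch below computes that score for one gene set, using a toy ranking rather than any dataset from this study (significance in real GSEA additionally requires permutation testing).

def enrichment_score(ranked_genes, gene_set, scores, p=1.0):
    # ranked_genes: genes sorted by the ranking metric; scores: matching |metric| values.
    in_set = [g in gene_set for g in ranked_genes]
    n_miss = len(ranked_genes) - sum(in_set)
    norm_hit = sum(abs(s) ** p for s, h in zip(scores, in_set) if h)
    running, best = 0.0, 0.0
    for s, hit in zip(scores, in_set):
        running += (abs(s) ** p / norm_hit) if hit else (-1.0 / n_miss)
        if abs(running) > abs(best):
            best = running
    return best

genes = ["MYC", "OTX2", "GAPDH", "CRX", "ACTB", "TP53"]
stats = [3.2, 2.9, 1.5, 1.4, 0.8, 0.2]  # ranking metric, descending
print(enrichment_score(genes, {"MYC", "OTX2", "CRX"}, stats))  # ~0.81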
Data source
Gene expression and survival data were obtained from the R2 platform. H3K27Ac ChIP-seq data of the D283 and D341 lines were acquired from GSE92585.
CERES gene effect scores for evaluating tumor dependency were from the DepMap Public 20Q2 Achilles_gene_effect release on the DepMap platform (https://depmap.org/portal/). For tumor-dependency analysis, a CERES score of -0.1 was selected as the cutoff instead of the commonly used -0.5, so that some of the well-described oncogenes of G3-MB, such as CRX and NRL, would not be misidentified as dispensable based on their CERES scores in the tested G3-MB lines.
Statistical analyses
GraphPad Prism 6.0 software was applied for the statistical analysis. Significance was calculated by two-tailed Student's t test for data with two groups and One-way ANOVA for data with more than two groups. Two-way ANOVA was used to compare IVIS bioluminescence flux intensity. The statistical significance of Kaplan-Meier survival curves was determined by Log-rank (Mantel-Cox) test. The FDR value of GSEA was generated by GSEA software. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Results
Characterization of SE-associated gene signatures of patient-derived primary G3-MB lines
To characterize the SE landscape of patient-derived primary G3-MB lines, we performed chromatin immunoprecipitation with sequencing (ChIP-seq) using an H3K27Ac antibody and RNA-seq analyses in three human primary G3-MB cell lines (MB002, D425, HD-MB03). Previously published H3K27Ac ChIP-seq and RNA-seq data of two other G3-MB lines (D283 and D341) were also obtained for SE profiling [29]. The ROSE (Rank Ordering of Super-Enhancers) algorithm was used for calling SEs and SE-associated genes. As shown in Fig. 1a-b, MYC, OTX2 and CRX were found to be among the top-ranked SE-associated genes of G3-MB cell lines, as previously reported in primary G3-MB tissues [12,13]. We defined SE-associated genes recurrently identified in at least three G3-MB lines as the "cellular_SE-associated_gene_signature" (cSE) (Fig. 1c).
We also extracted SE-associated genes of G3-MB tissues from a previously published study [12] as the "tissue_SE-associated_gene_signature" (tSE). The 42 genes shared between cSE and tSE were defined as the "overlapping_SE-associated_gene_signature" (oSE) (Fig. 1d). Gene ontology (GO) analyses revealed they were all significantly enriched in biological processes related to nervous system development and transcription regulation (Fig. 1e). Next, we examined the oncogenic potential of the three SE-associated gene signatures of G3-MB. The "ALL" gene signature, which contained all measured genes in each dataset, was used as control. For gene expression analyses, four MB tissue transcriptomic datasets (Pomeroy [30], U133P2 [31][32][33], Pfister [31], Cavalli [34]) were obtained from the R2 website, with two of them (Pomeroy, U133P2) containing normal cerebellum control data. Compared to ALL, all three SE-associated gene signatures are enriched of significantly upregulated genes of G3-MB versus normal cerebellum (NC) or the other three MB subgroups, and the oSE exhibits the highest enrichment (Fig. 1f-g). For gene dependency analyses, CERES gene effect scores of four G3-MB lines (D283, D341, D425 and D458) calculated based on whole-genome CRISPR-Cas9 screening results were obtained from DepMap Public 20Q2 [35]. We found cSE and oSE, but not tSE, were enriched of tumor-dependent genes in all four G3-MB lines (Fig. 1h). To delineate the impact of SE-associated transcription of G3-MB on clinical outcome, we performed gene set variation analysis (GSVA) of cSE, tSE and oSE in the MB tissue transcriptomic datasets and found they are all significantly enriched in G3-MB versus NC or the other subgroups (Fig. 1i and S1a-b). Moreover, MB patients harboring higher enrichment of these SE-associated gene signatures consistently exhibit inferior survival (Fig. 1j and S1c-d). Together, these data demonstrated that the conserved SE-associated transcripts between primary tumor cell lines and tissues of G3-MB were enriched of subtype-specific upregulated tumor-dependent genes and that MB patients harboring enrichment of those transcripts exhibited worse prognosis, indicating these G3-MB lines could be used for further exploring the therapeutic potential of SE-associated transcription.
Establishment of SE-driven core transcriptional regulatory network of G3-MB
To decipher SE-associated subtype-specific oncogenic mechanisms of G3-MB, oSE genes were examined to identify members of the SE-driven core transcriptional regulatory network. The following criteria were utilized: (1) significantly upregulated in G3-MB versus NC (log2FC > 0.6, FDR < 0.05 in at least one dataset) or the other three MB subtypes (log2FC > 0.2, FDR < 0.05 in at least three datasets); (2) tumor-dependent (CERES score < -0.1 in at least two G3-MB lines). Fourteen such SE-associated genes were found to meet all these criteria and defined as "vital_SE-associated_gene_signature" (vSE), including the three well-established TFs (MYC, OTX2 and CRX) and eleven newly identified downstream effector genes of G3-MB (ARL4D, AUTS2, BMF, IGF2BP3, KIF21B, KLHL29, LRP8, MARS1, PSMB5, SDK2 and SSBP3) (Fig. 2a-b). Nine such effector genes were selected for tumor-dependency verification with RNA interference approach. MYC, OTX2 and CRX were tested in parallel as positive controls. MARS1 and PSMB5 were exempted from such tests based on their extremely low CERES scores in all analyzed G3-MB lines (Fig. 2b).
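The vSE selection logic above is mechanical enough to express directly; the sketch below encodes the two criteria on a toy record (field names and all numbers are hypothetical, standing in for the study's expression and DepMap tables).

def is_vse(gene):
    # Criterion 1: upregulated vs NC in >=1 dataset OR vs other subtypes in >=3.
    up_vs_nc = any(fc > 0.6 and fdr < 0.05 for fc, fdr in gene["nc_tests"])
    up_vs_subtypes = sum(fc > 0.2 and fdr < 0.05
                         for fc, fdr in gene["subtype_tests"]) >= 3
    # Criterion 2: CERES score < -0.1 in at least two G3-MB lines.
    dependent = sum(score < -0.1 for score in gene["ceres"]) >= 2
    return (up_vs_nc or up_vs_subtypes) and dependent

toy_gene = {"nc_tests": [(1.1, 0.001), (0.9, 0.010)],           # (log2FC, FDR) vs NC
            "subtype_tests": [(0.5, 0.01), (0.4, 0.02), (0.3, 0.03), (0.25, 0.04)],
            "ceres": [-0.45, -0.30, -0.05, -0.22]}               # four G3-MB lines
print(is_vse(toy_gene))  # True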
As shown in Fig. 2c-f, knockdown of these genes individually with two separate shRNAs markedly dampened the growth of MB002 and D425 cells in vitro, supporting their tumor dependency in G3-MB. Next, we measured the impact of knockdown of MYC, OTX2 or CRX individually on the transcript levels of the other thirteen vSE genes, to build up their regulatory connections within the SE-driven core transcriptional regulatory network of G3-MB (Fig. 2g-h and S2a-c). To be noted, we did not detect any consistent cross-regulated feed-forward loops of the three SE-associated TFs within the two G3-MB lines.
BET inhibitor works synergistically with CDK7 inhibitor on suppressing SE-driven core transcriptional regulatory network of G3-MB
Both BETi and CDK7i have been reported to effectively suppress growth of G3-MB in vitro and in vivo [14][15][16][17]. However, as well-recognized SE-targeted therapeutic strategies, their impacts on SE-associated transcription of G3-MB remain unexplored. To do so, we performed RNA-seq analyses of JQ1 (1 μM for 24 h) or THZ1 (0.1 μM for 6 h) treated D425 cells in parallel. As shown in Fig. 3a, THZ1 but not JQ1 induced remarkable genome-wide downregulation of active transcripts. Next, we examined how JQ1 or THZ1 affected SE-associated transcription in G3-MB cells. Gene set enrichment analysis (GSEA) results showed that both JQ1 and THZ1 could markedly suppress transcription of cSE, tSE, oSE or vSE signature (NES > 1, FDR ≤ 0.25). In contrast, they did not exhibit such significant inhibition on D425_TE signature, which is composed of bottom ranked 1099 typical enhancer (TE) associated genes (same number as SE-associated genes) of D425 cells (Fig. 3b). When we compared the inhibitory effects between THZ1 and JQ1 in treating D425 cells, we found all tested SE-associated gene signatures of G3-MB were more robustly downregulated by THZ1 than JQ1 (Fig. 3c). The stronger anti-SE activity of THZ1 versus JQ1 was further verified by RT-qPCR analysis of all fourteen vSE genes as well as immunoblot analysis of MYC and OTX2 proteins in both D425 and MB002 lines (Fig. 3d-e).
Notably, the combination of BETi and CDK7i has been shown before in other cancer types to exert synergistic inhibitory effects via stronger suppression of SE-associated oncogenic transcriptional activity [8,9,11]. Accordingly, we tested the in vitro combinatory therapy of JQ1 and THZ1 against D425 and MB002 and found their combination exhibited synergistic inhibitory effects against both G3-MB lines as well (Fig. 3f). THZ1 + JQ1 was more potent in suppressing cell proliferation and inducing cell apoptosis, thus resulting in stronger cytocidal effects (Fig. 3g-i). Moreover, our RT-qPCR results showed their combination induced stronger transcriptional downregulation of all fourteen vSE genes (Fig. 3j). Their combinatory inhibition of MYC and OTX2 at the protein level was further confirmed by immunoblot analysis (Fig. 3k). Taken together, these results illustrated the inhibitory effects of BETi or CDK7i individually on SE-associated transcription and further revealed their therapeutic synergy against G3-MB cells via stronger suppression of the SE-driven core transcriptional regulatory network, thus proving the therapeutic potential of treating G3-MB via targeting SE complex components.
BET inhibitor works synergistically with proteasome inhibitor on suppressing G3-MB
To further explore the therapeutic potential of SE-driven transcriptional dependencies in G3-MB, we evaluated the inhibitory effects of targeting SE complex components (BRD4 or CDK7) in combination with targeting SE-associated tumor-dependent effector genes. Within the identified fourteen members of G3-MB's SE-driven core transcriptional regulatory network, PSMB5 is the only one having clinically available small-molecule inhibitors. It encodes a β subunit of the 20S proteolytic core of the 26S proteasome complex [36], and has been shown to act as the direct target of various proteasome inhibitor (PSI) drugs including Bortezomib, Carfilzomib and Marizomib [37]. As shown in Fig. 4a-b, PSMB5 is significantly upregulated in G3-MB and its higher expression is associated with worse prognosis of MB patients. Based on the alignment of its SE regions across multiple G3-MB tissues and cell lines, D283 was selected as another G3-MB cell line model for PSMB5 investigation (Fig.S3a-b). Meanwhile, UW228, a human non-G3 MB cell line, was used as a control for the following SE analysis and validation. As shown in Fig.S3c-d, RNA-seq and ChIP-qPCR analyses validated the higher transcript levels of PSMB5 and the stronger enrichment of H3K27Ac at the conserved proximal SE regions of PSMB5 in multiple G3-MB lines versus UW228, respectively. We also performed 3C-PCR analysis and identified stronger chromatin looping and interaction between the SE region and the promoter region of PSMB5 in G3-MB cells versus UW228 cells (Fig.S3e-f). Moreover, CRISPR interference (CRISPRi) silencing of PSMB5's SE region resulted in significant downregulation of its transcript level and of the viability of G3-MB cells (Fig S3g-h). Together, these results proved the crucial role of PSMB5's SE in regulating its transcription in G3-MB.
To be noted, in line with our findings in D425 and MB002 cells, PSMB5 transcription was sensitive to BET inhibition or CDK7 inhibition, but not knockdown of MYC, OTX2 or CRX in D283 cells (Fig.S3i-l). Then we also performed single-cell transcriptomic analysis of G3-MB tumor cells using single-cell RNA-seq (scRNA-seq) data of MB primary tissues from a recent study [38]. As shown in Fig.S4, G3-MB tumor cells were found to exhibit stronger PSMB5 expression than tumor cells of the other three MB subtypes at single-cell level. Moreover, the tumor cell subpopulations expressing the highest level of PSMB5 (GP3-B1) are different from the ones of MYC, OTX2 or CRX (GP3-B2 for MYC, GP3-C2 for OTX2 and CRX), further supporting the involvement of unidentified SE-associated TFs in regulating PSMB5 transcription in G3-MB (Fig.S4).
Notably, the PSI drug Marizomib has been previously reported to exhibit in vitro inhibitory activity against G3-MB or Group 4 subtype MB (G4-MB) alone or in combination with radiation [39]. It has also been proved to effectively penetrate the blood-brain barrier (BBB) and has already entered human clinical trials for treating multiple brain cancers like DIPG and glioblastoma [40,41]. Therefore, we chose Marizomib for further drug combination testing. We performed in vitro combinatory therapy tests on multiple G3-MB lines of Marizomib with JQ1 or THZ1. Synergy was detected between Marizomib and JQ1 but not THZ1 (Fig. 4c and S5a-b). Like THZ1 + JQ1, Marizomib + JQ1 was also more effective in suppressing cell proliferation, inducing cell apoptosis and generating cytocidal effects (Fig. 4d and S5c). To be noted, the antitumor synergy between BETi and PSI has been reported in other tumor types to result from stronger activation of ER stress and the unfolded protein response (UPR) [42,43]. As shown in Fig. 4e, our RT-qPCR results revealed Marizomib + JQ1 induced stronger expression of seven representative UPR genes (BiP, CHOP, IRE1α, ATF3, ATF4, GADD34, HERPUD1), indicating a similar synergistic mechanism in treating G3-MB. We further tested the combination therapy of JQ1 and Marizomib in an orthotopic xenograft model of G3-MB to demonstrate its in vivo therapeutic efficacy. Nude mice orthotopically implanted with MB002-GFP-luc cells were treated with JQ1 (50 mg/kg, intraperitoneal injection, twice a week), Marizomib (150 μg/kg, intravenous injection, once a week) or their combination. As shown in Fig. 4f-h, while treatment with JQ1 or Marizomib alone at such low dosage did not generate an obvious therapeutic effect, their combination resulted in significantly slower tumor progression and longer survival of xenografted nude mice. None of these treatment conditions obviously affected mouse body weight (Fig.S5d).
ARL4D represents a novel subtype-specific tumor-dependency and therapeutic target of G3-MB
To demonstrate the proof of principle that novel therapeutic targets could be unveiled from the identified SE-driven core transcriptional regulatory network, ARL4D, one of the eleven newly identified downstream effector vSE genes, was selected for further investigation. ARL4D is a member of the ADP-ribosylation factor (ARF) family of proteins that belongs to the RAS superfamily of small GTPases. ARF family members, which usually function in cytoskeleton remodeling, cell cycle, cell migration and adhesion in normal tissues, are frequently found to be subverted by cancer for regulating proliferation, migration and invasion of tumor cells [44]. Even though ARL4D was previously identified as a glioma-associated antigen dependent on loss of PTEN and consequent activation of the Akt/mTOR pathway [45,46], its oncogenic roles and underlying molecular mechanisms have never been reported in any cancer type before. As shown in Fig. 5a-b, ARL4D is consistently and significantly upregulated in G3-MB versus NC or the other MB subtypes, and patients with higher ARL4D levels exhibit significantly worse prognosis. Single-cell transcriptomic analysis also showed that G3-MB tumor cells exhibited much stronger ARL4D expression than tumor cells of the other three MB subtypes (Fig.S6a). Moreover, GP3-C2, the photoreceptor-differentiated tumor cell cluster of G3-MB, exhibits the highest expression of ARL4D among all the identified tumor cell clusters (Fig.S6a). To be noted, CRX and OTX2, the potential upstream SE-associated TFs of ARL4D described in Fig. 2h, were found to be enriched in GP3-C2 as well (Fig.S6a). We then compared the expression and tumor dependency of ARL4D in D425 and MB002 versus UW228. Our results showed the two G3-MB lines expressed much higher levels of ARL4D than UW228 (Fig. 5c-d), and knockdown of ARL4D with shRNAs or cas13d-sgRNAs markedly suppressed growth of D425 and MB002 but not UW228 cells in vitro (Fig. 5e-g and S7a-c). ARL4D-loss-induced growth inhibition of G3-MB cells resulted from disruption of proliferation and induction of apoptosis of tumor cells (Fig. 5h-i and S7d-e). Furthermore, we showed knockdown of ARL4D caused marked growth disruption of the MB002-GFP-luc xenograft model in vivo and significantly prolonged the survival of xenografted mice (Fig. 5j-l). Taken together, our results verified ARL4D as a subtype-specific tumor dependency of G3-MB.
To explore the transcriptional regulation of ARL4D in G3-MB, we first examined the H3K27Ac ChIP-seq signals around the ARL4D genomic locus across multiple G3-MB tissues and cell lines. UW228 was analyzed in parallel as control. As shown in Fig. 6a-b, ARL4D represents a G3/G4-MB SE at the tumor tissue level and exhibits robustly elevated H3K27Ac signals in G3-MB lines versus UW228. As a result, it is identified as an SE-associated target gene in the D425, MB002 and D283 lines or a top-ranked TE-associated target gene in the HD-MB03 and D341 lines (Fig.S8a). In contrast, ARL4D only ranked 45.9% from the top among all TE-associated target genes in the UW228 line (Fig.S8a). After obtaining the commercially available ChIP-qualified anti-OTX2 antibody reported in a previous study [29], we performed ChIP-qPCR analyses to confirm the enrichment of H3K27Ac and OTX2 at ARL4D's SE regions in D425 and MB002 cells versus UW228 cells (Fig. 6b-c). Then we performed 3C-PCR analysis with two different restriction enzyme digestions, HindIII (Fig. 6b, d and S8b) and MboI (Fig.S8c-e), to demonstrate the chromatin looping between ARL4D's SE and promoter regions in G3-MB cells. We also performed CRISPRi analysis with pooled sgRNAs targeting ARL4D's SE regions, and the results showed CRISPRi silencing of ARL4D's SE could significantly impair its transcription and the growth of G3-MB cells (Fig. 6e). When ChIP-qPCR analysis with the anti-H3K27Ac antibody was performed on JQ1- or THZ1-treated MB002 cells to measure their impact on ARL4D's SE, we found JQ1 but not THZ1 could significantly reduce the enrichment of H3K27Ac signal at ARL4D's SE regions, supporting the direct targeting of the SE by BET inhibition (Fig. 6f). Furthermore, we measured the impact of OTX2 knockdown on the enrichment of H3K27Ac and OTX2 at ARL4D's SE regions in MB002 cells. As shown in Fig.S8f, while the binding of OTX2 was broadly abrogated, the H3K27Ac enrichment was partially impaired in only one of the tested regions, suggesting OTX2 might play a dominant role in this region of ARL4D's SE (Fig. 6f).
To dissect the molecular mechanism underlying ARL4D's tumor dependency in G3-MB, we performed RNA-seq analysis of MB002 cells stably expressing two separate shARL4D clones or a scramble control shRNA (Fig. 7a). The 630 commonly downregulated genes (log2FC < -1, FDR < 0.05) shared by the two shARL4D clones were found to be enriched in cell cycle-related biological processes, whereas the 75 commonly upregulated genes (log2FC > 0.6, FDR < 0.05) were enriched in neural cell differentiation and development-related biological processes (Fig. 7b-e). We then performed RT-qPCR verification of eight commonly downregulated cell cycle-related genes (AURKB, BUB1B, CDK1, CENPW, DUT, GINS2, ORC1, RRM1) and five commonly upregulated nervous system development-related genes (CPLX3, GUCA1C, STRA6, TULP1, ZNF385A) selected based on the RNA-seq data in MB002 and D425 cells upon knockdown of ARL4D (Fig. 7f-h). Furthermore, we showed loss of ARL4D caused cell cycle arrest at G2/M phase and significantly attenuated tumor-sphere formation in both D425 and MB002 lines (Fig. 7i-j and S8g-h). Collectively, our results demonstrated that ARL4D, which is required for maintaining cell cycle progression and inhibiting neural differentiation of tumor cells, represents a novel SE-associated subtype-specific tumor dependency and therapeutic target of G3-MB.
Discussion
In this study, we chose to deeply dissect SE-driven transcriptional dependencies of G3-MB to better understand its tumor biology and identify novel SE-associated therapeutic strategies or targets. Even though it has been reported before there are poor overlap and correlation between enhancer landscapes of primary tumor tissues and patient-derived tumor cell lines of MB [12], here we were able to show the conserved SE-associated oncogenic signature between primary tumor lines and tissues of G3-MB was enriched of subtype-specific upregulated tumor-dependent genes and MB patients harboring enrichment of those transcripts exhibited worse prognosis. We then built G3-MB's SE-driven core transcriptional regulatory network composed of fourteen such conserved SE-associated subtype-specific upregulated tumor-dependent genes, including three well-recognized TFs (MYC, OTX2, CRX) and eleven newly identified downstream effector genes (ARL4D, AUTS2, BMF, IGF2BP3, KIF21B, KLHL29, LRP8, MARS1, PSMB5, SDK2 and SSBP3). Moreover, we revealed BETi and CDK7i, which were previously reported to effectively suppress G3-MB [14][15][16][17], both exhibited anti-SE activity against G3-MB cells as they did in many other cancer types [7]. These results verified the oncogenic role of SE-driven transcriptional dependencies in G3-MB and supported us to further explore its therapeutic potential by searching for other SE-associated therapeutic strategies or targets.
There have been multiple effective anti-SE therapeutic strategies reported in various cancer types via targeting SE complex components and SE-associated effector genes individually or in combination [8][9][10][11]. We noticed that only PSMB5 within our identified SE-driven core transcriptional regulatory network of G3-MB has a targeted small-molecule inhibitor, and the PSMB5-targeted PSI drug Marizomib has been reported to effectively inhibit growth of G3/G4-MB alone or in combination with radiation in vitro [39]. Therefore, we evaluated the therapeutic effects of pairwise combinations of THZ1, JQ1 and Marizomib on multiple G3-MB lines, and synergy was detected between JQ1 and THZ1 or Marizomib, but not between THZ1 and Marizomib. Mechanistically, we revealed that the combinations of BETi with CDK7i or PSI exerted their synergistic inhibitory effects via stronger suppression of SE-associated transcription or higher activation of ER stress and the unfolded protein response (UPR), respectively, sharing very similar molecular mechanisms with previously reported cancer types [8,9,42,43]. Notably, PSI, CDK7i and BETi drugs have all entered human clinical trials for cancer therapy. More importantly, the PSI drug Marizomib and the BETi drug OTX015 have been shown to possess sufficient brain penetration capacity [40,41,47]. Therefore, our identified combinatory anti-SE strategies exhibit great potential for future clinical application.
It has been proven that novel therapeutic targets can be unveiled from SE-associated downstream effector genes [10,11]. Accordingly, ARL4D, a member of the newly identified SE-driven core transcriptional regulatory network of G3-MB with very little prior knowledge in cancer, was subjected to further investigation. Notably, small GTPase family members used to be considered undruggable, but plenty of new approaches or strategies have been developed in recent years for targeting GTPase proteins directly or indirectly via their modulators [44,48], thus making ARL4D a plausible therapeutic target for future drug development. As a result, an OTX2-SE-ARL4D regulatory axis is revealed to represent an important subtype-specific tumor dependency of G3-MB via contributing to maintaining cell cycle progression and repressing neural differentiation. As an oncogenic driver TF of G3-MB [49], OTX2 has been previously shown to promote tumor cell cycle progression via direct activation of multiple cell cycle genes and to inhibit neural differentiation via repressing transcription of various neurodevelopmental genes directly or indirectly [50][51][52][53]. Hence, our results illustrate ARL4D as another crucial downstream oncogenic effector of OTX2. On the other hand, CRX was also found to be a potential upstream TF of ARL4D in G3-MB (Fig. 2g-h). Even though this could not be experimentally verified due to the lack of a commercially available ChIP-qualified CRX antibody, our data are in line with a previous study that reports the oncogenic role of NRL and CRX in subtype-specific aberrant activation of the photoreceptor differentiation program [13].
Fig. 7 ARL4D is required for maintaining cell cycle progression and inhibiting neural differentiation of G3-MB cells. a Volcano plots showing significantly altered genes (mean FPKM of shSCR or shARL4D ≥ 1, log2_FC < -1 or > 0.6, FDR < 0.05) in MB002 cells upon ARL4D knockdown by two separate shRNA clones. Selected cell cycle and neural development related genes for further validation are shown. b-c Venn diagram analysis of significantly downregulated (b, mean FPKM of shSCR ≥ 1, log2_FC < -1, FDR < 0.05) or upregulated genes (c, mean FPKM of shARL4D ≥ 1, log2_FC > 0.6, FDR < 0.05) in MB002 cells upon ARL4D knockdown by two separate shRNA clones. d-e GO (BP, biological processes) and Pathway (KEGG and REACTOME) analyses of the shared downregulated (d) or upregulated genes (e) identified in (b) and (c), respectively. f Heatmap of gene expression levels of the selected cell cycle and neural development related genes that are significantly downregulated or upregulated upon ARL4D knockdown in MB002 cells by two separate shRNA clones. g-h RT-qPCR validation of the selected significantly differentially expressed cell cycle (g) or neural development (h) related genes tested in (f) upon ARL4D knockdown by two separate shRNA clones in MB002 and D425 cells, respectively. i FACS analysis of cell cycle of MB002 and D425 cells with ARL4D knocked down following infection with two separate clones of Cas13d-sgARL4D lentivirus. Tumor cells stably expressing Cas13d empty vector (EV) and uninfected tumor cells (Mock) were analyzed in parallel as control. j Limiting dilution analysis of the frequency of tumorsphere-forming cells of MB002 and D425 cells following ARL4D knockdown by two separate shRNA clones. All RT-qPCR assays were performed in triplicate and the data are presented as mean ± SD. Statistical significance was determined by one-way ANOVA (g-h).
In that study, ARL4D is identified as one of the 385 high-confidence SE-associated genes containing NRL and CRX motifs in proximity, and its transcript level is significantly downregulated in NRL-knockdown D458 cells. Moreover, our scRNA-seq data analysis also revealed ARL4D, OTX2 and CRX were all enriched in GP3-C2, the photoreceptor-differentiated tumor cell cluster of G3-MB defined in a recent single-cell transcriptomic study of MB [38]. Intriguingly, we noticed that the top significantly upregulated transcriptome signatures upon ARL4D knockdown in G3-MB cells were mostly related to photoreceptor differentiation as well (Fig. 7e), suggesting ARL4D might be required for restraining the aberrant activation of the photoreceptor differentiation program at a proper level. To be noted, ARL4D is also significantly upregulated in G4-MB versus normal cerebellum (Fig. 5a). Like GP3-C2, GP4-C2, the photoreceptor-differentiated tumor cell cluster of G4-MB, exhibits the highest expression of ARL4D and highly expresses OTX2 and CRX (Fig.S6a). Therefore, it would be interesting to test in the future whether ARL4D also works as an essential gene and is transcriptionally regulated by OTX2 and CRX in ARL4D-high G4-MB tumors, if proper tumor models become available.
Conclusion
In summary, this study utilizes the conserved SE-associated tumor-dependent gene signatures between primary tumor tissues and patient-derived tumor cell lines to dissect the oncogenic role and therapeutic potential of SE-driven transcriptional dependencies of G3-MB, resulting in better understanding of its tumor biology and identification of novel therapeutic strategies and targets. To be noted, other than ARL4D and PSMB5, the other newly identified SE-associated tumor-dependent effector genes of G3-MB are worthy of further investigation as well. For instance, the oncofetal RNA-binding protein IGF2BP3 was recently identified as an m6A reader [54]. The roles and related mechanisms of RNA epigenetic modifications like m6A in G3-MB remain unclear and deserve further investigation. | 2022-10-22T13:54:11.351Z | 2022-10-22T00:00:00.000 | {
"year": 2022,
"sha1": "da9601a61501d9b25c23653cd97b274471cd1391",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "905837c9c05b90359ef8d70f010ccb57948ee7cc",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219606006 | pes2o/s2orc | v3-fos-license | Effective Data Transmission and Control Based on Social Communication in Social Opportunistic Complex Networks
In opportunistic complex networks, information transmission between nodes inevitably occurs through broadcast. The purpose of broadcasting is to distribute data from source nodes to all nodes in the network. In opportunistic complex networks, it is mainly used for route discovery and releasing important notifications. However, when a large number of nodes in an opportunistic complex network transmit information at the same time, signal interference will inevitably occur. Therefore, we propose a low-latency broadcast algorithm for opportunistic complex networks based on successive interference cancellation techniques to reduce propagation delay. With this kind of algorithm, when the social network is broadcasting, the algorithm analyzes whether the conditions for successive interference cancellation are satisfied between the broadcast links in the assigned transmission time slice. If the conditions are met, the links are scheduled in the same time slice; interference avoidance scheduling is performed when the conditions are not met. Through comparison experiments with other classic algorithms for opportunistic complex networks, this method shows outstanding performance in reducing energy consumption and improving information transmission efficiency.
of other nodes within its interference range, and signal interference is an important factor that affects node broadcast delay [18]. The above situation results in a low transmission rate and high latency for social networks. However, it has been proved [19] that, due to the influence of signal interference, the minimum-delay broadcasting problem in wireless sensor networks is NP-hard, so it is difficult to design a polynomial-time optimal algorithm. In order to solve the problem of low-latency broadcasting in opportunistic complex networks, researchers have proposed many approximation algorithms, which continuously optimize the broadcast delay to approach the performance of the optimal algorithm [20][21][22]. Through these studies, the approximation ratio has been continuously reduced and performance continuously improved. Link scheduling strategies designed under the physical interference model can effectively improve the practical performance of broadcast algorithms [23,24]. However, it is then necessary to consider all data links transmitting simultaneously, which makes the problem more complicated and difficult, and the research challenges greater [25].
However, the abovementioned research work designed broadcast algorithms through interference avoidance scheduling technology [26]. That is, when there is signal interference between broadcast links, the transmissions between these nodes are distributed to different time slices to avoid mutual interference. Although interference avoidance scheduling can effectively reduce interference between signals, it reduces the number of broadcast links that can be transmitted concurrently, which is not conducive to reducing the broadcast delay. In this paper, a greedy broadcast algorithm GreedyA (greedy algorithm) is first proposed.
This algorithm mainly uses a breadth-first search tree rooted at the source node to stratify the nodes in the social network [27,28]. Then the GreedyA algorithm constructs the broadcast tree according to the rule that nodes covering more nodes are prioritized to become parent nodes. Finally, broadcast link scheduling is carried out around the broadcast tree using the ideas of layer-by-layer scheduling and interference avoidance scheduling. To better enhance transmission performance, this paper further proposes a broadcast algorithm (EDTC) based on the GreedyA algorithm, which increases the number of broadcast links that can be transmitted concurrently through successive interference cancellation techniques [29][30][31].
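As a rough illustration of the two GreedyA building blocks just described, the sketch below performs BFS layering from the source and then greedily assigns parents that cover the most still-uncovered children in the next layer; the toy graph and the details are our own simplification, not the authors' exact pseudocode.

def bfs_layers(adj, source):
    layers, seen, frontier = [], {source}, [source]
    while frontier:
        layers.append(frontier)
        nxt = [v for u in frontier for v in adj[u] if v not in seen]
        seen.update(nxt)
        frontier = list(dict.fromkeys(nxt))  # deduplicate, keep order
    return layers

def greedy_parents(adj, layer, next_layer):
    # Repeatedly pick the layer node covering the most uncovered children.
    uncovered, parents = set(next_layer), {}
    while uncovered:
        best = max(layer, key=lambda u: len(set(adj[u]) & uncovered))
        for v in set(adj[best]) & uncovered:
            parents[v] = best
        uncovered -= set(adj[best])
    return parents  # child -> parent edges of the broadcast tree

adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 4], 3: [1], 4: [1, 2]}
layers = bfs_layers(adj, 0)
print(layers)                                      # [[0], [1, 2], [3, 4]]
print(greedy_parents(adj, layers[1], layers[2]))   # {3: 1, 4: 1}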
The main contributions of this research are as follows: (1) A greedy broadcast algorithm, GreedyA. This algorithm uses the methods of layer-by-layer scheduling and interference avoidance scheduling to allocate the transmission time slices of broadcast links, which effectively solves the problem of signal interference, but the number of broadcast links that can be transmitted concurrently is limited.
(2) Combining successive interference cancellation techniques, another low-latency broadcast algorithm, EDTC, is proposed. This algorithm is based on the greedy broadcast algorithm GreedyA and makes full use of the benefits of successive interference cancellation to perform broadcast link scheduling. The EDTC algorithm improves the performance of information transmission in opportunistic complex networks by relaxing the link interference limit and increasing the number of broadcast links that can be transmitted concurrently. (3) Experiments show that using the EDTC algorithm for information transmission in opportunistic complex networks yields excellent performance in reducing energy consumption and improving data transmission efficiency.
This paper is divided into five parts. The first part introduces our research. In the second part, we briefly review the related work. The third part introduces the proposed algorithm model. In the fourth part, a simulation experiment is performed using the proposed algorithm, and the experimental results are analyzed. Section five summarizes this study.
Related Work
Over the years, research on routing algorithms has always been a hot issue in opportunistic complex networks. So far, many routing algorithms have been proposed. Among them, there are many algorithms applied to opportunistic complex networks. Several routing algorithms are described below.
Vahdat and Becker [32] proposed the epidemic routing algorithm, whose core idea is to use encountering nodes to transmit information. Lenando and Alrfaay [33] studied epidemic routing with social features to improve routing performance in opportunistic social networks. The core idea of this algorithm is to utilize the social activities of the nodes. Compared with the epidemic protocol, it can increase the transmission rate and reduce the transmission overhead, average delay, and average hop count. Mundur et al. [34] proposed an improvement based on epidemic routing protocols, whose core idea is to use the list of already-delivered messages to prevent future exchange of these messages. With this technology, buffer and network utilization are improved, which can increase the percentage of messages delivered with lower latency. To enhance epidemic routing in delay-tolerant networks from an energy perspective, Rango et al. [35] proposed a new strategy to dynamically adjust the n-parameter. This strategy considers the energy consumption and node degree of mobile nodes to increase or decrease the amount of data distributed in the network. With this strategy, when the remaining energy of a node is low, the scalability of the epidemic strategy is greatly improved and the n-parameter is increased. Conversely, when the mobile node has a good energy budget, more transmissions can be allowed and the n-parameter can be reduced to increase the transmission probability.
Spyropoulos et al. [36] proposed a simple solution called Spray and Wait, which manages to overcome the shortcomings of epidemic routing and other flooding-based schemes. The algorithm can avoid the performance dilemma inherent in utility-based schemes. In order to prevent the Spray and Wait algorithm from making random and blind forwarding decisions in delay-tolerant networks, Xue et al. [37] proposed a Spray and Wait algorithm based on average transfer probability in delay-tolerant networks. The core idea of the algorithm is to use transfer-probability prediction to forward messages. Huang et al. [38] proposed a Spray and Wait routing scheme based on location prediction in social networks. The main idea of the algorithm is that, in the waiting phase, each relay node uses polynomial interpolation to predict future positions. A copy of the message can be forwarded to another relay node closer to the target without waiting for the target node to be encountered. This solution makes full use of mobility information so that messages can be delivered to their destination faster. Jain et al. [39] proposed an enhanced fuzzy logic-based Spray and Wait routing protocol for delay-tolerant networks. The core idea of this algorithm is to achieve a high transfer rate by appropriately aggregating multiple message parameters. Experiments prove that, compared with other Spray and Wait routing protocol variants, the proposed buffer management scheme successfully achieves the goal of improving the delivery ratio and the overhead ratio. In flooding-based routing strategies, multiple copies generated by the source nodes are used for forwarding, which causes large information redundancy at network nodes and high dependence on network resources. For the purpose of reducing network resource consumption to a greater extent, prediction-based routing strategies have been proposed. Dhurandher et al. [40] proposed history-based routing prediction in opportunistic complex networks. The core idea of this algorithm is to use movement history to model the behavior of nodes; Markov predictors are used to make predictions and choose the best next node. Yu et al. [41] proposed a probabilistic routing algorithm based on contact time and message redundancy. This algorithm estimates the transit probability of a node based on the history of encounter information and contact time, and, by using a controlled replication scheme, messages can be transmitted in parallel on multiple paths.
Based on context-aware routing strategies, choosing the best transmission path through context-aware parameters obtained by intermediate nodes can greatly improve network performance. Wong [42] proposed the social relation opportunistic routing (SROR) algorithm. It is mainly based on social relations, social profiles, and social mobility patterns. The optimal relay node for routing data is calculated to maximize the delivery ratio. They proved that the proposed algorithm can achieve the highest data transmission rate with the highest routing efficiency in a social environment. Xu et al. [43] proposed an intelligent distributed routing algorithm based on social similarity. This algorithm can use the social environment information in the network to predict the mobility attributes of network nodes through a BP neural network. This routing decision fully considers the temporal and spatial attributes of mobile nodes. Through simulation experiments comparing with other existing well-known algorithms, they found that their algorithm can improve the network's ability to adapt to topology changes.
However, in social networks, due to the broadcast characteristics of wireless signals, there will be interference between wireless signals, which may prevent receiving nodes from receiving messages correctly. Related research on low-latency broadcast algorithms under the physical interference model is introduced below.
Yu et al. [24] studied the basic communication primitives in unstructured wireless networks under the physical interference model and the method for distributing broadcast messages from multiple nodes to the entire network with minimum delay. They proved that the proposed randomized distributed algorithm can be completed in O((D + n_b)log n + log^2 n) time slices with high probability, where D is the network diameter, n_b is the number of nodes that need to send broadcast messages, and n is the network size.
Tian et al. [44] proposed two global broadcast distributed deterministic algorithms based on the signal-to-interference-plus-noise ratio model. In these two algorithms, any node can become the source node, and the remaining nodes are divided into different layers according to their distance from the source node. Broadcast messages are transmitted layer by layer from the source node to all other nodes. For the first algorithm, by carefully selecting multiple subsets of the maximal independent set of each layer, more concurrent transmissions can be allowed. Its time complexity is O(D log n). For the second algorithm, the running time is improved by reducing the number of repeated broadcast messages in each layer, that is, eliminating redundant broadcasts in the same layer. Theoretical analysis shows that the time complexity of the second algorithm is O(DΔ log n).
However, none of the abovementioned research works have used successive interference cancellation techniques to design low-latency broadcasting algorithms. In opportunistic complex networks, different network nodes have different signal interferences. How to choose the forwarding nodes reasonably and increase the number of links that can be transmitted concurrently is very challenging. Successive interference cancellation technology can effectively decode the required signals from the disturbing signals. As a result, network performance is improved. However, as far as we know, there is no research work to apply successive interference cancellation technology to the broadcast algorithm to reduce the broadcast delay in social opportunistic complex networks, so it is necessary to conduct in-depth research.
Network Model.
This paper considers an opportunistic complex network with m nodes. Each sensor node uses a half-duplex omnidirectional antenna for wireless communication, and the maximum transmission distance is the same for all nodes. Based on the characteristics of wireless node communication, the network is modeled as a unit disk graph G0 = (V0, E0). The set V0 contains all nodes in the network, and the set E0 includes all edges in the network. There is an edge between two nodes if and only if the distance between them is less than or equal to the maximum transmission distance.
Assuming that the time between the nodes is synchronized, the scheduling time is divided into several time slices of the same length. Each node can finish sending or receiving a piece of data in a time slice.
This paper adopts the physical interference model as the signal interference model. That is, when a node's signal-to-interference-plus-noise ratio is not lower than a certain threshold, the node can correctly decode the required signal.
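To make the model concrete, the sketch below checks whether a single reception satisfies the SINR condition under a standard power-law path-loss form; the transmit power, path-loss exponent, noise level, and threshold are all hypothetical parameters, since the text does not fix specific values.

def sinr_ok(d_signal, d_interferers, p=1.0, alpha=3.0, noise=1e-6, beta=2.0):
    # Received power falls off as d**-alpha; decode iff SINR >= beta.
    signal = p * d_signal ** -alpha
    interference = sum(p * d ** -alpha for d in d_interferers)
    return signal / (noise + interference) >= beta

print(sinr_ok(d_signal=10.0, d_interferers=[40.0]))         # True
print(sinr_ok(d_signal=10.0, d_interferers=[40.0, 12.0]))   # False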
Problem Definition.
This paper studies the broadcast problem of opportunistic complex networks, in which the source node needs to transmit its data to all sensor nodes starting at time slice 1. When all sensor nodes have received the data from the source node, the broadcast task is completed. Broadcast scheduling is used to allocate the transmission time slice of each node. The goal of this paper is to minimize the latency by which all nodes receive the source node's data while ensuring that concurrently scheduled data transmissions do not interfere with each other.
Definition 1. BDPIM (Broadcast Delay under Physical Interference Model) problem. Given the wireless sensor network G0 = (V0, E0) and a source node j, under the physical interference model, design a broadcast algorithm such that all nodes receive the data from the source node with the lowest broadcast delay.
Existing work has proved that the BDPIM problem is NP-hard [16], so a polynomial-time optimal algorithm cannot be expected. In order to keep the time complexity of the algorithm low while optimizing its performance as much as possible, a polynomial-time low-latency broadcast algorithm needs to be designed.
In opportunistic complex networks, nodes use the "store-carry-forward" routing mode to implement inter-node communication. When we analyze the nodes in opportunistic complex networks, we must first know their characteristics. Therefore, we point out that all social complex network nodes meet the following conditions.
We can define that, at time s, the modularity of the community can be expressed as

U(s) = Σ_e [m_e/M − (l_e/(2M))^2], where l_e = Σ_{t∈e} l_t.  (1)

Among them, U is the modularity of the community, M is the total weight of the edges, m_e represents the total weight of edges within community e, and l_t expresses the total degree of node t in the community.
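For concreteness, the sketch below evaluates the modularity of equation (1) on a toy weighted graph; the graph and the two-community partition are illustrative only.

def modularity(edges, community_of):
    # edges: (u, v, weight) with u != v; community_of: node -> community id.
    M = sum(w for _, _, w in edges)  # total edge weight
    degree, internal = {}, {}
    for u, v, w in edges:
        degree[u] = degree.get(u, 0.0) + w
        degree[v] = degree.get(v, 0.0) + w
        if community_of[u] == community_of[v]:
            c = community_of[u]
            internal[c] = internal.get(c, 0.0) + w  # m_e
    communities = set(community_of.values())
    total_deg = {c: 0.0 for c in communities}
    for node, c in community_of.items():
        total_deg[c] += degree.get(node, 0.0)       # l_e = sum of member degrees
    return sum(internal.get(c, 0.0) / M - (total_deg[c] / (2 * M)) ** 2
               for c in communities)

edges = [("a", "b", 1.0), ("b", "c", 1.0), ("c", "a", 1.0), ("c", "d", 0.2),
         ("d", "e", 1.0), ("e", "f", 1.0), ("f", "d", 1.0)]
print(modularity(edges, {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}))  # ~0.47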
Condition 1.
If a node is in an opportunistic complex network, then as the weight of the edges it forms with other nodes in the network increases, the total edge weight m_e of the community will increase. The above situation will increase the relevance degree of the community in the opportunistic complex network.
Proof. At time s, the modularity of the community is U(s), and the change in modularity after time s + 1 can then be expressed accordingly. We can get Δm > 0, so we only need to prove that (2M^2 - 2M l_t - l_t Δm)(2M - l_t) > 0, from which the result follows. We also know that 2M is the sum of the degrees of the nodes in the network, and no community in the network has a degree sum greater than 2M. In summary, increasing the weight can increase the relevance to the community in the social opportunity network. That is to say, if a node belongs to an opportunistic complex network, its weight will affect the relevance of the community in the social opportunity network.
Condition 2.
In an opportunistic complex network, if node N_i meets the condition l_i l_j/(2M) < m_ij < Δm + (l_i l_j + l_t Δm + Δm^2)/(2(M + Δm)), then it will be separated from the community of node C_j.
Proof. We first assume that community E is divided into two subcommunities E_i and E_j, where nodes N_i and C_j are in different communities, so the total weight of the community decreases. Then, when the total weight has decreased, the corresponding change in modularity can be expressed accordingly. As mentioned above, if two nodes in communities E_i and E_j satisfy the condition l_i l_j/(2M) < m_ij < Δm + (l_i l_j + l_t Δm + Δm^2)/(2(M + Δm)), then the community has been divided. □
Condition 3.
For node N_i in the opportunistic network, if its edge is connected to node C_j and this edge is the only edge of node C_j, then when the weight between nodes N_i and C_j drops, node C_j will still not be separated from the community.
Proof. If community E is divided, then it must meet the following three conditions. Along with the weight change, the corresponding formula can be expressed and simplified step by step. In the end, we can get that l_i l_j/(2M) < m_ij < Δm + (l_i l_j + l_t Δm + Δm^2)/(2(M + Δm)) is false.
By the above proof, we can conclude that, for a node in an opportunistic complex network, if its edge is connected to another node and this edge is the only edge of that node, then when the weight between the two nodes drops, the node will still not be separated from the community.
In summary, if a node is in an opportunistic complex network, then it should meet the above conditions.
Basic Theory of Successive Interference Cancellation.
For the interference characteristics of wireless signals, traditional algorithms usually adopt the idea of interference-avoidance scheduling for broadcast link scheduling. Different from the traditional interference-avoidance method, this paper considers adopting successive interference cancellation technology to increase the number of broadcast links that can be transmitted concurrently, so as to improve the performance of information transmission. Successive interference cancellation is a multipacket reception technology that can decode the required data messages from conflicting signals and thereby effectively reduce signal interference in wireless networks. During the iterative detection at a receiving node with successive interference cancellation, the strongest signal is decoded while the other signals are treated as interference. The condition for a signal to meet the SIC requirement at the receiving node is that its signal-to-interference-plus-noise ratio (SINR) is not lower than a specific threshold. When the receiving node receives a conflict signal, it first tries to decode the component with the strongest signal strength, regarding the other transmitted signals as noise. If the decoding is successful, the receiving node removes that signal. Then, the receiving node attempts to decode the strongest of the remaining signals. This process continues until all signals are extracted or decoding fails. Through this process, all the information carried in the conflict signal can be decoded gradually, and the required information can then be obtained. This process is called SIC's sequential detection feature. Obviously, the decoding of weak signals requires the successful decoding of all stronger signals as a prerequisite. In other words, within the conflict signal, the weak signals are dependent on the strong signals.
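As a rough illustration of this sequential detection loop, the following sketch (with hypothetical names; the powers, noise, and threshold in the example are placeholders, not the paper's parameters) decodes the strongest signal first while treating the rest as noise:

```python
# Hypothetical sketch of SIC sequential detection: decode the strongest
# signal first (treating the rest as interference), subtract it, repeat.
def sic_decode(powers, noise, threshold):
    """powers: received signal powers; returns indices decoded, in order."""
    remaining = dict(enumerate(powers))
    decoded = []
    while remaining:
        idx = max(remaining, key=remaining.get)  # strongest remaining signal
        interference = sum(p for j, p in remaining.items() if j != idx)
        sinr = remaining[idx] / (noise + interference)
        if sinr < threshold:
            break  # decoding fails; weaker signals cannot be recovered either
        decoded.append(idx)
        del remaining[idx]  # remove the decoded signal from the mixture
    return decoded

# Example with two overlapping transmissions plus noise
print(sic_decode([8.0, 2.0], noise=0.5, threshold=2.0))  # -> [0, 1]
```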
Firstly, this article shows the association between nodes in the communication domain, as shown in Figure 1. The source nodes directly transmit information to each other, and then the source nodes transmit the information to other nodes in the communication domain through broadcasting. In a communication domain, we define 1 source node and m sensor nodes.
In this paper, the noise power is denoted by W_{N0}, the specific threshold for successive interference cancellation is χ_{SIC}, and the distance between two nodes A_1 and B_1 is d_{A1B1}. When two data links I_{A1B1} and I_{A2B2} transmit simultaneously, according to the constraints of successive interference cancellation techniques, whether node B_1 can decode the signal of A_1 falls into the following three cases. Case 1: I_{A1B1} and I_{A2B2} are transmitted at the same time, and node B_1 can still decode the signal of I_{A1B1} under the interference of I_{A2B2}. The following conditions must be met. Among them, W_{B1}(A_1) and W_{B1}(A_2) indicate the signal strengths that node B_1 receives from the two sending nodes A_1 and A_2. Different signal fading models yield different received signal strengths. For convenience of analysis, this paper uses the same signal fading model as in [24]; that is, the received strengths can be calculated from the transmission powers and the distances. Among them, W_{A1} and W_{A2} represent the signal transmission powers of nodes A_1 and A_2, respectively, and n represents the signal attenuation index, whose value ranges from 2 to 6. By modifying equations (14) and (15), the proposed algorithm can also be extended to other signal fading models. As shown in Figure 2, when the distance from node A_2 to node B_1 is farther than the distance from node A_2 to node A_1, the above conditions may be satisfied.
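The fading formula itself, referenced as equations (14) and (15), would under the common distance-power path-loss law (an assumption, not the paper's verbatim expression) read

\[ W_{B_1}(A_1) = \frac{W_{A_1}}{d_{A_1 B_1}^{\,n}}, \qquad W_{B_1}(A_2) = \frac{W_{A_2}}{d_{A_2 B_1}^{\,n}}, \qquad 2 \le n \le 6. \]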
Case 2: I_{A1B1} and I_{A2B2} are transmitted at the same time. At node B_1, the signal of node A_2 is stronger and satisfies the successive interference cancellation conditions. Therefore, the signal of node A_1 can first be treated as the interference signal: the signal of node A_2 is decoded first and then removed, thereby allowing the signal of node A_1 to be decoded. The following conditions need to be met. As shown in Figure 3, when the distance from node A_2 to node B_1 is closer than the distance from node A_2 to node A_1, the above conditions may be satisfied. Case 3: I_{A2B2} interferes with I_{A1B1}. I_{A1B1} and I_{A2B2} are transmitted at the same time, and neither signal, taken as the interference signal, satisfies the successive interference cancellation conditions. Node B_1 cannot decode the signal of node A_1; that is, the following two conditions are met at the same time. As shown in Figure 4, when the distance from node A_2 to node B_1 and the distance from node A_2 to node A_1 are similar, the above conditions may be satisfied.
For the first two cases, when the two links are transmitting data at the same time, node B_1 can still decode the signal of node A_1, while in the third case node B_1 cannot decode the signal of node A_1. Based on the above characteristics of successive interference cancellation techniques, this paper will specifically design different link scheduling strategies to maximize the number of data links that can be transmitted concurrently, thereby reducing the broadcast delay. In [45], we can see a good application of successive interference cancellation technology in single- and multiple-antenna OFDM systems. Among them, the SIC-OFDM system has been applied to various well-known network implementations, such as cellular, ad hoc, and infrastructure-based platforms. In [46], we can see the application of SIC in uplink massive MIMO systems. In that article, the authors study the energy efficiency (EE) when nonlinear successive interference cancellation (SIC) receivers are employed at the base stations (BSs) and provide an asymptotic analysis of the total transmit power with zero-forcing SIC. As shown by their numerical results, the EE using the SIC receiver may be significantly higher than the EE using the linear receiver.
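The three decodability cases above can be condensed into a small decision routine. The sketch below is a hypothetical paraphrase: the inequalities are the standard SINR tests implied by the text, not the paper's verbatim formulas, and the names are placeholders.

```python
# w_b1_a1 and w_b1_a2 are the received powers at B1 from A1 and A2.
def classify_at_b1(w_b1_a1, w_b1_a2, noise, chi_sic):
    direct = w_b1_a1 / (noise + w_b1_a2)        # decode A1, A2 treated as noise
    strong_first = w_b1_a2 / (noise + w_b1_a1)  # decode A2 first ...
    after_cancel = w_b1_a1 / noise              # ... then A1 once A2 is removed
    if direct >= chi_sic:
        return 1  # case 1: links independent, A1 decodable directly
    if strong_first >= chi_sic and after_cancel >= chi_sic:
        return 2  # case 2: A1 decodable after cancelling A2 (dependent links)
    return 3      # case 3: B1 cannot decode A1; the links interfere
```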
Greedy Broadcast Algorithm.
Although the theoretical delay of the greedy broadcast algorithm is usually high in the worst case, it can often obtain a good average delay in experiments. Therefore, this paper firstly designs a greedy broadcast algorithm (GreedyA) in this section. Then, on the basis of this algorithm, in the next part we design another low-latency broadcast algorithm by combining successive interference cancellation techniques. The pseudocode of the GreedyA algorithm is shown in Algorithm 1.
The GreedyA algorithm uses a layer-by-layer scheduling method for broadcast scheduling, so the first step is to construct a breadth-first search tree with the source node as the root node and divide all nodes into different layers. In order to effectively improve the performance of the broadcast algorithm and avoid signal interference, the parent-child node relationships must be determined. Wireless signals have broadcast characteristics, so, in order to reduce the number of broadcast forwarding nodes, the algorithm constructs the broadcast tree T_s layer by layer starting from the top layer, according to the rule that the node covering the largest number of nodes takes precedence as the parent node.
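A minimal sketch of this two-stage construction (BFS layering, then greedy parent selection by coverage), assuming the network is given as an adjacency mapping; the function and variable names are hypothetical and not taken from Algorithm 1:

```python
from collections import deque

def build_broadcast_tree(adj, source):
    # adj: {node: iterable of neighbor nodes}
    layer = {source: 0}
    queue = deque([source])
    while queue:                      # BFS layering from the source
        u = queue.popleft()
        for v in adj[u]:
            if v not in layer:
                layer[v] = layer[u] + 1
                queue.append(v)
    parent = {}
    for d in range(max(layer.values())):
        uppers = [x for x, lx in layer.items() if lx == d]
        lowers = {v for v, lv in layer.items() if lv == d + 1}
        while lowers:                 # node covering most children goes first
            best = max(uppers, key=lambda x: len(lowers & set(adj[x])))
            covered = lowers & set(adj[best])
            if not covered:
                break
            for v in covered:
                parent[v] = best
            lowers -= covered
    return layer, parent
```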
After the broadcast tree is constructed, the broadcast scheduling is performed layer by layer from the top layer.
That is, after the broadcast data transmission of the nodes in the previous layer is completed, the nodes of the next layer perform broadcast data transmission. In each layer, each node with child nodes is taken out in order, and then the transmission time slice of the node is allocated. The time slice allocation method starts from the initial scheduling time slice t_j of this layer and analyzes whether the node has signal interference with the nodes that have already been scheduled in the current time slice. The basis for judging the existence of signal interference is to assume that the node can perform broadcast data transmission at the same time as the nodes already scheduled in the current time slice and to analyze the receiving nodes of these senders. If the SINR of a receiving node is less than the specific threshold χ_{SIC}, the assumption is judged to be false; that is, there is signal interference between the node and the nodes already scheduled in the current time slice.
Because the GreedyA algorithm does not apply successive interference cancellation techniques, as long as the above situation occurs, the two links are considered to interfere with each other. The strategy adopted by the algorithm is interference-avoidance scheduling, that is, dividing the interfering links into different time slices for data transmission. M(y) in Algorithm 1 represents the set of all nodes whose transmission time slices are allocated to time slice y, where y is a positive integer. Moreover, |W| represents the number of nonempty elements M(y) contained in the set W, and it also represents the maximum transmission time slice of all scheduled nodes.
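The per-layer slot allocation with the sets M(y) can be sketched as follows; interferes is a placeholder for the SINR-based judgment described above, and the names are again hypothetical:

```python
def allocate_layer(senders, t_start, interferes):
    slots = {}          # slot index y -> set of senders, i.e. the sets M(y)
    for u in senders:
        t = t_start
        while t in slots and interferes(u, slots[t]):
            t += 1      # interference-avoidance: push to the next time slice
        slots.setdefault(t, set()).add(u)
    return slots
```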
In the following, the main idea of the GreedyA algorithm is introduced through the communication domain shown in Figure 1. First, all nodes are divided into 4 layers according to the concept of the breadth-first search tree. As shown in Figure 5, we can see the layer of each node in the communication domain.
Based on the layer of each node, we can build a breadth-first search tree, as shown in Figure 6.
According to the rule that the node covering the largest number of nodes takes precedence as the parent node, the algorithm constructs the broadcast tree T_s layer by layer starting from the top layer D_0. For example, the number of nodes covered by node n_3 in layer D_1 is the largest, so node n_3 is selected as the parent node of n_6, n_7, and n_8 in layer D_2. As shown in Figure 7, we can get the final broadcast tree. Through the idea of the GreedyA algorithm mentioned above, we list three situations that may encounter signal interference during transmission in community 1, community 2, and community 3. The next step of the GreedyA algorithm is to perform the broadcast link scheduling, and the method adopted is interference-avoidance scheduling. Because there is only one sending node, scheduling in the D_0 and D_1 layers is simple. In the D_1 layer, after node n_3 is dispatched a transmission time slice, the transmission time slice of node n_2 needs to be allocated. As shown in Figure 8, it is analyzed whether the signals of nodes n_3 and n_2 will affect their respective receiving nodes, that is, whether the SINR of their receiving nodes will fall below the specific threshold. If not affected, nodes n_3 and n_2 can be arranged to transmit in the same time slice; otherwise, node n_2 will be arranged to transmit in the next time slice to avoid signal interference.
In the D_2 layer, after node n_6 is dispatched a transmission time slice, the transmission time slices of nodes n_5 and n_7 need to be allocated. As shown in Figure 9, it is first analyzed whether the signals of nodes n_6 and n_5 will affect their respective receiving nodes, that is, whether the SINR of their receiving nodes will fall below the specific threshold. If not affected, nodes n_6 and n_5 can be arranged to transmit in the same time slice; otherwise, node n_5 will be arranged to transmit in the next time slice to avoid signal interference. It is then analyzed whether the signals of nodes n_6 and n_7 will affect their respective receiving nodes. If not affected, nodes n_6 and n_7 can be arranged to transmit in the same time slice; otherwise, node n_7 will be arranged to transmit in the next time slice to avoid signal interference. The transmission time relationship between nodes n_5 and n_7 is analyzed in the same way.
Broadcast Algorithm Based on Successive Interference Cancellation.
It is worth noting that the method adopted by the GreedyA algorithm in broadcast link scheduling is interference-avoidance scheduling. However, combining it with successive interference cancellation technology can increase the number of broadcast links that can be transmitted concurrently to a certain extent. In this section, we design another low-latency broadcast algorithm, EDTC, based on successive interference cancellation technology and the GreedyA algorithm, in order to further improve transmission performance. The steps of the EDTC algorithm and the GreedyA algorithm are basically the same; the main difference is the basis for judging whether the broadcast links interfere, that is, the 10th line in the pseudocode. For two broadcast links I_{A1B1} and I_{A2B2} not to interfere with each other, the judgment basis of the GreedyA algorithm is as follows: formula (7) requires that the SINR of the receiving nodes of the two links must be higher than the specific threshold. However, the judgment of the EDTC algorithm is based on the interference between the two links under the SIC condition, which satisfies the relation below, where result is the judgment outcome. Different from the GreedyA algorithm, when I_{A1B1} is independent of I_{A2B2} or I_{A1B1} depends on I_{A2B2}, the EDTC algorithm considers that I_{A1B1} does not interfere with I_{A2B2}. Therefore, the interference restriction is relaxed and the number of broadcast links that can be transmitted concurrently is increased, which is beneficial to improving the performance of information transmission.
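Reusing the classify_at_b1 sketch above, the relaxed EDTC judgment could look like this (again a hypothetical paraphrase, not the paper's pseudocode):

```python
# Links are considered non-interfering (result is true) in cases 1 and 2,
# i.e. when I_{A1B1} is independent of, or depends on, I_{A2B2};
# only case 3 forces interference-avoidance scheduling.
def edtc_non_interfering(w_b1_a1, w_b1_a2, noise, chi_sic):
    return classify_at_b1(w_b1_a1, w_b1_a2, noise, chi_sic) != 3
```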
In the example of the broadcast link scheduling shown in Figure 9, if the situation shown in Figure 10 exists, node n_6 is closer to node n_10; therefore, the GreedyA algorithm analyzes whether node n_6 will interfere with node n_10's reception of the signal from node n_5. As a result, the two sending nodes cannot perform data transmission at the same time and need interference-avoidance scheduling. However, after combining with successive interference cancellation technology, the EDTC algorithm analyzes that the receiving node n_10 can first decode the signal of node n_6 and then remove that signal. Thus, the signal of node n_5 is obtained, so simultaneous data transmission by the two sending nodes will not affect the normal reception of data by the receiving node.
In the practical application of the EDTC algorithm, there may be multiple concurrent broadcast links. Therefore, it is necessary to analyze the cumulative interference effects of multiple broadcast links; that is, the influence of multiple interference signals needs to be considered in the denominators of formulas (13) and (16).
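A plausible form of this multi-link generalization (assumed here, since the modified denominators are not written out) is

\[ \mathrm{SINR}_{B_1}(A_1) = \frac{W_{B_1}(A_1)}{W_{N_0} + \sum_{k \neq 1} W_{B_1}(A_k)} \geq \chi_{\mathrm{SIC}}, \]

with the sum running over all other concurrently scheduled senders.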
Theorem 1.
The GreedyA algorithm provides a correct broadcast scheduling scheme.
Proof. Consider the entire opportunistic complex network as a connected network. The GreedyA algorithm first stratifies all nodes, then constructs the broadcast tree, and finally uses the method of layer-by-layer scheduling for information transmission, with interference-avoidance scheduling allocating the transmission time slice of each sending node. Because the broadcast tree covers all nodes in the entire network, each node has a parent node; that is, broadcast data can be transmitted from the source node to all nodes in the network. Next, it is analyzed whether each forwarding node in the broadcast tree has had its parent node scheduled to broadcast data to it before it sends data itself. In addition, we consider whether the scheduled broadcast links will affect the correct reception of data due to signal interference.
According to the construction rules of the broadcast tree, the parent node of a forwarding node is one level above the node. Because, after the scheduling of each layer ends, the scheduling time slice is set to the maximum scheduled transmission time slice plus 1, the transmission time slice of the forwarding node must be larger than the transmission time slice of any node in the previous layer. The idea of interference-avoidance scheduling adopted by the GreedyA algorithm allocates the transmission time slices of the broadcast links; that is, when two broadcast links interfere with each other, the two broadcast links are allocated to different time slices for transmission. Therefore, the scheduled broadcast links will not affect the correct reception of data due to signal interference.
Theorem 2.
The EDTC algorithm provides a correct broadcast scheduling scheme.
Proof. The steps of the EDTC algorithm are similar to the steps of the GreedyA algorithm, and the only difference is the basis for judging whether a link is interfered with. According to the basic idea of successive interference cancellation, when two links have a dependency relationship, the receiving node can still correctly decode the required data signal. Therefore, the broadcast links scheduled by the EDTC algorithm can also realize the correct reception of data.
Theorem 3. The time complexity of the GreedyA algorithm is O(Yn^3).
Proof. The first step of the GreedyA algorithm is to construct a breadth-first search tree to stratify all nodes, and it needs to traverse each node, so the time overhead is O(n). The second step is to construct a broadcast tree, which determines the parent-child relationships between nodes. The rule of the largest number of covered nodes requires analyzing the neighbors of each node, so the time overhead is O(n^2). The final step is broadcast scheduling level by level. When performing broadcast scheduling at each layer, it is necessary to consider whether there is an interference relationship between different broadcast links; therefore, the time overhead of broadcast scheduling for each layer is O(n^3). Because the total number of layers is Y, the last step requires O(Yn^3) time. Therefore, summing the time overheads of all steps yields this theorem.
Theorem 4. The time complexity of the EDTC algorithm is O(Yn^3).
Proof. The steps of the EDTC algorithm and the GreedyA algorithm are basically the same; the main difference is the basis for judging interference. Therefore, this theorem can be established by a proof similar to that of Theorem 3.
Theorem 5. The spatial complexity of the GreedyA algorithm is O(n).
Proof. We analyze the amount of temporary storage space at each step of the GreedyA algorithm during the working process. The first step is to construct a breadth-first search tree and stratify the nodes; the information of the neighbor nodes needs to be stored during this process. Because a node has at most Φ neighbor nodes, the space complexity of this step is O(Φ). The second step is to construct a broadcast tree; the information of the neighbor nodes is also required to be stored, so the space complexity is O(Φ). The last step is to perform broadcast scheduling according to the broadcast tree; during this operation, the scheduled broadcast link information needs to be stored, and the space complexity is O(n). After synthesizing the space complexities of all steps, it can be obtained that the space complexity of the GreedyA algorithm is O(n).
Theorem 6. The space complexity of the EDTC algorithm is O(n).
Proof. The steps of the EDTC algorithm and the GreedyA algorithm are basically the same; the main difference is the basis for judging interference, and no extra storage space is required in the judgment process. Therefore, this theorem can be established through a proof similar to Theorem 5.
Results and Discussion
For the evaluation of the experimental performance of the EDTC algorithm, we test it with the Opportunistic Network Environment (ONE) simulator. In addition, in order to better judge its performance, EDTC is compared with four other algorithms: ICMT (information cache management and data transmission algorithm) [1], SECM (status estimation and cache management algorithm) [47], the Spray and Wait routing algorithm [36], and the GreedyA algorithm. The following is an introduction to the principles of these algorithms: (1) ICMT: this algorithm is an information cache management and transmission algorithm based on node data information caching. To achieve the purpose of adjusting the cache, the algorithm evaluates the transmission probability between nodes by identifying the neighbor nodes and then adjusts the cached data distribution to ensure that nodes with a higher transmission probability have priority access to information. Simultaneously, neighbor nodes share the caching tasks of the nodes and effectively distribute data [1]. (2) SECM: this algorithm evaluates the probability of nodes by establishing a method to identify surrounding neighbors, ensuring that nodes with a high probability obtain information first, thereby achieving the purpose of adjusting the cache [47]. (3) Spray and Wait: this algorithm successfully overcomes the shortcomings of epidemic routing and other flood-based schemes, avoiding the performance dilemma inherent in utility-based schemes [36]. (4) In addition, we compare the proposed EDTC algorithm with the GreedyA algorithm. The algorithm we propose builds on the GreedyA greedy broadcast algorithm and fully utilizes the benefits of successive interference cancellation to schedule the broadcast links in the social network, relaxing the limit of link interference and increasing the number of broadcast links that can be transmitted concurrently. By comparing the simulation data of the two algorithms, we can see more clearly the impact of successive interference cancellation technology on information transmission in the social network.
In simulation experiments, we mainly evaluate the performance of the algorithm based on the following parameters: (1) Delivery ratio: this parameter represents the possibility of selecting a relay node during transmission. The delivery ratio is an important indicator in the performance evaluation of opportunistic social networks, which directly reflects the performance of the data distribution mechanism.
(2) Overhead on average: this parameter expresses the network overhead of successfully transmitting information between a pair of nodes, referring to the ratio of the difference between the total number of forwarded message copies in the network and the total number of messages successfully delivered to the destination node, to the number of successfully delivered messages. (3) Energy consumption: this parameter represents the energy consumption during transmission. (4) End-to-end delay on average: this parameter represents the average delay in transmitting information between two nodes. The end-to-end network delay is the time from when a message is generated to when it is successfully delivered to the destination node.
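If this matches the standard definition used by the ONE simulator (an assumption consistent with the description above), the overhead on average is

\[ \text{overhead} = \frac{N_{\text{relayed}} - N_{\text{delivered}}}{N_{\text{delivered}}}, \]

where N_relayed is the total number of forwarded message copies and N_delivered the number of messages successfully delivered.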
During the simulation experiment, we set its parameters as follows: the maximum transmission distance is 35 m, the signal transmission power is 1300 W, the noise power is 1.5 W, and the signal attenuation index is 2. Other parameters for environment configuration are shown in Table 1.
In the EDTC algorithm, the size of the specific threshold will directly affect whether a receiving node can decode an obtained signal and remove it from the mixed signal, thereby receiving the required signal. Therefore, different specific thresholds will cause the EDTC algorithm to exhibit different performance during information transmission. In order to better explain the relationship between the specific threshold value and the transmission performance, this paper empirically sets the specific threshold value to 1.5, 2.5, 3.5, 4.5, and 5.5 for experiments.
First, through Figures 11(a)-11(d), we can find that no matter what the value of χ_{SIC} is, the delivery ratio, overhead on average, and end-to-end delay on average all tend to stabilize over time, while the energy consumption gradually increases with time. By comparing the experimental data curves obtained with different thresholds, it can be found that when the threshold value is too high or too low, the performance of information transmission in social networks is not ideal. For example, when χ_{SIC} = 1.5, the message delivery ratio is only maintained at about 0.56, and when χ_{SIC} = 5.5, the energy consumption is the highest.
This is because the criterion of whether the receiving node can decode a signal is determined by the specific threshold value. When the specific threshold is increased, the conditions for decoding interference signals become more and more severe, so the propagation performance of the EDTC algorithm first shows a certain upward trend. However, as the specific threshold continues to increase, the interference signals can no longer be effectively decoded, thereby reducing the performance of information transmission. When the specific threshold value is in the interval [2.5, 3.5], a better transmission result can be obtained. In order to better verify the performance of the EDTC algorithm in subsequent experiments, we set the specific threshold to 3.5.
Through the simulation experiments, we found the relationships between the four parameters to be evaluated and time, as shown in Figures 12-15.
Firstly, in Figure 12, we show the relationship between each algorithm's delivery ratio and the simulation time. We can clearly see from the figure that the lowest delivery ratios belong to the Spray and Wait routing algorithm (copy = 30) and SECM, whose values are 0.31-0.36 and 0.35-0.39, respectively. When the Spray and Wait routing algorithm (copy = 30) and the SECM algorithm are used for information transmission, because they use the flooding method to transmit information to the nodes in the community, a large amount of information is lost in the process. Especially for the Spray and Wait routing algorithm, we can see that when the number of copies is 30, its delivery ratio is significantly lower than when the number of copies is 15; the delivery ratio of the Spray and Wait routing algorithm (copy = 15) is 0.41-0.47. Therefore, the excessive copying of data is an important reason for the reduced delivery ratio of the Spray and Wait routing algorithm. In the ICMT algorithm, however, the transmission of all packets depends on the cooperation of the caches, which uses the cache space effectively and thereby increases the delivery ratio. It can be found from Figure 12 that the delivery ratio of the ICMT algorithm is 0.53-0.59, which is 147% more than SECM. As for the EDTC algorithm proposed in this paper, when information is transmitted, the nodes in the same community are used to construct a broadcast tree, and the appropriate next-hop node is selected for hierarchical propagation according to coverage. In this way, the reliability and value of the nodes are comprehensively considered, and the node with the highest comprehensive utility value is accurately found, which avoids signal interference when data propagate in parallel and also guarantees the probability of the message reaching the destination node. Because of this, the delivery ratio in social networks is greatly improved: its delivery ratio reaches 0.66, which is the highest among these algorithms. For the GreedyA algorithm, because it is limited by the number of broadcast links that can be transmitted concurrently, its delivery ratio is lower than that of the EDTC algorithm. However, because it uses layer-by-layer scheduling and interference-avoidance scheduling to allocate the transmission time slices of the broadcast links, it effectively solves the signal interference problem, so its delivery ratio is higher than those of the Spray and Wait and SECM algorithms, remaining at a relatively high level. From Figure 13, we can get the relationship between routing overhead and time. In the transmission algorithms of social networks, if a cache management method is lacking to maintain information transmission when a node meets its neighbors, a large amount of redundant data will be received into the cache, thus increasing the overhead. Therefore, in the simulation experiments, the overhead on average obtained by applying the SECM algorithm and the Spray and Wait algorithm is high. For the ICMT algorithm, the overall mean overhead is low and stable, but it increased in the early stage, reaching its peak at 2 hours, and then began to decline and stabilize.
When using the EDTC algorithm to transmit information, messages can always be delivered to the correct node through the constructed broadcast tree, thereby avoiding forwarding to unnecessary nodes and reducing the number of message copies forwarded in the network. These alleviate network congestion and effectively reduce network overhead. Like the EDTC algorithm, the overhead on average of the GreedyA algorithm is also maintained at an ideal level. Figure 14 shows the connection between energy consumption and simulation time. In the simulation experiments, all energy consumption increases with time. Among them, because the Spray and Wait routing algorithm needs to transmit information through spraying, its energy consumption is the largest; it can be seen from Figure 14 that, at the 6th hour, its energy consumption reached 540. The ICMT algorithm can exchange valid data through cooperative nodes and thus retains more energy to continue transmission, so its energy consumption can be maintained at a low level. The energy consumption obtained by applying the SECM algorithm is similar to that of the ICMT algorithm. Unlike flooding, which pursues the highest delivery ratio without regard to the cost of data transmission and therefore consumes a large amount of energy, the EDTC and GreedyA algorithms spread information by selecting the most appropriate nodes in a hierarchical manner, thereby ensuring the delivery ratio while keeping consumption low. In Figure 15, we show the relationship between mean transmission delay and time. It can be seen from Figure 15 that the transmission delay in the social network using the SECM algorithm is very high, mainly because the spray step is carried by the nodes. The ICMT algorithm uses a cooperative mechanism to achieve reasonable utilization of the node cache space; this reduces the propagation delay, but the effect is not very obvious. It is not difficult to find that the algorithm that best reduces the transmission delay among these algorithms is the Spray and Wait routing algorithm, mainly because neighbors and cooperative nodes are used when data are transferred between nodes, and a large number of shared caches can be used in transmission. According to the broadcast characteristics of wireless signals, nodes only need to broadcast once to transmit data to nodes within coverage. Our proposed EDTC algorithm constructs a broadcast tree according to the rule that the node covering the largest number of nodes has the highest priority as the parent node. In addition, signal interference is avoided through successive interference cancellation technology so as to increase the number of simultaneous transmissions. As shown in Figure 15, its mean transmission delay is maintained at a very low value. Compared with the EDTC algorithm, the GreedyA algorithm supports a smaller number of concurrently transmitted broadcast links, so its broadcast delay is higher than that of the EDTC algorithm.
In summary, in the four aspects of delivery ratio, overhead on average, energy consumption, and end-to-end delay on average, we can conclude that the overall performance of EDTC is better than that of the other four algorithms. However, for the end-to-end delay on average alone, the performance of EDTC is worse than that of the Spray and Wait routing algorithm.
In social networks, the node cache has a great impact on the transmission efficiency of the algorithm. Therefore, we conduct further simulation experiments to test the influence of the cache on these four parameters. The experimental data we obtained are shown in Figures 16-19.
In Figure 16, we show the relationship between delivery ratio and cache. When the buffer is small, the network cannot meet the message caching requirements due to the large number of message copies, and old messages are quickly squeezed out by new ones, causing a large number of packets to be dropped. Therefore, the delivery ratio of the five algorithms is not high when the cache is small. But with the increase in buffer capacity, the transmission rate increases to varying degrees. Among them, because the Spray and Wait routing algorithm (copy = 30) uses the flooding method to spread information and requires a large cache size, its delivery ratio is the lowest. For the EDTC and GreedyA algorithms, however, nodes only need to broadcast once to transmit data to nodes within coverage. Because the GreedyA algorithm is limited by the number of concurrent broadcast links, its delivery ratio is slightly lower; it can be seen from the experimental results that EDTC has the highest delivery ratio. As for the ICMT and Spray and Wait routing algorithms (copy = 15), in the process of transmitting information in opportunistic complex networks, with the increase in the node cache, the transport conditions improve and the delivery ratio increases by more than 50%. The association between routing overhead and cache is shown in Figure 17. In general, as the cache increases, the packet loss of nodes in the network becomes smaller, more messages can be successfully transmitted, and the overhead becomes lower and lower. As shown in Figure 17, the overhead of the SECM algorithm is the largest because much redundant data is injected by the nodes. As the node cache increases, the overhead on average of the Spray and Wait routing algorithm (copy = 30) is reduced from 310 to 136. Similarly, the overhead of the Spray and Wait routing algorithm (copy = 15) is reduced from 281 to 98. But their percentages of decline are much lower than those of the EDTC and ICMT algorithms. It can be found that, by increasing the node cache, the purpose of decreasing the routing overhead of the nodes in the community can be achieved. As the cache increases, the overhead on average of the GreedyA algorithm also decreases from 280 to 117. The relationship between energy consumption and cache is shown in Figure 18. The experimental results show that, with the increase in node cache, the energy consumption of the EDTC algorithm in opportunistic complex networks can be maintained at 48. GreedyA also performs well, with energy consumption stable at 76. The energy consumption of the other three algorithms increases significantly. Because the Spray and Wait routing algorithm uses the "Spray" method and all neighbors receive the data packets, its energy consumption is the largest. In the two experiments using the Spray and Wait routing algorithm, the variant with more copies (copy = 30) consumes more energy. For the ICMT algorithm, the effective buffer management method can cut down energy consumption, so its energy consumption is less than that of the Spray and Wait routing algorithm.
As shown in Figure 19, the relationship between the average delivery delay and the cache can be obtained. Figure 19 shows that the mean delay decreases as the node cache increases. For the SECM algorithm, the average delay is generally high, mainly because many probability calculation tasks are carried by the nodes. For the ICMT algorithm, as the node cache increases, the average delay decreases from 178 to 56. In the EDTC algorithm, the mean delay is almost stable at 41. This shows that the size of the cache has a small impact on the average delay of the EDTC algorithm. In addition, the delay generated by the greedy algorithm is higher, which shows that the multiple concurrent links enabled by successive interference cancellation techniques can reduce the delay of information transmission in social networks. For the Spray and Wait routing algorithms, the average delay decreases with the increase in the node cache; moreover, in the case of copy = 30, the mean delay is significantly lower than with copy = 15.
In actual social networks, the choice of information transfer model also has a great impact on the performance of the algorithm. So, in order to test our proposed EDTC algorithm, in the following simulation experiments we choose three different models to evaluate its performance. These three models are the SPMBM (shortest path map-based movement), random waypoint (RWP), and random walk (RM) models [48]. Figure 20 shows the change in delivery ratio of the EDTC algorithm under the different mobility models. It can be found from Figure 20 that the delivery ratio obtained under the SPMBM model is the highest, finally reaching 0.679. For the experiment using the RM model, the delivery ratio reached its peak of 0.583 at the 6th hour. In the case of the RWP model, the peak of 0.626 was reached at the 4th hour. In general, the delivery ratio of the EDTC algorithm under the SPMBM model is higher than under the RM and RWP models. Moreover, the RWP model performs better than the RM model.
We can see the variation of the EDTC algorithm's routing overhead under the different models in Figure 21. In general, in the process of applying the EDTC algorithm for information transmission under these three models, the overhead changes little with time and remains in the range of 108-117. This experiment shows that the choice of model has little effect on the overhead incurred by the EDTC algorithm for information transmission. Figure 22 shows the difference in energy consumption over time for the different models. In general, the energy consumption under all three models increases with time, and the difference between the three models is small. The results show that the EDTC algorithm has stable node information transmission performance and does not consume a lot of energy when the model changes. We can see the EDTC algorithm's average delay under the different models in Figure 23. The mean delay obtained under the three models floats between 183 and 216, which shows that the EDTC algorithm can transfer information effectively.
Conclusion
In this paper, we propose a data transmission and control algorithm for opportunistic complex networks based on successive interference cancellation techniques.
This algorithm performs broadcast link scheduling through layer-by-layer scheduling and interference-avoidance scheduling. The link scheduling strategy is designed in conjunction with successive interference cancellation techniques to increase the number of broadcast links that can be transmitted simultaneously. This effectively solves the problem of signal interference during data transmission in an opportunistic complex network. In the experimental stage, we compared the proposed algorithm with other classic algorithms for opportunistic complex networks. Experimental results show that the algorithm has good transmission ability. Relative to the GreedyA algorithm, because it increases the number of broadcast links that can be transmitted concurrently, it achieves better transmission performance. Subsequently, with the purpose of testing the impact of different mobility models on the EDTC algorithm, we tested the algorithm under the SPMBM, RWP, and RM models, respectively. Experimental results show that the algorithm performs well in each model. Applying this algorithm to information transmission in opportunistic complex networks can reduce node energy consumption and propagation delay and greatly improve data transmission efficiency. In future work, this method can be adapted to big data environments to solve transmission problems.
Data Availability
Data used to support the findings of this study are currently under embargo, while the research findings are commercialized. Requests for data, 12 months after publication of this article, will be considered by the corresponding author.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2020-06-11T09:07:35.708Z | 2020-06-08T00:00:00.000 | {
"year": 2020,
"sha1": "6a7a08930a77878d5f9ab77d7a9464f66b7cbccd",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/complexity/2020/3721579.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e1f24d739bf986ad4011a7a66423e3422416d2ad",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
225646412 | pes2o/s2orc | v3-fos-license | Length-weight Relationship and Condition Factor of Auchenoglanis biscutatus in Kiri Reservoir, Adamawa State, Nigeria
Length-weight relationship and condition factor of Auchenoglanis biscutatus obtained from Kiri Reservoir, Adamawa State, Nigeria from July to December 2014 were studied. The objective of this study was to determine the state of physiological wellbeing of the fish in the reservoir. A total of 60 Auchenoglanis biscutatus were collected from fishermen's catches and transported to the laboratory for analysis. Identification of the fish was done using the Babatunde and Raji method. The length-weight relationship and condition factor were calculated using the Froese method. The results of the length-weight analyses showed that all the fish exhibited a negative allometric growth pattern, with regression exponent b values less than 3. The analyses showed that the condition factors of Auchenoglanis biscutatus were greater than 1, implying that the fish were in good physiological condition.
INTRODUCTION
Fish, especially those of tropical and sub-tropical water systems, are known to experience growth fluctuations due to many factors such as environmental changes, food composition, competition within the food chain, and changes in the physical and chemical properties of the aquatic medium [1,2]. Growth in fish is in length as well as in bulk [3]. Bake and Sadiku [4] described growth as the change in absolute weight (energy content) or length of fish over time, while Adedeji and Araoye [1] summarized growth as a function of fish size. The study of growth patterns in fish has been based principally on length-weight relationships [5]. The length-weight relationship is widely used in fisheries biology for several purposes, such as estimating the mean weight of fish based on a known length [6,7]. Akintola et al. [8] posited that the length-weight relationship of aquatic organisms is an important predictor in fisheries biology.
The condition factor (K) is widely used in fisheries and fish biology studies. This factor is calculated from the relationship between the weight of a fish and its length to describe the "condition" of that individual fish [9]. Different values of K indicate the state of sexual maturity, the degree of food source availability, and the age and sex of some species [10]. The condition factor, which shows the degree of well-being of the fish in their habitat, is expressed by the 'coefficient of condition', also known as the length-weight factor. This factor is a measure of various ecological and biological factors such as degree of fitness, gonad development, and the suitability of the environment with respect to the feeding condition [11]. A higher condition factor value means that the fish has attained a better condition. The condition factor of fish can be affected by a number of factors such as stress, sex, season, availability of feed, and other water quality parameters [12]. This study is aimed at determining the state of physiological wellbeing of Auchenoglanis biscutatus in the reservoir.
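For reference, the standard Froese (2006) forms, which this study follows, are

\[ W = aL^{b}, \qquad \log_{10} W = \log_{10} a + b \log_{10} L, \qquad K = \frac{100\,W}{L^{3}}, \]

where W is the body weight (g), L the length (cm), a the intercept, b the growth exponent (b = 3 for isometric growth, b < 3 for negative allometry), and K Fulton's condition factor.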
Study Area
Kiri Village is located in Shelleng Local Government Area of Adamawa State. The village lies at latitude 9°40'47" N, longitude 12°0'51" E, in the southern part of Adamawa State. The reservoir resulted from a dam constructed on the River Gongola [13].
Sample Collection
Samples were collected from July to December 2014. A total of 60 Auchenoglanis biscutatus were collected from fishermen's catches and transported to the laboratory for analysis.
Sample Identification
The taxonomical key of fish by Babatunde and Raji [14] was used to identify the species.
Sampling Procedure
The length-weight relationship and condition factor of the fish were determined using the Froese (2006) method [9].
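As a minimal illustration of the Froese (2006) computations, the sketch below fits W = aL^b by log-log regression and evaluates Fulton's K; the length and weight arrays are illustrative placeholders, not the study's data.

```python
import numpy as np

length = np.array([12.5, 15.0, 18.2, 21.4, 25.0])  # total length, cm
weight = np.array([22.0, 36.5, 61.0, 95.0, 140.0])  # body weight, g

# log10(W) = log10(a) + b * log10(L); polyfit returns [slope, intercept]
b, log_a = np.polyfit(np.log10(length), np.log10(weight), 1)
a = 10 ** log_a
k = 100 * weight / length ** 3  # Fulton's condition factor per fish

print(f"a = {a:.4f}, b = {b:.3f}")            # b < 3 -> negative allometry
print(f"mean condition factor K = {k.mean():.2f}")
```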
Length-weight Determination
The results of the measurements of the total length (TL), standard length (SL) and body weight (BWT) of the fish examined are presented in Table 1.
Determination of the Condition Factor (K)
The average values of the condition factor of the fish (K) were greater than 1.
DISCUSSION
In this study, all the fish investigated exhibited a negative allometric growth pattern, with regression exponent b values less than 3. According to Adeyemi et al. [5], a negative allometric growth pattern in fish implies that the weight increases at a lower rate than the cube of the body length. The LWR is indicative of spatial and temporal variations related to water temperature, food availability, and reproductive activity [15]. The LWR parameters a and b are affected by several factors, including sex, gonad maturity, health status, season, habitat, nutrition, environmental conditions such as temperature and salinity, stomach fullness, general fish condition, differences in the length range of fish specimens, and the collection gear [9].
CONCLUSION
In conclusion, the results provide basic information on the length-weight relationship and condition factor of Auchenoglanis biscutatus in Kiri Reservoir. Auchenoglanis biscutatus from Kiri Reservoir exhibited a negative allometric growth pattern, and the condition factor showed that the species was in a good physiological state of well-being in the reservoir.
ACKNOWLEDGEMENT
Profound appreciation and gratitude go to those who assisted in the field and laboratory. | 2020-07-16T09:09:07.253Z | 2020-07-09T00:00:00.000 | {
"year": 2020,
"sha1": "8cd83bdda48edf6523f0370da2ea6d2cfedfbe64",
"oa_license": null,
"oa_url": "https://www.journalajriz.com/index.php/AJRIZ/article/download/30088/56461",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c1be82dc0f269cfdc948a7f3b8461424e11b1c54",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
248377379 | pes2o/s2orc | v3-fos-license | On a Fast Solution Strategy for a Surface-Wire Integral Formulation of the Anisotropic Forward Problem in Electroencephalography
This work focuses on a quasi-linear-in-complexity strategy for a hybrid surface-wire integral equation solver for the electroencephalography forward problem. The scheme exploits the block diagonally dominant structure of the wire self block (which models the neuronal fibers' self-interactions) and of the surface self block (which models the interface potentials). This structure leads to two Neumann iteration schemes further accelerated with adaptive integral methods. The resulting algorithm is linear in complexity up to logarithmic factors. Numerical results confirm the performance of the method in biomedically relevant scenarios.
I. INTRODUCTION
Several neuro-pathologies require precise functional brain imaging as part of their diagnostic or therapeutic protocols (see [1] and references therein). Among non-invasive strategies, high-resolution electroencephalography (HR-EEG), which images the electric activity of the brain from scalp potentials, is widely used. In HR-EEG the volume currents are retrieved from the measurements of the electric potentials on the scalp by solving the EEG inverse problem. Solving this inverse problem requires multiple solutions of the EEG forward problem (FP), in which the surface potential generated by a known current configuration is computed. Boundary element methods (BEMs) are very popular in the biomedical community for modeling the FP, and a recent hybrid formulation [1] has introduced the possibility of modeling white matter anisotropies by coupling a surface BEM with an integral equation for partially conducting wires. In this work we present a fast matrix-vector multiplication algorithm for this hybrid formulation which, by exploiting the block diagonal dominance structure (induced by the presence of neuronal fibers in the model) and coupling this matrix structure with adaptive integral methods, obtains a scheme with O(N log N) complexity in the N degrees of freedom. Theoretical and algorithmic considerations are complemented by numerical experiments showing the impact of the formulation on medical scenarios.
II. BACKGROUND AND NOTATION
Consider a sequence of nested compartments Ω_i, i = 1, ..., C, modeling the different layers of the head medium, characterized by homogeneous and isotropic conductivities σ_i. The boundary of each compartment is denoted by Γ_i. Following the strategy in [1], the inhomogeneity and anisotropy of the head medium is modeled by populating the white matter with wires of finite anisotropic conductivity contrast χ(r) = (σ_iw I − σ(r)) σ(r)^{-1} with respect to the background conductivity σ_iw of the white matter compartment. In this setting, the EEG FP consists in finding the electric potential φ(r) on the scalp surface Γ_C generated by a primary current J_p(r). To do so, the surface unknown ξ and the wire unknown J_eq (see [1] for their physical definitions) are expanded with discrete bases in which p_i and h_i are the 2D and 1D linear Lagrange interpolants, respectively. Following a Galerkin approach leads to the linear system (1) of N = N_s + N_w unknowns. Above, n̂ denotes the unit normal vector pointing outward from Γ_i, and G(r, r') = 1/(4π|r − r'|) is the static Green's function. Once (1) is solved, S and S_v can be applied to α to obtain the potential φ(r) on Γ_C.
III. A FAST SOLUTION STRATEGY
With respect to a standard integral formulation for isotropic media, corresponding to the top-left diagonal block in (1), the inclusion of the white matter anisotropy adds a new wire-wire diagonal block and two coupling blocks to the system. First, the new scheme decouples the surface and wire solutions via block-diagonal inversion and a Neumann series solution of the remainder. After separating diagonal and off-diagonal blocks, Z = Z_self + Z_coupl with Z_self = [Z_ss, 0; 0, Z_ww] and Z_coupl = [0, Z_sw; Z_ws, 0], we solve (1) as (I + Z_self^{-1} Z_coupl) α = Z_self^{-1} v via a Neumann series approach enabled by the block diagonal dominance, in cases of practical relevance, of the original matrix (i.e., for the spectral radius ρ_Z = ρ(Z_self^{-1} Z_coupl) < 1). Thus we have α = Σ_{k≥0} (−Z_self^{-1} Z_coupl)^k (Z_self^{-1} v), whose complexity reduces to that of the two inversions and of the multiplication by the coupling terms. The multiplication by the coupling terms can be done efficiently if a fast matrix-vector product algorithm is available; we have opted for an adaptive integral method (AIM) [2]. In other words, all kernel interactions in D*_ss, S_ww, S_ws, and D*_sw between all Gaussian quadrature points are interpolated on the same Cartesian grid with a number of nodes proportional to the number of unknowns N and handled via FFT in O(N log N) complexity. As is standard in AIM [2], a near-field precorrection is required for all kernels: a generic D* and S (for surface, wire, and off-diagonal couplings) are written as D* = D*_near − D̃*_near + Φ_p Λ^T g_D Λ Φ_f and S = S_near − S̃_near + Φ_p Λ^T g_S Λ Φ_f, where D*_near and S_near are the uncompressed near fields, D̃*_near and S̃_near are the FFT precorrections, Λ is unique for every product and interpolates the quadrature points, and Φ_p and Φ_f map quadrature points to basis functions; all these matrices are sparse. The FFT is applied to the Toeplitz matrices g_S and g_D which, because of the translation invariance of all Green's functions involved, require O(N) memory storage. Since the double-layer kernel is n̂ · ∇G(r, r') = n̂ · (r − r')/(4π|r − r'|^3), the product of g_D with a vector is split into three scalar components.
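A minimal sketch of the outer Neumann iteration described above, assuming callables solve_self (fast application of Z_self^{-1}) and apply_coupl (AIM/FFT-accelerated product with Z_coupl); both names are placeholders, not the paper's code:

```python
import numpy as np

# Accumulates alpha = sum_k (-Z_self^{-1} Z_coupl)^k (Z_self^{-1} v),
# valid when the spectral radius rho_Z < 1.
def neumann_solve(solve_self, apply_coupl, v, tol=1e-3, max_terms=50):
    term = solve_self(v)          # k = 0 term
    alpha = term.copy()
    for _ in range(max_terms):
        term = -solve_self(apply_coupl(term))   # next term of the series
        alpha += term
        if np.linalg.norm(term) <= tol * np.linalg.norm(alpha):
            break
    return alpha
```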
Since the Z_ss block corresponds to the classical homogeneous multilayer BEM formulation, once a fast matrix-vector product algorithm is available it can be inverted iteratively with standard techniques (see [1] and references therein). Regarding Z_ww, the near-field kernel interactions are extracted with an octree and the resulting sparse matrix N is used as a preconditioner of the linear system Z_ww x = b. The near-field dominance of Z_ww (due to the electric current flowing along the fibers, i.e., ρ_w = ρ(N^{-1}(Z_ww − N)) < 1) enables a second use of a Neumann series, from which x = Σ_{k≥0} (−N^{-1}(Z_ww − N))^k (N^{-1} b).
IV. NUMERICAL RESULTS
The favorable complexity scaling of the proposed scheme has been verified on a set of canonical geometries composed of spherical surfaces and orthogonal brain fibers. The total timings are reported in Fig. 1 and clearly confirm that the proposed scheme is, up to logarithmic factors, linear in complexity. The relevance of our fast solution strategy for real-case scenarios has been studied on a realistic head model obtained from magnetic resonance imaging (MRI) data, which includes white matter neuronal fibers with a tangential anisotropic conductivity of 1.3 S/m and four layers (gray matter, cerebrospinal fluid, skull, scalp) with conductivities 0.13 S/m, 1.79 S/m, 0.01 S/m, and 0.43 S/m, respectively. The obtained current on the neuronal fibers is shown in Fig. 2. For this problem the radius of the fibers is chosen to match a total volume of 450 mm^3. The total number of unknowns is 63 922. The two spectral radii are ρ_Z = 0.439 and ρ_w = 0.799, both less than one, thus allowing the Neumann strategy. For this experiment we have compared in Table I the method proposed in this work with the uncompressed solution. In both cases the tolerance of the iterative schemes has been set to 10^{-3}, and the results show the advantage of the new scheme. | 2022-04-26T06:48:26.677Z | 2022-04-25T00:00:00.000 | {
"year": 2022,
"sha1": "74dc9e6492b7e13089649fbfc2b3b97057817b63",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2204.11491",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "74dc9e6492b7e13089649fbfc2b3b97057817b63",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
236659064 | pes2o/s2orc | v3-fos-license | Evolution of rattling particles in deviatoric shear deformation of granular material
Granular material such as clean sand in geotechnical engineering is characterized by structured internal deformation patterns and some interesting particle arrangement patterns. This study focuses on the evolution of the fraction of rattling particles under deviatoric deformation until the critical state. Numerical simulations using the discrete element method reveal the presence of rattling particles (with zero or only one contact with neighbouring particles) even in a very dense packing system. The results show that the initial fraction of rattling particles depends on sample density and particle size distribution. With the increase of deviator strain, the number and volume fractions of rattling particles gradually approach a steady critical state from either a loose or a dense starting point. An effective void ratio, calculated by treating rattling particles as voids, can be viewed as a new state parameter describing the effective packing density of sands. Besides, the rattling behaviour strongly depends on particle size distribution.
Introduction
The existence of rattling particles has been recognized through discrete element method (DEM) simulations [1]. The concept of particle rattling has been explored theoretically and employed to explain experimental results of granular sands with gapped particle size distribution [2][3][4][5]. The major idea in these theories for fine-coarse mixtures of granular material is to assume that smaller particles float (as rattling particles) in the voids formed by the skeleton of larger particles. The mechanical contribution of the smaller particles to the static and dynamic behaviour of a fine-coarse mixture depends on the content and size of the smaller particles relative to the larger ones. Following this idea, the inter-grain state concept has been widely adopted to interpret the observed behavior of sands, e.g. in [6,7]. Starting from an idealized binary packing system of granular particles, these theories often assume that: (1) only two distinct particle sizes exist in the system; (2) the particle size disparity is large enough; and (3) the packing condition of the coarse particles is unaffected by the presence of the fine particles, and vice versa. Thus, by neglecting the fines, an index known as the intergranular void ratio, also widely known as the skeleton void ratio, was used as an alternative to characterize the state of mixtures of fines and coarse grains [7][8][9]. A more general case is to have a fraction of the fines participating in force transfer. Thus, the effect of fines is considered by introducing an alternative equivalent skeleton void ratio to replace the skeleton void ratio. However, to fit against experimental data, some semi-empirical functions have to be introduced to characterize the effects of fine particles [9,10]. In fact, even in relatively uniform (compared with the binary fine-coarse mixture) granular material, the presence of rattling particles is ubiquitous. Although particle rattling is a well-known phenomenon in the granular material community, there is still a gap between the theoretical understanding and its application in the geomechanics community. Further work is needed to promote the idea of the effective void ratio (or the earlier terms such as intergranular void ratio and equivalent skeleton void ratio) for general application in the geomechanics community, to describe the density state of granular soils with various types of particle size distribution, from gapped to uniform.
This study examines the phenomenon of particle rattling for granular materials with different particle size spans using DEM simulations.
DEM simulations of triaxial loading of clean sand
Samples with three different particle size distributions (referred to as PSD1, PSD2 and PSD3) have been simulated, as shown in Fig. 1. The three PSDs represent continuous spans of particle sizes with dmax/dmin from 1.35 to 7. The coefficient of curvature, defined as (d30)²/(d10·d60), ranges from 1.05 to 1.9, and the coefficient of uniformity, defined as d60/d10, ranges from 1.16 to 3.31. Here, d10 is a soil mechanics term meaning that particles with a diameter smaller than d10 constitute 10 percent of a soil sample by mass or solid volume. The same applies to d20, d30 and d60 throughout this paper. The rolling and twisting resistance model for clean sand was used to model the contact behavior [11]. In this contact model, two spheres are assumed to physically interact over a circular contact area, and rolling moment and torque can be transmitted in addition to the normal and tangential interactions.
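Returning to the gradation indices above, a small worked example can make them concrete: the sketch below interpolates d10, d30 and d60 from a cumulative particle size distribution curve and evaluates the coefficients of uniformity and curvature. The sample PSD values are invented for illustration and do not correspond to PSD1, PSD2 or PSD3.

```python
import numpy as np

def d_percentile(diameters, percent_finer, p):
    """Diameter at which p percent of the sample (by mass or solid
    volume) is finer, by linear interpolation of the PSD curve."""
    return np.interp(p, percent_finer, diameters)

# Hypothetical PSD curve: diameters (mm) vs cumulative percent finer.
diameters = np.array([0.1, 0.2, 0.3, 0.5, 0.7])
percent_finer = np.array([5.0, 25.0, 50.0, 85.0, 100.0])

d10 = d_percentile(diameters, percent_finer, 10)
d30 = d_percentile(diameters, percent_finer, 30)
d60 = d_percentile(diameters, percent_finer, 60)

Cu = d60 / d10                # coefficient of uniformity, d60/d10
Cc = d30 ** 2 / (d10 * d60)   # coefficient of curvature, d30^2/(d10*d60)
print(f"d10={d10:.3f} mm, d30={d30:.3f} mm, d60={d60:.3f} mm")
print(f"Cu={Cu:.2f}, Cc={Cc:.2f}")
```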
The contact behavior formulations are listed in Table 1. The following parameters were chosen according to a parametric study in [11], which can replicate quantitatively the macroscopic behavior of clean sand: Ep = 0.7 GPa (equivalent contact modulus), a ratio of normal stiffness to tangential stiffness of 5, μ = 0.5 (interparticle friction coefficient), βc = 0.15 (particle shape parameter used to consider the mechanical effects of particle angularity), and a local crushing parameter of 4 (crushing is not considered when this parameter equals 4). Using this contact model, the shape effects of real sand particles (with low aspect ratio) on bulk behavior can be captured with spherical particles. Please refer to [11] for more details of this contact model. The contact model was implemented in the PFC3D software [12] for numerical simulations.
A total of 20,000 particles were used in each sample. A sample with 40,000 particles was also simulated, showing that 20,000 particles are enough for the purpose of this study. The cubic DEM samples with different initial void ratios were prepared by the radius expansion method to arrive at an isotropic microstructure. Then, the rigid boundary walls were moved to achieve a desired isotropic stress state. Finally, triaxial compression tests under a confining pressure of 50 kPa were simulated.
Two groups of simulations were run in this study. Group 1 studies the effects of sample density using PSD2 samples with initial void ratios of e0 = 0.67 (dense sample) and 0.94 (loose sample), or initial solid fractions of 0.599 and 0.515, respectively. Group 2 studies the effects of particle size distribution using PSD1, PSD2 and PSD3 samples, with an initial void ratio of 0.67 (or an initial solid fraction of 0.599).
Effects of sample density
Fig. 2 (a) presents the typical hardening and softening behaviour of loose and dense sands, respectively, which is accompanied by contraction and dilation, respectively, as shown in Fig. 2 (b). The geometrical arrangement of particles in a sample allows some particles to have zero or only one contact with neighbouring particles; these are called rattling particles. An effective void ratio is defined by treating rattling particles as voids. In Fig. 2 (b), the effective void ratio of the loose sample is much higher than that of the dense sample. With large enough deviator strain, their effective void ratios reach the same steady critical state. Fig. 2 (c) shows that the number fraction of these rattling particles is initially lower in a dense sample (0.215) than in a loose sample (0.376). With an increase in deviator strain, the rattling particle fraction in the loose sample decreases while it increases in the dense sample, and finally both samples reach a unique steady state. The same trends are observed for the volume fractions of rattling particles in dense and loose samples, as shown in Fig. 2 (d).
Note that the volume fraction of rattling particles is much lower than the number fraction of rattling particles in Fig. 2, which implies that particles of different sizes may have distinct chances of being rattling particles. To investigate this phenomenon, Fig. 3 gives the size distributions of rattling particles at various loading states (with different deviator strains). The size distributions of rattling particles are almost independent of the shear deformation magnitude. Compared with the size distribution of all particles, there exists a threshold particle size d0; particles with a diameter smaller than d0 have a higher chance of behaving as rattling particles than particles with a diameter larger than d0. For the PSD2 samples simulated here, d0 is approximately equal to d30, and this d0 is independent of sample density.
Effects of particle size distribution
Fig. 4 presents the mechanical responses and rattling particle fractions of clean sand samples with different PSDs. The general behavior is the same for the three examined PSDs. The widening of the particle size span brings down the peak stress ratio and makes the sample less dilative, as shown in Fig. 4 (a)-(b). Although the three samples with different PSDs have the same initial conventional void ratio, their effective void ratios are quite different, leading to distinct stress-strain curves. The PSD1 sample shows the highest initial effective void ratio among the three, while the PSD2 and PSD3 samples have very close initial effective void ratios, which explains the less pronounced softening of the PSD1 sample and the close responses of the PSD2 and PSD3 samples. Fig. 4 (c)-(d) show that: (1) both the number and volume fractions of rattling particles are the lowest for PSD3, which has the smallest particle size span; (2) both fractions increase nonlinearly with deviator strain and finally reach a stable critical state. It is interesting to note that both fractions vary only slightly for the PSD1 sample, seemingly indicating that the rattling behavior has been predefined by the initial particle arrangement; this needs further study. Fig. 5 shows that the threshold particle size d0 is approximately equal to d20 for PSD1, d30 for PSD2 and d60 for PSD3. The difference between the rattling particle size distribution and the particle size distribution of all particles becomes less significant as the particle size range narrows from PSD1 to PSD3. That is, when the particle sizes in an assembly tend to be uniform, particles of each size tend to have the same chance of being candidate rattling particles; otherwise, smaller particles are preferred as rattling particles.
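A minimal sketch of how rattling particles and the effective void ratio could be extracted from DEM output is given below. The data layout (per-particle contact counts, per-particle volumes, and a total sample volume) is an assumed, simplified interface for illustration, not the PFC3D API, and the numbers are invented.

```python
import numpy as np

def effective_void_ratio(contact_counts, particle_volumes, total_volume):
    """Treat particles with <= 1 contact as rattlers (voids) and return
    (conventional e, effective e, number fraction, volume fraction)."""
    contact_counts = np.asarray(contact_counts)
    particle_volumes = np.asarray(particle_volumes)
    rattler = contact_counts <= 1                  # zero or one contact
    v_solid = particle_volumes.sum()
    v_load = particle_volumes[~rattler].sum()      # load-bearing solids
    e_conventional = (total_volume - v_solid) / v_solid
    e_effective = (total_volume - v_load) / v_load
    n_frac = rattler.mean()
    v_frac = particle_volumes[rattler].sum() / v_solid
    return e_conventional, e_effective, n_frac, v_frac

# Toy example with invented per-particle data.
counts = [0, 1, 4, 6, 5, 3, 0, 7]
vols = [0.8, 0.5, 2.0, 2.5, 1.8, 1.2, 0.4, 2.2]
print(effective_void_ratio(counts, vols, total_volume=20.0))
```

Since rattlers carry no load, the effective void ratio is always larger than the conventional one, consistent with the curves in Fig. 2 (b).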
Concluding remarks
The purpose of this paper is to present some numerical simulation results, focusing on rattling particles under deviatoric loading, and to initiate discussion on the implications of particle rattling in geomechanics. Particle rattling is a general phenomenon in stressed granular material, such as clean sand in geotechnical engineering. That is, some particles float in the voids formed by other particles when an assembly is subjected to external loads. The effective void ratio, which treats rattling particles as voids since they carry no load, can be viewed as a new density state parameter. Both the number and volume fractions of rattling particles increase (or decrease, respectively) with deviator strain for dense (or loose, respectively) samples and finally reach the same steady critical state. The rattling behavior strongly depends on the particle size distribution of clean sands, which can be captured by a threshold particle size d0 that separates sizes with distinct tendencies to rattle. With additional simulations (not shown here), it is found that d0 does not change with stress level, stress path or sample density, but it strongly depends on particle size distribution. Therefore, d0 may be regarded as a characteristic particle size for a specific particle size distribution.
Further study is needed to examine the effects of gapped particle sizes (i.e., discontinuous particle size distributions) and the effects of particle shape. Then, the construction of a predictive model of rattling fraction and effective void ratio based on particle size distribution becomes possible, which can finally be inserted into a micromechanics-based constitutive model for sand in geomechanics.
The macroscopic phenomenological constitutive model of sand may also incorporate the effective void ratio, rather than the conventional void ratio, in the hope of simplifying the constitutive functions, such as the stress-dilatancy relation. This would be promising in theoretical geomechanics since the effective void ratio is an easy-to-use scalar quantity and it does shed light upon an important aspect of the microscopic arrangement of particles under deviatoric loading.
"year": 2021,
"sha1": "ac7a4c5abc417e9a3ce77a89fc52d4b42963fda5",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2021/03/epjconf_pg2021_11017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ded95aa4e20303987b711d79a63daff690fbaaac",
"s2fieldsofstudy": [
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Development of Physical Therapy Practical Assessment System by Using Multisource Feedback
The purposes of the research were (1) to develop a physical therapy practical assessment system using the multisource feedback (MSF) approach and (2) to investigate the effectiveness of the implementation of the developed physical therapy practical assessment system. The development of the physical therapy practical assessment system by using MSF was determined by nine experts in physical therapy. The suitability and feasibility of the system for each sub-group were investigated. Five input factors, two process factors, two output factors, and two feedback factors were involved in the system. The level of suitability and feasibility of the elements in each sub-group was at a high to the highest level. In the system testing, 40 physical therapy students participated. Raters consisted of clinical educators, the students themselves (self-assessment), friends (students in the same practical group), and patients. Two assessments were performed during the period of clinical practice. Data analysis for the generalizability coefficient (G-coefficient) was performed with the EduG program. The quality of the system was evaluated in four aspects, including utility, feasibility, propriety, and accuracy, calculated by mean (x̄) and standard deviation (SD). The values of the G-coefficient for absolute and relative decisions were 0.86 and 0.88, respectively. In addition, the quality of the system showed values at a high to the highest level in all aspects.
Introduction
In higher education, teaching and learning involve many processes for facilitating knowledge, skills, and behaviors. Clinical practice is one of the most important processes to develop students into professionals. It is the highest level of competency that students must achieve prior to graduation. Students are required to integrate and apply their classroom knowledge and practical skills to real-life situations. This is very challenging for clinical educators, who must arrange suitable situations and adapt or develop methods to motivate students to achieve professional competence. In addition, clinical practice management should consider several elements that effectively enrich students' experience, such as the clinical educator being a good role model, teaching preparation, facilitating reflective learning, adequate learning resources, an assessment system consistent with the objectives, and giving and receiving good feedback. In particular, feedback in clinical education should be specific and provide information that narrows the gap between actual and expected performance. The purpose of all this is to help students succeed in clinical practice (Archer, 2010; Cunningham, Baird, & Wright, 2015; Ernstzen, Bitzer, & Grimmer-Somers, 2009; Janicik & Fletcher, 2003; Jette, Nelson, Palalma, & Wetherbee, 2014; Ramani & Leinster, 2008). Physical therapy is one of the health professions that provides health care for all people. Teaching in clinical practice differs from classroom learning because it occurs in real situations. Students are divided into small groups of 3-5 persons and entrusted with the care of patients under a clinical educator's responsibility.
However, not everyone can meet the requirements of clinical practice. So, it is necessary to give students an opportunity to improve and develop their practical skills. In order to obtain appropriate information for improving students' performance, an appropriate assessment method is required. Moreover, the quality of clinical practical assessment is the most important part to be considered. Because clinical practical assessment is a key to future success in the health professions, formative assessment benefits students by giving them valuable feedback that they can use in planning and developing their own practice in time.
Moreover, formative assessment can guide student learning toward academic achievement in the desired direction (Andrade & Cizek, 2010; Black & Wiliam, 2009; Kahl, Hofman, & Bryant, 2013). In addition, it can support the sharing of attitudes between teachers and students during teaching. However, most recent assessment in clinical practice is used to judge the performance of students' practice (summative assessment). Formative assessment has only been used verbally or informally, which can lead to misunderstanding between clinical educators and students. These issues can be found in studies that reported clinical educators' perceptions of providing feedback. The research suggested that clinical educators understood themselves to be providing feedback to their students frequently. On the other hand, students reported that receiving feedback from clinical educators was rare. This might be because the assessment methods used were not sufficient for students to perceive and understand the feedback (Archer, 2010; Liberman, Liberman, Steinert, McLeod, & Meterissian, 2005; Van de Ridder, Stokking, McGaghie, & Cate, 2008).
Multisource feedback (MSF) is an assessment tool for the collection of detailed information from multiple sources, called the assessors. MSF can be used to improve the performance of an examinee in the clinical practice of medical education. MSF can be applied in two manners, formative and summative assessment, but is mostly used in the form of formative assessment. The application of MSF has usually been for assessment and informal reporting. The MSF process yields valuable feedback information that directly impacts students' performance. Therefore, several studies have suggested that MSF is a tool for improving students' competences, such as communication, teamwork, patient management, and professional development (Bracken, Timmreck, & Church, 2001; Davies, Ciantar, Jubraj, & Bates, 2013; Violato, Worsfold, & Polgar, 2009). Many elements are involved in using MSF, such as designing and selecting the instrument, selecting raters, collecting data, analyzing data, and reporting feedback. This assessment can help students know their strong and weak performances; similarly, teachers can use the results to plan teaching and learning for students.
Furthermore, with MSF the results of assessment are received from more than one assessor. The assessors consist of individuals who have relationships with the student's practice, e.g., the clinical educator, peers, and patients. This is in line with the authentic situation in which many people are involved in the student's practical experience. The different views of the assessors provide important information for the development of the student's performance. For example, the clinical educator's view can identify strong and weak performance. The views of peers who practice in the same group can explore routine practice behavior. In addition, the patients' view can give other valuable feedback because patients are important stakeholders in the future. This assessment will help students be ready for work and also serve as a reflection of curriculum quality. Moreover, MSF makes the student one of the assessors (self-assessment). This is very useful for students to assess themselves more precisely and encourages students' lifelong learning (Cox & Irby, 2007; Davies & Archer, 2005; Davies, Ciantar, Jubraj, & Bates, 2013; Overeem et al., 2010; Reinders et al., 2011; Wall, Singh, Whitehouse, Hassell, & Howes, 2012). However, there has been no study on the application of MSF in a formative manner for improving student performance in clinical practice in physical therapy education. Therefore, as a first step, the researchers were interested in developing a physical therapy practical assessment system using MSF and investigating the effectiveness of the developed system in order to improve the quality of clinical education. The purposes of the research were (1) to develop a physical therapy practical assessment system using MSF and (2) to investigate the quality of the implementation of the developed physical therapy practical assessment system.
Method
There were two steps in developing the system, as described in the following. Step 1: Development of the physical therapy practical assessment system by using MSF. The assessment system was developed through a synthesis of documents related to assessment in clinical practice and the MSF approach. Then, investigation of the suitability and feasibility of the developed system was performed by nine experts.
Step 2: Experimentation with the developed physical therapy practical assessment system. After development was completed, the quality of the assessment system was evaluated. The participants in this step are referred to as examinees. They consisted of 40 physical therapy students of the Faculty of Physical Therapy, Mahidol University. The raters consisted of clinical educators, students (self-assessment), peers (students in the same practical group), and patients. The instrument used for assessing students' practice was a checklist-format questionnaire. The topics of assessment were competence requirements of the physical therapy profession, such as professional behavior, communication, and patient management. The validity and reliability of the questionnaire were tested and found to be high (0.87 for validity and 0.91 for reliability).
Data analysis for generalizability was performed using the EduG program. In addition, the quality of the system was evaluated in four aspects (utility, feasibility, propriety, and accuracy), calculated by mean (x̄) and standard deviation (SD).
Results
The system structure followed Scott (2008): groups of elements are inter-related and function together to make goals succeed. This study found that the system consisted of five input elements, two process elements, two output elements, and two feedback elements. The evaluation of the system's suitability and feasibility by experts indicated that each sub-group had values at high to the highest levels. This indicates that the essential elements had been prepared for using this system. The input and process groups of the system are supported by the following studies.
The study of Mizikaci (2006) reported that an essential element for quality in higher education is clarity of the aims and objectives of assessment. Research by Donabedian (1966), cited in Jette, Nelson, Palalma, and Wetherbee (2014), indicated that the duration of experiences, the evaluation of students' performance, and the model of supervision should be considered in the quality of clinical education. Besides, the study by Calman, Watson, Norman, Redfern, and Murrells (2002) showed that assessing practice should include the preparation of assessors and of the assessment method. In addition, student academic achievement is always placed in the output of evaluation in an educational system, as in the study by Jette, Nelson, Palalma, and Wetherbee (2014), which indicated that the essential outcomes to be considered are students' competencies and stakeholder satisfaction. This supports the appropriateness of the output of the system.
The results for the G-coefficient for absolute and relative decisions were 0.86 and 0.88, respectively. As explained by Shavelson and Webb (1991), the G-coefficient is used to interpret the reliability coefficient across the various facets of the study. The results of the study indicated that this system had a high level of reliability. This is supported by Cunningham, Baird, and Wright (2015), who mentioned that assessment in clinical education must have adequate reliability.
In addition, the G-coefficients for absolute and relative decisions in the D-study provide information for deciding on the most stable and efficient measurement procedures (Shavelson & Webb, 1991). The results of the D-study showed that the generalizability coefficients for absolute decisions (ρ²_Abs) for one to five assessments were 0.7566, 0.8614, 0.9031, 0.9256, and 0.9491, respectively, and those for relative decisions (ρ²_Rel) were 0.7919, 0.8839, 0.9195, 0.9384, and 0.9501, respectively. So, the appropriate implementation of this system should use about two or three assessments, because only small differences in the G-coefficient were found from the third assessment onwards; more than three assessments are not necessary. This is supported by Jette, Nelson, Palalma, and Wetherbee (2014), who stated that the frequency of assessment should be at least two times, usually performed at the mid-point and the final week of the clinical practice period. Excessive assessments may waste time and resources and result in unexpected outcomes. This can be found in previous studies that mentioned negative impacts on students; for example, excessive assessment can create stress and discomfort that disturb students' learning in clinical experiences (Changiz, Malekpour, & Zargham-Boroujeni, 2012; Razaee, Esmaeili, & Habibzadeh, 2015).
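The D-study logic above can be illustrated with a small sketch that projects the generalizability coefficient as the number of assessment occasions grows, using the standard variance-component form ρ² = σ²_person / (σ²_person + σ²_error / n). The variance components below are invented for illustration and are not taken from the EduG output of this study.

```python
def g_coefficient(var_person, var_error, n_occasions):
    """Projected G-coefficient when the error variance is averaged
    over n_occasions (the D-study projection)."""
    return var_person / (var_person + var_error / n_occasions)

# Hypothetical variance components (not the study's actual values).
var_person, var_error = 1.0, 0.31
for n in range(1, 6):
    print(n, round(g_coefficient(var_person, var_error, n), 4))
```

Running this shows the same qualitative pattern as the reported coefficients: a large gain from one to two occasions and diminishing returns afterwards, which is the basis of the "two or three assessments" recommendation.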
The Joint Committee on Standards for Educational Evaluation (1994) recommended that the effectiveness of a system should be evaluated against the standards of utility, feasibility, propriety, and accuracy. This study found that the effectiveness of the developed physical therapy practical assessment system by using MSF was at high to the highest levels in all aspects. Therefore, this system had adequate quality for implementation in future physical therapy practical assessment.
This study included participants only from the Faculty of Physical Therapy, Mahidol University. Because there are differences in the patterns of clinical practice and evaluation among the physical therapy institutes in Thailand, a future challenge will be to investigate the use of this system in other institutes or in other health professional education programs that require clinical practice.
"year": 2017,
"sha1": "0c0067a47ddda5760aa7ad3f39b32d54d461bfb8",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/ies/article/download/66715/38282",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0c0067a47ddda5760aa7ad3f39b32d54d461bfb8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Game Theory-Based Power Allocation Strategy for NOMA in 5G Cooperative Beamforming
Non-orthogonal multiple access (NOMA) is a Fifth Generation (5G) technique that allows many users to simultaneously access the same time-frequency channels, separated via a successive interference cancellation (SIC) receiver. Cooperative NOMA (CNOMA) is an effective tool to prevent performance degradation of far users by allocating minimal power to users with good channel conditions. In this paper, we propose a fair power and channel allocation scheme based on the Nash bargaining solution (NBS) in full-duplex, cooperative beamforming (BF) for multicarrier (MC) NOMA. The proposed NBS scheme assigns optimal power and channel allocation according to channel conditions while maintaining a fair rate amongst cooperative users. The NBS provides a fair and optimum approach for maximizing the total rate of CNOMA. The signal-to-leakage ratio (SLR) precoding technique is considered as a design criterion for the beamforming vectors towards achieving power-domain CNOMA players. Simulation results show that at BER = 10⁻⁵, the NBS power allocation (proposed scheme) improved by 2 dB in terms of Signal-to-Noise Ratio (SNR) compared with the non-cooperative scheme, and by 3 dB compared with multiple-input multiple-output NOMA (MIMO-NOMA). Both improvements result from interference reduction and information sharing in the network. In terms of fairness, the proposed NBS scheme shows a high level of fairness at 0.8401 compared to similar approaches in the literature.
Introduction
Non-orthogonal multiple access (NOMA) has recently received extensive attention compared with conventional orthogonal multiple access because it allows multiple users to communicate with each other simultaneously using the same time/frequency channel, leading to enhanced spectral efficiency [1]. Successive interference cancellation (SIC)
In 5G applications, such as massive Internet of things (IoT) and machine-type communications, low-cost sensors required a small area, which can be better exhibited by the Rician fading channel because both LOS and NLOS exist. Exploiting the emergence of the LOS and NLOS loophole, the performance of the NOMA scheme was evaluated with Rician fading channels and noticeable results were observed [7]. NOMA technology can be divided into: (i) power domain multiplexing and (ii) code domain multiplexing [8][9][10]. Power domain multiplexing is quite challenging because of the need to execute an optimal power allocation scheme, which is important for the NOMA systems' overall performance.
Many studies on resource allocation have addressed power allocation and proposed optimization methods. For example, [11] jointly optimized power and subcarrier allocation for multicarrier NOMA (MC-NOMA); the problem is NP-hard and was solved using Lagrangian dual optimization and a dynamic programming technique. In [12], an algorithm for subchannel assignment and power allocation across subchannels was proposed to maximize the energy efficiency in MC-NOMA systems. In [13], maximization of the total sum rate with fairness between users was performed, and optimal power allocation and subchannel assignment in MC-NOMA were proposed. The authors in [14] proposed a resource allocation algorithm for full-duplex MC-NOMA systems to maximize the weighted sum throughput of the system. Other studies have proposed the use of beamforming with NOMA. For example, [15] aimed to minimize the transmission power through the beamforming design in multiple-input multiple-output (MIMO)-NOMA.
Driven by the beamforming vector design concept, two NOMA concepts have been established in the literature: clustering NOMA [15] and non-clustering NOMA [16,17]. In the clustering NOMA scheme, the users are grouped into many clusters (with at minimum two users in each cluster). Consequently, each transmit beamforming vector is assigned to support one cluster. In the non-clustering NOMA scheme, no clustering is assigned and each user is supported by its own beamforming vector. In fact, clustering is used to support a huge number of users and to reduce the separation complexity at the SIC. In [18], beamforming vectors were employed in a multiuser transmission system, and each user was assigned a single antenna. The proposed scheme aimed to guarantee user fairness and used the channel gain as a constraint factor. Among the available methods for power allocation, the Nash bargaining solution (NBS) from game theory has been suggested.
Game theory is chosen to achieve better payoffs through cooperation between users who share some information. Players can determine whether there is a potential extra utility for everyone if they cooperate. If there is such extra utility, players may bargain with each other to decide how to share information. Thus, without loss of generality of the NBS, our contributions are: (1) We propose the NBS as a method for power allocation in non-clustering, full-duplex, cooperative beamforming (BF) for multicarrier NOMA. (2) We derive a mathematical model for implementing a fair NBS scheme for optimal power allocation in cooperative BF for the MC-NOMA system. According to the channel conditions, the proposed scheme assigns optimal power and channel allocation while keeping a fair rate amongst cooperative users. (3) The performance of the proposed NBS-based optimal power allocation scheme is validated and compared against other schemes in the literature based on BER performance and fairness gain.
The limitation of employing the NBS in any system is the requirement of a convex utility space. In the application of a multiuser communication system, the information rate is chosen as the user utility. The interference amongst the users pushes the utility space (rate region) from a convex to a non-convex domain. Orthogonal signalling, such as frequency division multiple access and time division multiple access, converts the non-convex utility space to a convex one, which is considered a drawback of using the NBS [19] with NOMA. Thus, in this study, we adopt the signal-to-leakage ratio (SLR) [20][21][22] as the beamforming criterion in order to: (1) reduce the interference amongst users, which maintains the convex utility space of the NBS, since maximizing the value of the SLR is expected to improve the desired user's power level and reduce the interference caused to other users by the desired user; and (2) circumvent the coupled-variables problem by using the SLR as the optimization criterion in the achievable rate equation, instead of using the SINR, which leads to the coupled-variables problem.
Meanwhile, the NBS offers a fair and optimum approach to maximize the total rate of the CNOMA system. Our proposed power allocation game scheme considers the Rayleigh fading channel as the communication channel between the BS and each user (first time slot), whereas Rician fading channels are considered as the inter-user channels between users (second time slot). The reason for choosing Rician fading channels in the second time slot is simply that there is LOS between one user and the next, hence the need for cooperation.
The remainder of this paper is organized as follows. Section 2 introduces the system model of the downlink BF for MC of CNOMA (the proposed scheme), the channel model description (Rayleigh and Rician fading), and the deployment of SIC in the cooperative scheme. Section 3 presents the problem formulation, including the SLR beamforming analysis, the interference analysis, and the SLR limit theorem. Section 4 addresses the bargaining solution, starting with power allocation based on the NBS, which involves the optimization problem, the existence of the NBS, and the NBS scheme for power allocation. According to studies in the literature, the best performance of NOMA is shown when two users are considered; therefore, the proposed scheme is validated against a previous two-user MIMO-NOMA [23] study in terms of bit error rate, while the fairness performance of the proposed scheme is validated against previous studies in terms of Jain's fairness index. The conclusions are provided in the final section.
System Model
The downlink of the full-duplex cooperative BF for the MC-NOMA system consists of U cooperative users and one BS. Each of the cooperative users is assumed to be equipped with $N_U$ antennas, and the BS is assumed to be equipped with M antennas. The proposed scheme considers beamforming based on the maximal SLR in MC-CNOMA to achieve power-domain superposition cooperation between users' signals. More explicitly, SLR beamforming vectors are suggested as the optimization criterion in the achievable rate equation instead of the SINR, since using the SINR as an optimization criterion leads to a coupled-variables problem. The block diagram of the downlink full-duplex cooperative BF MC-NOMA is shown in Fig. 1.
The total bandwidth B is divided into C subchannels, each of which has bandwidth B/C. There are two techniques to assign subchannels in NOMA. The first technique considers that each user can use all the available subchannels by sharing the same time and frequency and exploiting the difference in power levels. Meanwhile, the second technique considers that each user can use one subchannel or more, which is similar to OFDM but exploits different power levels instead of the orthogonality between subcarrier signals. The concept applies to both uplink and downlink transmission [24]. We define $\alpha_{i,c}$ as the power allocated to user i, i ∈ {1, ..., U}, on subcarrier c. In the resource allocation problem, for each subchannel c we track the set of users allocated positive power $\alpha_{i,c} \ge 0$ [25].
Assumptions
(i) The transmitted symbol $s_i$ of the ith desired user and the interfering symbols $s_u$ of the uth user are assumed to be zero-mean and of unit variance. (ii) The antenna spacing at the receiver is sufficiently large that the fading at each antenna is spatially uncorrelated, i.e., the channel vector $H_i$ is distributed as $\mathbb{CN}(0, 1)$. Further, the interfering channels are also spatially uncorrelated, implying that the uth interfering vector $H_u$ is distributed as $\mathbb{CN}(0, 1)$. (iii) The fading coefficient vectors and the noise vector n are uncorrelated.
Rayleigh Fading Channel Model
During the first time slot of the CNOMA system, the BS transmits a superposition of the individual messages on subchannel c, i.e., $w_i s_i$, to all users over Rayleigh fading channels. $s_i$ and $w_i$ are the symbol intended for the ith user and the corresponding beamforming weight, respectively, as shown in Fig. 1. The NBS is applied to allocate the optimal power $\alpha_i$ and implement the channel assignment for each user before user cooperation. The received signal at the ith user is given by [20]

$$y_i = \sqrt{\alpha_i}\, H_i w_i s_i + H_i \sum_{u \neq i} \sqrt{\alpha_u}\, w_u s_u + n_i,$$

where $s_i$ denotes the transmitted data intended for user i. The scalar data $s_i$ is multiplied by an M × 1 beamforming vector $w_i$ before being transmitted over the channel, $\alpha_i$ is the power allocation factor for strong user i, $n_i$ is an additive white Gaussian noise (AWGN) vector whose entries are independent and identically distributed (i.i.d.) with zero mean and variance $\sigma_i^2$, and $w_u s_u$ is the co-channel interference (CCI) caused by the multiuser nature of the system. $H_i$ is given as [20]

$$H_i = \big[h_i^{(n,m)}\big] \in \mathbb{C}^{N_U \times M},$$

where $h_i^{(n,m)}$ represents the channel coefficient between the mth transmit array antenna of the BS and the nth receive array antenna of the ith user.
Rician Fading Channel Model
SIC is used at the receiver of each user by exploiting the maximal SLR and the NBS to provide an optimal power domain for transmitting multiple signals over the same frequency and time resources. To achieve power and channel allocation, the cooperative users should be ordered based on their channel quality from the BS [16][17], i.e., $\|H_1\| \le \|H_2\| \le \dots \le \|H_U\|$. During the second time slot, following this order, the SIC in the full-duplex cooperative users' strategy detects each user as follows: user i detects the first i − 1 users' signals by using SIC and sends those i − 1 users' signals to those users over a Rician fading channel. Meanwhile, user i − 1 detects its own signal by using SIC and sends the ith user's signal to that user during the second time slot over a Rician fading channel. In the same way, the messages of the other users, i.e., from i + 1 to U, are sent to user i in the second time slot over a Rician fading channel. In other words, the ith user's signal should be detected by user l for all l ∈ {i, i + 1, ..., U}. This study focuses on optimal power allocation and channel assignment based on the NBS to maximize the total rate of cooperative BF for MC-NOMA. Hence, the remaining signal at user l used to detect the ith user, after cancelling the already-decoded signals, is

$$\tilde{y}_l = y_l - H_l \sum_{u=1}^{i-1} \sqrt{\alpha_u}\, w_u s_u.$$

Specifically, users with good channel conditions have prior information on the messages of other users, and users with poor channel conditions have information on other users, including those with good channel conditions.
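A minimal sketch of power-domain SIC at a single receiver is shown below, illustrating the decode-and-cancel order described above. The two-user scalar setting, the BPSK alphabet, and all numerical values are illustrative assumptions.

```python
import numpy as np

def sic_decode(y, h, powers, constellation):
    """Decode superposed symbols in descending-power order,
    cancelling each detected contribution from the received signal."""
    order = np.argsort(powers)[::-1]      # strongest (far user) first
    detected = {}
    residual = y
    for k in order:
        # Nearest-constellation-point detection for user k.
        est = constellation[np.argmin(
            np.abs(residual - h * np.sqrt(powers[k]) * constellation))]
        detected[k] = est
        residual = residual - h * np.sqrt(powers[k]) * est  # cancel it
    return detected

# Two-user BPSK example: far user gets 0.8 of the power, near user 0.2.
const = np.array([-1.0, 1.0])
h, powers = 1.0, np.array([0.2, 0.8])
s = np.array([1.0, -1.0])                 # true symbols (near, far)
y = h * (np.sqrt(powers) * s).sum() + 0.01 * np.random.randn()
print(sic_decode(y, h, powers, const))    # expect {1: -1.0, 0: 1.0}
```

The same loop, run at the strong user's receiver, yields the weaker users' symbols that are then forwarded over the inter-user channel in the cooperative phase.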
Cooperative Communication
For a thorough understanding of full-duplex cooperative BF for MC-NOMA, we consider two cooperative users, the ith and the (i + 1)th, in the beamforming downlink scheme, as shown in Fig. 2. In the NOMA full-duplex cooperative BF strategy, both users act as relays and exploit redundant information for the other user to improve reliability and prevent the degradation of users who have a weak connection with the BS. The superposed information is transmitted in two time slots, namely the direct and cooperative phases.
In the direct transmission phase, a superposed message of users i and i + 1 is transmitted by the BS. Assuming that user i + 1 has the better channel condition, the SIC technique is employed in both users' receivers.
Fig. 2. Two-user NBS power allocation in full-duplex cooperative BF for MC-NOMA channel model.
Thus, user i + 1 decodes the information of user i before decoding its own information. During the second time slot, user i + 1 starts working as a relay and forwards the previously decoded information $\sqrt{\alpha_i}\, s_i$ of user i. Meanwhile, user i decodes its own information and then decodes user i + 1's information. During the second time slot, user i starts working as a relay and forwards the information $\sqrt{\alpha_{i+1}}\, s_{i+1}$ of user i + 1. Therefore, two copies of the signal are received by each user through different paths. The reliability of signal reception for the user with poor channel conditions is improved by having two copies of the message. The user with the strong channel is also enhanced by having two copies. The channel model of full-duplex cooperative BF for MC of NOMA is shown in Fig. 2.
The maximum ratio combiner (MRC) receiver is considered, since the MRC has lower complexity and achieves better performance than other estimators, e.g. the zero-forcing (ZF) estimator [20], especially when the MRC is employed with the SLR beamforming and SIC techniques. For user i, the MRC detection scheme is used to estimate the signal $s_i$ from the received signal as [20]

$$\hat{s}_i = \frac{(H_i w_i)^H y_i}{\|H_i w_i\|^2}.$$

In the transmitter of the proposed scheme, a transmission power constraint is imposed, described as $\beta\, \mathbb{E}\{|s_i|^2\} \le P_i$, where β is a constant chosen to meet the total transmitted power constraint [26].
According to [26], the received symbol s is preceded by a pre-equalization weight w, so $\hat{s} = w s$, where $w = H^{-1}$. Therefore, the transmitted signal to the ith user in the second time slot is

$$x_u = w\, \hat{s}_{u\text{-}i\text{-}1st},$$

where $\hat{s}_{u\text{-}i\text{-}1st}$ is the leakage signal from the ith user detected by the uth user in the first time slot. The received signal at the ith user in the second time slot is given by

$$y_{i\text{-}2nd} = H_{u\text{-}i\text{-}2nd}\, x_u + n_i,$$

where $H_{u\text{-}i\text{-}2nd}$ represents the inter-user channel between the uth and ith users and $n_i$ is the AWGN at the ith user. The MRC is used to combine the desired signal
$\hat{s}_i$ (detected by the user itself as its own signal in the first time slot) with $\hat{s}_{u\text{-}i\text{-}2nd}$ (the leakage signal detected by the second user in the second time slot), as shown in Fig. 3.
For signal detection at each user's receiver and the cooperation between them, the block diagram in Fig. 3 gives further explanation; each user receives superposed signals that include the first and second users' signals.
Problem Formulation
Finding an optimal power allocation strategy while considering the interference amongst the superposed users' signals in NOMA is challenging. We aim to allocate the C subchannels and the transmitted power amongst the U users.
Therefore, we propose an optimal power allocation based on the NBS game scheme in full-duplex cooperative SLR-based beamforming for MC of NOMA.
The mathematical model of the SLR beamforming approach is presented in this section. To explain the concept of the SLR, we first discuss the SLR analysis, which includes the maximization of the SLR; the interference analysis is then presented, followed by the SLR limit theorem.
SLR Analysis
To understand the concept of the SLR, we consider the single-user MRC shown previously in Fig. 3. Using the SINR in Eq. (10) below for i = {1, ..., U} as an optimisation objective function for determining $\{w_i\}_{i=1}^{U}$ leads to a problem with U coupled variables $\{w_i\}$. According to [27], for the MRC, $E_i$ is the desired signal power, its expectation is taken over the channel, and N is the number of transmit antennas. Therefore, $\mathrm{SINR}_i$ is

$$\mathrm{SINR}_i = \frac{\alpha_i \|H_i w_i\|^2}{\sum_{u \neq i} \alpha_u \|H_i w_u\|^2 + \sigma_i^2}, \qquad (10)$$

and the achievable rate of the ith user can be obtained as

$$R_i = \log_2\!\left(1 + \mathrm{SINR}_i\right).$$

Using this SINR expression for i = {1, ..., U} as an optimization criterion for determining the $\{w_i\}$ would generally result in a problem with U coupled variables $\{w_i\}$. Instead, [20] proposes an alternative criterion to design the beamforming coefficients $\{w_i\}$, which leads to a full characterization of the optimal solutions in terms of generalized eigenvalue problems.
The power of the desired signal $H_i w_i$ is given by $\|H_i w_i\|^2$. At the same time, the power of the interference caused by user i on the signal received by user u is given by $\|H_u w_i\|^2$. We define a quantity, called the leakage of user i, as the total power leaked from this user to all other users, $\sum_{u \neq i} \|H_u w_i\|^2$. SLR maximisation is then performed to compute the optimal beamforming vector $w_i^o$ for each user according to [20]:

$$w_i^o = \arg\max_{w_i} \mathrm{SLR}_i = \arg\max_{w_i} \frac{\|H_i w_i\|^2}{\sigma_i^2 + \sum_{u \neq i} \|H_u w_i\|^2}. \qquad (16)$$
As shown in Eq. (16), and following our previous work [21], the power constraint proposed by [20] has been updated to $\|w_i\|^2 = w_i^H w_i = P_i / E_i$. The reason for this update is that the norm of $w_i$ is irrelevant to the final solution; in other words, the norm of $w_i$ can be forced to be any value that achieves the best $w_i$ under the power constraint. $P_i / E_i$ is the transmission power constraint at transmitter i, and it can be described as $\mathbb{E}\{\|w_i s_i\|^2\} \le P_i$. The symbol $s_i$ satisfies the power constraint $\mathbb{E}\{s_i s_i^*\} = 1$. By carefully examining Eq. (16), a key feature of the above criterion is that the design procedure for $w_i$, i = {1, ..., U}, involves U decoupled optimisation problems, in contrast to Eq. (10).
Interference Analysis
It can be verified that the SLR expression in (16) can be rewritten, with the stacked channel matrix

$$\tilde{H}_i = \left[H_1^T \cdots H_{i-1}^T \; H_{i+1}^T \cdots H_U^T\right]^T, \qquad (18)$$

which excludes $H_i$, as

$$\mathrm{SLR}_i = \frac{w_i^H H_i^H H_i w_i}{w_i^H \left(\sigma_i^2 I + \tilde{H}_i^H \tilde{H}_i\right) w_i}. \qquad (19)$$

Here $H_u \in \mathbb{C}^{N \times M}$ represents the channel between the BS and user u, and $\tilde{H}_i$ denotes the corresponding leakage channel. The channel has been assumed to be a flat Rayleigh fading channel that is spatially uncorrelated. Moreover, $H_i$ and $\tilde{H}_i$ are assumed to be full-rank matrices with probability one. The transmitted symbol intended for the ith user is $s_i \in \mathbb{C}^{M \times L}$, where L (≤ N) is the number of data streams for the ith user, assumed identical for all users. The vector $s_i$ satisfies the power constraint $\mathbb{E}\{s_i s_i^*\} = I_L$. $s_i$ is multiplied by a precoding matrix $w_i$. Then, for a given user i, the received signal vector is

$$y_i = H_i w_i s_i + \sum_{u \neq i} H_i w_u s_u + n_i.$$

The general solution of Eq. (19), given in [20], obeys the Rayleigh-Ritz method [28]. Hence, Eq. (19) is bounded as

$$\mathrm{SLR}_i \le \lambda_{\max}\!\left(H_i^H H_i, \; \sigma_i^2 I + \tilde{H}_i^H \tilde{H}_i\right),$$

where $\lambda_{\max}$ is the largest generalised eigenvalue of the matrix pair. Equality occurs when $w_i$ is proportional to a generalised eigenvector corresponding to the largest generalised eigenvalue; compactly written as

$$w_i^o \propto \nu_{\max}\!\left(H_i^H H_i, \; \sigma_i^2 I + \tilde{H}_i^H \tilde{H}_i\right), \qquad (21)$$

where $w_i^o$ is the maximal-SLR beamformer. In the next section, we refer to the maximal SLR achieved by $w_i^o$ for each user by the parameter $\lambda_i$, while $\alpha_i$ is the power allocation for the ith user.
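A numerical sketch of the maximal-SLR beamformer via the generalized eigenvalue problem above is given below, using scipy's generalized Hermitian eigensolver. The dimensions, noise level, and random channels are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np
from scipy.linalg import eigh

def max_slr_beamformer(H_i, H_leak, noise_var, power=1.0):
    """For unit-norm w, maximize ||H_i w||^2 / (noise_var + ||H_leak w||^2):
    the top generalized eigenvector of (H_i^H H_i, noise_var*I + H_leak^H H_leak)."""
    M = H_i.shape[1]
    A = H_i.conj().T @ H_i
    B = noise_var * np.eye(M) + H_leak.conj().T @ H_leak
    vals, vecs = eigh(A, B)          # ascending generalized eigenvalues
    w = vecs[:, -1]                  # eigenvector of the largest eigenvalue
    return np.sqrt(power) * w / np.linalg.norm(w), vals[-1]

# Toy setup: BS with M = 4 antennas, desired user with N = 2 antennas,
# two other 2-antenna users stacked into the leakage channel (4 rows).
rng = np.random.default_rng(1)
H_i = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
H_leak = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
w, slr = max_slr_beamformer(H_i, H_leak, noise_var=0.1)
print("max SLR =", round(float(np.real(slr)), 3))
```

Because each user's problem involves only its own channel and leakage matrix, the U beamformers are computed independently, which is exactly the decoupling advantage over the SINR criterion noted above.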
SLR Limit Theorem
In (16), the SLR problem statement constraint [20] allocates a fixed transmit power for each user: design $w_i$, i = {1, ..., U}, such that the signal-to-leakage ratio (SLR) is maximized for every user. In this paper, we update the SLR problem statement constraint to the new form

$$w_i^o = \arg\max_{w_i} \mathrm{SLR}_i \quad \text{subject to} \quad \|w_i\|^2 = \frac{P_i}{E_i}.$$

The reason for this update is a drawback of the constraint in problem statement (16): when each user has multiple data streams, the effective channel gain for each stream can be severely unbalanced. If power control or adaptive modulation and coding cannot be applied, the overall error performance of each user will suffer significant loss [29].
It is noted that the norm of $w_i$ is irrelevant to the final solution; in other words, the norm of $w_i$ can be forced to be any value that achieves the best $w_i$ under the power constraint shown in problem statement (16).
Power Allocation Based on NBS Game Theory
The power allocation based on the NBS game theory can be decomposed into three problems. The first subproblem is the optimization problem, in which the utility function (a function of the achievable rate) is maximized. The second problem is the existence of the NBS, where the Hessian matrix is used to prove the concavity of the utility function. The third problem is the power allocation scheme using the NBS, where the Lagrangian method is used to solve the constrained optimization problem. The block diagram of the power allocation based on the NBS game theory is shown in Fig. 4.
Optimization Problem
With the preliminaries of the max-SLR ($\lambda_i$) from the previous section, a new formulation of the optimization problem with the help of the NBS can be stated. As aforementioned, both users seek help from the other user to enhance their performance. The utility of a user depends on two factors. One is the channel condition of the cooperative link between the cooperating users. The other is how much leakage power the selected user would split off for relaying. Intuitively, to maximize the far user's performance, the BS users would like to invite the user with good channel quality to join the cooperation and expect it to supply as much power as possible to support the information relaying, while the nearest user involved in the game gains more power from the leakage power of the far user through cooperation, at the cost of sacrificing power to relay the far user's signal and of the interference experienced from the far user. Therefore, the utility $Ut_i$ for the ith user is defined as in [30] (Eq. (23)), where $\lambda_u$ is the max-SLR and $\alpha_{u,c}$ is the power allocated to the uth user on subchannel c, with $\alpha_{u,c} + \alpha_{i,c} = 1$, while $R_u$ is the achievable rate of the uth user. The achievable rate $R_i$ of the ith user is the sum of the rates over the subchannels [31]:

$$R_i = \sum_{c=1}^{C} x_{i,c}\, \frac{B}{C} \log_2\!\left(1 + \alpha_{i,c} \lambda_i\right). \qquad (24)$$

By combining Eq. (23) and Eq. (24) and replacing $\alpha_{u,c} = 1 - \alpha_{i,c}$, the utility of the ith user is obtained as Eq. (25), where $\lambda_i$ is the max-SLR, $\alpha_{i,c}$ is the power allocation of the ith user on subchannel c, and $x_{i,c}$ is the subchannel assignment coefficient.
Resource allocation (optimal power and channel allocation) is our target. The resource allocation problem is to allocate the C subchannels and the transmitted power amongst the U users so that the maximum throughput is achieved.
In NOMA, the nearest user (best user channel) is assigned less power, while the far user (worst user channel) is assigned more transmission power. More explicitly, the priority for subchannel assignment follows the channel condition $\|H_{i-1}\| \le \|H_i\|$ to achieve a certain fairness objective. The subchannel assignment coefficient $x_{i,c}$ is defined as in [14].
$H_{i+1}$ represents the channel condition of the nearest user (best user channel), and $H_i$ represents the channel condition of the farther user (worst user channel). The number of subcarriers is assumed to be five, because in a NOMA implementation the decoding complexity and signaling overhead increase with the number of subcarriers.
Unlike OFDMA, which assumes each subcarrier is assigned to one user, NOMA assumes that all users use all available subcarriers at the same time and frequency by exploiting the difference in power levels. More explicitly, NOMA enables each user to access all the subcarrier channels. Hence, the bandwidth resources allocated to users with poor channel conditions can still be accessed by users with strong channel conditions, which significantly improves the spectral efficiency. Still, the problem is how the BS should assign the available power between users so that the overall performance is optimized. For this reason, cooperative NOMA based on the NBS is suggested to offer negotiation between users via the BS.
The bargaining problem is contained in the utility function $Ut_i$ of the ith user, as given in Eq. (25), which is a function of the set S containing all feasible rates, and $Ut_{i,\min}$ is the minimum utility corresponding to the minimum rate $R_{i,\min}$, decided by the disagreement point. The NB solution can be derived by solving the following bargaining optimization problem:

$$\max_{\{x_{i,c}\},\, \{\alpha_{i,c}\}} \; \prod_{i=1}^{U} \left( Ut_i - Ut_{i,\min} \right), \qquad (27)$$

subject to the total power budget of the base station ($C_1$), the minimum rate requirement $R_i \ge R_{i,\min}$ ($C_2$), non-negative power allocations $\alpha_{i,c} \ge 0$ ($C_3$), the channel-ordering condition $\|H_{i-1}\| \le \|H_i\|$ for subchannel assignment ($C_4$), and the binary assignment $x_{i,c} \in \{0, 1\}$ ($C_5$). The objective function in (27) is the Nash function. The first constraint guarantees that the power available to all users is bounded by the total power constraint $P_{\max}$ of the base station. The second constraint ensures the minimum rate requirement, while the fourth constraint states that a subchannel is assigned to each user if and only if $\|H_{i-1}\| \le \|H_i\|$; therefore, an NB solution exists if and only if the far user is assigned more power, so that all users can benefit from the NOMA-based cooperation. This optimisation problem is difficult to solve because it deals with both continuous and binary variables, so we relax the condition in $C_5$ by permitting $x_{i,c}$ to take values in [0, 1].
Existence of NBS
The main challenge of using the NBS with NOMA is that the interference amongst the superposed users' signals in the NOMA environment causes a non-convex utility space (rate region) [19]. Orthogonal signaling converts the non-convex utility space to a convex one, limiting the use of the NBS with NOMA. Hence, game theory is used to allocate power to each user in the NOMA environment with SLR precoding.
Theorem. The Nash bargaining (NB) solution exists if problem (27) satisfies two main conditions:
1. The utility set S defined in problem (27) is a closed and bounded convex subset.
2. The utility function $R_i$ is a concave function and injective.
Proof. It is straightforward to show that the above conditions are satisfied, as follows.
(1) The set is convex because the constraints of the optimisation problem are linear. On the other side, the maximal value of the SLR improves the power level of the desired user while reducing the interference caused to other users by the desired user, thus leading to near-orthogonality and converting the non-convex utility space to a convex one. Hence the first condition is easily satisfied. (2) To show that the second condition is also satisfied, the utility function in (27) should be proved to be concave.
To show that the second condition is true, the Hessian matrix of the utility with respect to $(x_{i,c}, \alpha_{i,c})$ is evaluated:

$$H\!\left(x_{i,c}, \alpha_{i,c}\right) = \begin{bmatrix} \dfrac{\partial^2 Ut_i}{\partial x_{i,c}^2} & \dfrac{\partial^2 Ut_i}{\partial x_{i,c}\, \partial \alpha_{i,c}} \\[2mm] \dfrac{\partial^2 Ut_i}{\partial \alpha_{i,c}\, \partial x_{i,c}} & \dfrac{\partial^2 Ut_i}{\partial \alpha_{i,c}^2} \end{bmatrix}. \qquad (31)$$

From solving the Hessian matrix in Eq. (31), the first element is < 0, so that $H(x_{i,c}, \alpha_{i,c})$ is negative semidefinite. Therefore, the utility function is concave.
Power Allocation Scheme Using NBS
One of the efficient methods that can be applied to solve the constrained optimization problem in Eq. (27) is the Lagrangian method, with a multiplier associated with each constraint. Furthermore, the Karush-Kuhn-Tucker (KKT) conditions are applied in order to find the optimal solution of Eq. (27) [32]; by replacing $\alpha_{u,c} = 1 - \alpha_{i,c}$ and applying the KKT conditions, the solution of Eq. (27) produces the optimal power allocation formula for $\alpha_{i,c}$ given in Eq. (45). Assume that subchannel c is assigned to user i according to constraint $C_4$, i.e., $x_{i,c} = 1$. Eq. (45) has the familiar shape of a water-filling equation with a slight change in the water level. Therefore, more power is allocated to the subchannels with smaller gains.
After the subchannels are allocated ($x_{i,c}$ is known), the power allocation follows from Eq. (45) and Eq. (49).
The total bandwidth B is divided into C subchannels, each with bandwidth B/C; the rate expression then involves the number of subcarriers assigned to each user.
The result of Eq. (49) is similar to that of a water-filling equation. Searching over the U × C subchannel-to-noise-ratio matrix reduces the complexity from O(U²) to O(U × C).
Finding $\alpha_{i,c}$ and $x_{i,c}$ from the solution of Eq. (49) and Eq. (56) provides an optimization solution for this NP-hard problem. Compared with [32], the final solution of our work provides the formula for the optimum solution for NOMA, whereas the optimization problem of [32] provided a solution for OFDM, which is considered an OMA system. More explicitly, in our work the optimal power value $\alpha_{i,c}$ is used as the power-domain level in the cooperative NOMA system by employing the SIC estimator at the receiver, while [32] did not use the optimal power in any estimation technique. Furthermore, in our work the SLR beamforming technique has been used to enhance the system performance by employing the value of $\lambda_i$ (solved in Eq. (21)) in the rate equation. In addition, our system considers a cooperative environment, while [32] considered a non-cooperative environment.
Equation (56) shows that the rate ratio by assigning one subchannel to the total rate should be the same for all users. This idea emphasizes the fairness of the optimal solution and gives us a metric for allocating subchannels.
BER Performance
The proposed NBS power allocation full-duplex cooperative BF for MC of NOMA (proposed scheme) introduced in Sect. 2 is simulated using Matlab codes. The simulation considers two time slots. In the first time slot, the BS broadcasts a superposition of individual (50) signals to multiple users' receivers over a Rayleigh fading channel with zero mean. In the second time slot, both users' channels cooperate with each other over a Rician fading channel with m mean (inter-user channels), unit variance and i.i.d complex Gaussian random variables. The summary of simulation parameters is shown in Table 1.
According to [23], the NOMA's BER performance of all the systems described are evaluated at a BER of 10 −5 . An acceptable BER performance for voice communication is 10 −3 while, that for reliable data transmission is at 10 −5 [33]. All the simulated results were carried out at β = 0.1, which offers better system performance, where B is a constant to meet the total transmitted power constraint [27]. Assuming the nearest user is encouraged to cooperate with the best line of sight (LOS), the Rician channel factor is taken at K = 30 dB in all simulated results [34]. While the inter-user channel is considered at SNR = 20 dB, which show an enhancement in system performance compared with lowest than 20 dB cases, since in second time slot interference due to concurrently communicating users will increase, which mean inter-user interference increase IUI, resulting in poor system performance signal [35] [36].
To evaluate the performance of the proposed scheme, we have implemented a previous multiple-input and multiple-output MIMO-NOMA [23] scheme with M = N = 4 antenna, also the BER performance of the proposed scheme is compared with that of the non-cooperative scheme, as shown in Fig. 5. Figure 5 shows the comparison of the BER performance of the proposed scheme, the non-cooperative scheme and MIMO-NOMA [23]. The result demonstrates that the performance of the proposed scheme is better than that of the non-cooperative scheme since our proposed scheme turns the interference signals (second user signal) into valuable signals after detecting and separating these signals by SIC. Specifically, to achieve a BER of about 10 −5 , the required SNR for the proposed scheme is about 2 dB less than that for the noncooperative scheme. Compared with the other work, the proposed scheme is better than MIMO-NOMA [23]. Specifically, to achieve a BER of about 10 −5 , the required SNR for the proposed scheme is about 3 dB less than that for MIMO-NOMA [23]. The improvement in dB was a result of interference reduction and information sharing in the network. Thus, with the cooperative scheme, there would have been an increase in the use of forward error correcting schemes of automatic repeat requests, as well as hybrid automatic repeat request (HARQ) in the face of loss of transmitted or corrupted signals. Figure 6 presented the performance of the proposed under different values of SNR interuser channel (5, 10, 15 and 20 dB) to show the effect of the inter-user channel on system performance. The system's performance will be enhanced when inter-channel SNR increases, as the increase in SNR of inter-user channel intended more average power of the desired signal are allocated to each cooperative user and the effect of Inter-user interference IUI will decrease. Figure 7 shows the system performance when the inter-user channel uses a LOS environment (over a correlated realistic Rician fading channel). The performance of the proposed scheme is enhanced as KdB increases. Specifically, in case KdB = 30, to achieve a BER of about 10 −3 , the required SNR for the proposed scheme is about 3 dB less than that for the non-cooperative scheme. In other words, when the inter-user channel LOS is reduced, the total proposed system performance is also reduced. since the user will not be LOS, which means the effect of IUI will increase.
Fairness Performance
In this part, we evaluate the performance of the proposed scheme in fairness. To evaluate the performance of the proposed scheme, we have simulated the previous work [9], which has been improved the "fractional transmit power control" FTPC and channel allocation model for both non-orthogonal (NOMA) and orthogonal frequency multiple access schemes (NOMAFTPC), (OFDMA-FTPC) respectively.
In power allocation based on FTPC [9], the user with lower channel condition assigned with more power to grantee the fairness between users, while the greed-based user technique has been employed to assign each subcarrier c for the users U. Moreover, the performance of the proposed scheme has been evaluated with the model of power and channel allocation in NOMA [11], that has been combined the Lagrangian duality and dynamic programming (LDDP) to find near-optimal N-LDDP solutions.
Based on the LTE Standard, the bandwidth of the OFDMA-FTPC has been fixed to 4.5 MHz, which content 25 subchannels, each subchannel has 180 kHz bandwidth. While in NOMA, 5 subcarriers are considered and used bandwidth of 900 kHz for each in NOMA-FTPC, where decoding complexity and signalling overhead will be increased with the no. of subcarriers [37]. In our proposed scheme, Following the NOMA setup in [9], we assumed 5 subcarriers are assigned for each user. In communications networks, the measure of fairness for user throughput follows Jain's fairness index [38]. The slandered form for calculating the Jain's fairness index is , the R 1 ,..., R u , denote the average users' rates. Jain's fairness index is examined between the value between 1 U and 1.0. The peak value means the system has a fairer throughput distribution. The maximum value, which is equal to 1.0 is achieved when all users achieve the same throughput. Note that the usage of this index, by himself, does not prevent the user from being served with low throughput (or zero throughputs), which, maybe lead down the value of the index. The fairness index in respect of U is shown in Fig. 9.
From Fig. 9, we observe that, first, the proposed scheme achieves the best performance, the fairness index increases in U. Since a larger U provides more flexibility in resource allocation among the users.
From the result, OFDMA-FTPC gives the lowest fairness index. The reason is that the FTPC channel and power allocation scheme are sub-optimal. This also explains the improvement enabled by the proposed power optimization algorithm in comparison to NOMAFTPC and N-LDDP. Table 2 presents the summary between our proposed algorithm and existing techniques.
Conclusions
A fair scheme of power and channel allocation based on NBS for full-duplex cooperative BF for MC of NOMA. is proposed. The proposed scheme assigns optimal power and channel allocation according to channel conditions while keeping a fair rate amongst cooperative users. Meanwhile, NBS offers a fair and optimum approach to maximise the total rate of the CNOMA system. The SLR precoding technique is used to design a beamforming vector for achieving the power domain between the superposition users' signals. NBS offers a fair and optimum approach to maximise the total rate of the NOMA system. Simulation results show that the NBS power allocation in the proposed Technology Ltd., U.K., for nine months beginning in 1997. His research interests include mobile, wireless networking, and radio resource management for the next generation wireless communication. | 2021-09-27T18:33:23.944Z | 2021-07-20T00:00:00.000 | {
"year": 2021,
"sha1": "8fd9a5a19ba56b299279ba617c81a5efb3ead00b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-398467/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "fc56a9f55c1f1e977cf9a9d6bc6fc36b83f57c3e",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119206066 | pes2o/s2orc | v3-fos-license | Band folding, strain, confinement, and surface relaxation effects on the electronic structure of GaAs and GaP: from bulk to nanowires
In this paper we show how to link the electronic structures of two III-V systems, one a direct gap material, GaAs, and the other an indirect gap material, GaP, from their bulks right down to the shape of thin nanowires. GaAs and GaP bulk and nanowire systems are studied in the zincblende and wurtzite structures both free of strain and subjected to biaxial strains perpendicular to the [111]/[0001] direction. We provide an interpretation of the band structure of nanowires, grown along the [111] (zinc-blende structure) and the [0001] (wurtzite structure) directions, in terms of the bulk band structures of the corresponding binary compounds. The procedure reveals the origin of the valence and conduction valleys relevant to determine the nature (direct or indirect) of the band gaps and the kind (direct and pseudodirect) of the valence to conduction transitions. Thus, by calculating only the bulk bands it is possible to describe the behavior of the nanowire bands even for very thin nanowires. The effects on the band structures due to biaxial strain are analogously analyzed, providing for bulk GaP the first results in literature. The role of confinement, and surface relaxation, in determining the nanowire electronic structure of thin nanowires are analyzed separately revealing that the change in the nature of the band gap is due mainly to surface relaxation effects, not confinement. We show that the change for indirect/direct of the gap from the bulk to the 1D systems is mainly due to the competition between the energies of bulk conduction valleys which are differently inuenced by confinement and strain. While the main effect of confinement is to open all gaps it is not necessarily the main cause of the direct/indirect change in the nature of the electronic gap as instead is usually claimed in the literature.
Abstract
It is common to find materials that show strikingly different properties between its bulk and nanometric forms. In this paper we show how to link the electronic structures of two III-V systems, one a direct gap material, GaAs, and the other an indirect gap material, GaP, from their bulks right down to the shape of thin nanowires. The understanding of how these changes occur represents a scientific and technological challenge and is relevant for the design and prediction of novel nanostructured materials. GaAs and GaP bulk and nanowire systems are studied in the zinc-
I. INTRODUCTION
The comprehension of how different materials properties transform when the spatial dimensions are reduced is not an obvious task. Experimentally, it would require a systematic study that monitors the evolution of the properties of several samples over the whole range of sizes and shapes between the bulk and the smallest nanopieces of a given material. Further, is not easy for the experimentalist to separate out the influences of the different factors concurring to produce a given result for each analyzed sample. Some electronic properties, in particular, can only be properly understood through appropriate theoretical treatments, e.g., the nature and ordering of the electronic levels in each sample. Ab initio theoretical approaches allow for a precise description of materials properties for samples at the two extremes, the infinite size bulk system (with translational symmetry) and the nanosamples of a given system. In addition, it is possible for the theorists to isolate the different factors contributing to produce a given measured result. Hence, an appropriate theoretical study can allow for an unique understanding of how the properties of bulk and nano samples of a given material are related, and how these relationships change from material to material.
Semiconductor nanowires (NWs) are considered promising systems for different kinds of technological applications including, for instance, light emitting diodes (LEDs) [1], lasers [2], solar cells [3,4], and high electron mobility transistors (HEMTs). [5,6] One of the advantages in the use of these nanomaterials is related to their synthesis process, which allows a precise control of their characteristics. [7] In addition, the large surface to volume ratio of NWs enables an efficient strain relaxation that makes it possible to grow nanowire heterostructures (NWHs) from materials with mismatched lattice constants, a fact that would not be possible in the conventional 2D films. For the III-V NWs, in particular, the exploitation of the allowable degrees of freedom such as compositions, structural politypes, orientations, diameters, surface passivation, and doping, can be used to tune their electronic structure properties. It is therefore decisive to understand how these various variables influence the electronic structure of a NW, taking as the starting point the usually well known electronic structures of the bulk systems.
In this work we focus on GaAs and GaP NWs. The reason is that these NWs show a huge potential for nanophotonics and nano-optoelectronic applications. GaAs NWs are lattice matched to Ge while GaP NWs to Si, allowing for a nice integration with the micro-electronic circuits. GaAs NWs have been extensively studied whereas GaP NWs have been studied much less. GaP NWs have been shown to emit in the green with a very intense photoluminescence signal even if the expected first emission has been shown to be dark. [8] High quality heterostructured NWs built from GaAs, GaP and GaAsP were grown [9], both in the axial geometry as well as in the core-shell one. Also high quality GaAs NWs with mixed zinc-blende and wurtzite structure have been grown. [10] We are mainly interested on how the band edges around the main gap between occupied and empty electronic states develop starting from the electronic structure of the binary compounds. In a previous study Peng et al. have shown how the type of the band gap of GaAs NWs (direct or indirect) is determined by the competition between different valleys at the conduction band edge whose energies can be tuned by subjecting the NW to a given strain. [11] We investigate the origin and characteristics of the conduction and valence band valleys for both GaAs and GaP NWs. In their bulks GaAs is a direct gap semiconductor, whereas GaP is an indirect gap semiconductor. We analyze the effects on the conduction and valence edge valleys due to the structure (zinc-blende (ZB) or wurtzite ( WZ)), dimensions, confinement, sidewall atomic relaxation, and biaxial strain (the kind of strain arising when the materials are grown one on top of the other). We also compare the electronic structures of GaAs NWs with those of the much less studied GaP NWs.
II. METHODOLOGY
The first principles calculations are based on the Density Functional Theory (DFT) [12,13] as implemented in the open source package QUANTUM ESPRESSO (http://www.quantumespresso.org). [14] For the electronic exchange and correlation potential we used the local density approximation. [15,16] The interaction between the valence electrons and the atomic cores are described by separable norm-conserving core-corrected pseudopotentials. [17,18] These pseudopotentials have shown to be quite efficient (convergence at a small cutoff and prediction of good structural properties) in a number of occasions. [19,20] The Kohn-Sham (KS) wave functions are expanded in plane waves with a cutoff energy of 40 Ry. The equilibrium geometries are obtained when the atomic forces are smaller than 10 −7 Ry/Bohr and the total energy converges within 10 −6 Ry. k-point Monkhorst-Pack grids [21] are used for the Brillouin zone sampling. For GaAs and GaP binary compounds with the ZB (WZ) structures, a mesh of 4 × 4 × 4 (4 × 4 × 2) was used. For the NWs, a 1 × 1 × N mesh was used, with N = 8 in the case of WZ NWs and N = 6 in the case of ZB NWs.
A. GaAs and GaP binary systems
Since the DFT-LDA scheme leads to a general underestimation of the band gaps the aim of this section is to investigate the possibility to obtain meaningful trends for the electronic structure of the GaAs-GaP compound systems. Calculations are carried out for the binary compounds and, wherever possible, comparisons are made with other theoretical works and experimental results.
For the ZB structures, we have calculated the total energy as a function of the lattice parameter using a 40 Ry cutoff on the plane waves expansion, and have fit the results to the Murnaghan equation. [22] For the WZ structures, instead, to obtain the equilibrium geometry (lattice parameters a and c need to be optimized simultaneously), we have calculated the total energy using a cutoff energy of 150 Ry and an optimization procedure for both the cell parameters and the atomic positions. The results for the structural parameters and the band gaps of the bulk materials are shown in Table I. The WZ compounds do not exist at normal ambient conditions and their data have been calculated only as a reference to the results obtained on the corresponding NWs (where the dominant phase at relatively small diameters is actually the WZ phase).
As we can see in Table I, the equilibrium lattice constants for the binaries in the ZB structure are in good agreement with the experimental values. The LDA underestimation of the lattice parameters (∆ a ) is 0.047Å for GaAs and 0.051Å for GaP, similar for the two The calculated value of the c/l ratio is larger than the ideal one 8 3 = 1.6329932, which suggests that the WZ phase is not the stable polymorph of GaAs and GaP, in agreement with the rule stated by Yeh et al.. [24] We notice that the lattice parameters a = l In the WZ phase, the band gap value is similar to that of the ZB phase in the case of GaAs, while for GaP it becomes direct and smaller. Similar trends were found by Belabbes et al. [27] with the band gaps calculated using the LDA-1/2 method. [28,29] Actually, Yeh, Wei and Zunger [30] established three rules to predict the band structure of a WZ compound from its ZB energy levels. For GaAs, the authors found that the corresponding WZ system will be direct with a slightly larger gap. For GaP, on the other hand, the band gap becomes pseudodirect in the WZ phase. This indirect-pseudodirect transition occurs because the L 1C state is close to the CBM at X 1C . A direct experimental comparison between ZB and WZ bulks is not possible since WZ bulks are not available. There is no a definitive conclusion in the literature about the values obtained for the band gaps of GaAs in the WZ and ZB phases. [26,[31][32][33][34][35][36][37][38][39][40][41][42][43][44][45] In Table II we report the calculated and experimental energy band gaps at different k-points. For GaAs, we have found differences between these two quantities of 0.72, 0.61 and 0.73 eV at the G, X and L points. For GaP, these quantities are 0.97, 0.90 and 1.07 eV. In percentage terms, the obtained values for these differences were 47% (34%) at the G-point, 31% (38%) at the X-point and 40% (39%) at the L-point for GaAs (GaP). The underestimation of the gaps is seeing to have almost the same percentual magnitude for GaAs and GaP over all the Brillouin zone. The results are given in Figure 2, which shows the band edge energies of the GaAs and GaP systems at different k points of their Brillouin zone, as a function of the in-plane lattice constant a . We can see from Fig. 2 that in general the compression of the in-plane lattice parameters leads to a shift of the eigenvalues towards higher energies, while a dilation leads to a lowering of the eigenvalues. Furthermore, we notice that the trends are almost linear in all cases, albeit with a different slope for each level. Fig. 2 (a) shows that ZB GaAs switches from a direct to an indirect gap when its in-plane lattice parameter squeezes circa 3.2%. The On the contrary the binaries in the WZ structure Fig. 2 (c),(d) remain always direct gap materials. The only relevant difference between GaAs and GaP is the steeper slope of the conduction minimum at G in GaP than in GaAs (a feature presented also by the ZB structure). Also, in all cases the A point state is the most sensitive to the biaxial strain.
In Table III Figure 3 shows representative cross sections and atomic positions along the growth directions of the largest studied NWs.
We used tetragonal supercells having the minimum size along the growth direction, i.e., √ 3a for the ZB and c for WZ where a and c are the bulk lattice parameters. In order to avoid interactions between the images at different cells, the lateral dimensions of the tetragonal cells were adjusted to accommodate a vacuum layer of approximately 10Å. Table IV shows the equilibrium lattice constants and the band gaps obtained for the ZB and WZ NWs.
The calculated band structures for the largest GaAs and GaP NWs with ZB and WZ structures are shown in Fig. 4. The band structures for the thinner NWs are very similar and will not be shown here. As expected, the lateral quantum confinement increases the energy separation between the valence and conduction bands with respect to their bulk counterparts, increasing their band gaps. Figure 5 shows the behavior of both direct and indirect band gaps (C A and C B in Fig.4, respectively) as a function of the NW diameters. We first emphasize that, for the range of studied diameters, there is no change in the band gap character (direct or indirect) for both GaAs and GaP NWs. This allows us to restrict the analyzes hereafter to only a given diameter for each NW.
The main fact to highlight here is that the character of the band gaps in both NWs and for each structure (ZB, WZ) is opposite to those of the respective bulk counterparts. This The reasons to these differences are related to the reduced dimensionality of the NWs, which will be evidenced through (i) band folding, (ii) confinement, and (iii) surface relaxation effects. Here we will not analyze surface reconstruction effects since the NWs have their surface bonds saturated by hydrogens with fractional charges.
Contributions due to band folding effects
Comparing the band structures of the ZB NWs ( Fig. 4 (a) and (b)) and ZB binary systems along the G-L direction (Figure 1 (a) and (b)) which corresponds to the G-L [111] direction of the NWs, it is possible to see that in both cases we have a conduction valley around the G-point, much less dispersive in the case of the NWs than in the case of bulk GaAs and GaP systems, and an additional conduction valley around the L point. In GaAs the valley at G is lower in energy than the valley at L. For GaP the opposite is true. On the other hand, for the WZ NWs, the conduction edge (see Fig. 4 (c) and (d)) with its two valleys, one at G and the other before Z, is completely different from the band states at the conduction edge of the WZ bulks along the G-A [0001] direction ( Fig. 1 (c) and (d)) where only one valley is present at the G point, even considering that the NWs share the hexagonal symmetry of a WZ bulk more than that of the cubic ZB bulk.
In order to understand the origin of the two valleys at the bottom of the conduction bands in the NW systems and link them to the states in the corresponding binary systems we studied the band folding upon the [111] and [0001] directions due to confinement effects in the NWs.
In the NWs there will be a discrete set of periodic motifs at directions perpendicular to the NW axis. This leads to the folding of discrete levels, parallel to the NW axis, onto to the NW band structure along its axial direction. As the NW diameter increases, more periodic motifs will appear, leading to an increasing number of folded states (that are parallel to the NW axis), along the translational symmetry direction of the NW. In the bulk limit all the bands along all the directions parallel to the NW axis will fold exactly onto the considered symmetry direction, generating the bulk band structure along that direction.
We first analyze what happens to the band structure of the binary compounds when the to the G-A direction but along the BZ border edge, as shown in Fig. 7. We can see that the resulting dispersion of the superposed band structures is the same of that obtained doubling the in-plane lattice vectors, as shown in Fig. 6 (a) and (b).
In the case of the WZ band structures the C A valley is related to the original valley at the G point for both GaAs and GaP, and the C B valley is related to a folded band. For the ZB band structures we observe a different behavior. In the case of GaAs, the C A valley is the same G valley of the unfolded dispersion, and the C B valley is also due to the original dispersion along the G-A direction, when the ZB phase is described using the hexagonal unit cell. No contributions from the folding are apparent. In ZB GaP, instead, the C A valley derives from a folded band while the C B valley is related to the original dispersion along G-A. This shows that the direct gap at G in ZB GaP, obtained by this folding procedure, is in fact a pseudo-direct gap. We should note also that the bottom edge of the conduction bands along G-A could be lower in energy and with a slightly different shape than that estimated by simply folding only the M-L bands onto the G-A bands since not always the lowest conduction states occur along the M-L or G-A directions but they could occur along others of the directions parallel to G-A.
Contributions due to confinement and surface relaxation effects
We have seen that we can roughly reproduce the behavior of the valence and conduction band edges in the NWs just folding the binary bands in the appropriate way. This result can be useful since it allows to estimate the modifications in the band structures of the NWs just using the binary band structures which are much easier to calculate and often experimentally known. We will now look at the contributions to the electronic structure of the GaAs and GaP NWs (ZB and WZ) arising from the spatial confinement and surface relaxations. Starting from the bulk materials, we first apply biaxial stress and look at the folded band structures.
We then build NWs by cutting the biaxially stressed bulk materials. Finally, the atomic positions of the NWs are relaxed. This will allow us to analyze the electronic effects in the band structures arising from confinement and surface relaxation separately and in an unified approach for both materials.
To see how the biaxial strain would change the relative energies of the C A and C B conduction valleys of the bulk materials we have calculated the folded bands for the strained GaAs and GaP binaries in both WZ and ZB phases. The results for GaAs are shown in belong always to the original dispersion.
We will now look at the confinement effects on the electronic structure of the GaAs and GaP NWs starting from the strained binary systems. We have constructed GaAs and GaP NWs, with diameters of approximately 1.5 (2.0) nm for the ZB (WZ) structure by appropriately cutting the corresponding strained binary compounds with a = a ave . The dangling bonds were saturated with pseudohydrogen atoms. With these geometries we have
IV. SUMMARY AND CONCLUSIONS
In this work we have studied the electronic properties of GaAs and GaP NWs. The calculations are based on the Density Functional Theory. The NWs sidewalls were passivated using pseudohydrogen atoms.
We have followed the evolution of the band edge states from their dispersion in the This results show how important could be the engineering of the NW sidewalls in thin NWs, other than the strain, to tune the NW band edge dispersion. | 2018-08-08T20:58:51.000Z | 2018-08-08T00:00:00.000 | {
"year": 2019,
"sha1": "799be5241da61a58d862a2334d0801a2fa3f5b7c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.02938",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "799be5241da61a58d862a2334d0801a2fa3f5b7c",
"s2fieldsofstudy": [
"Physics",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
86308096 | pes2o/s2orc | v3-fos-license | Molecular approach to evaluate the genotoxicity of glyphosate ( roundup ) using mosquito genome
Glyphosate, an active ingredient in Roundup is a broad spectrum, systemic and non -selective herbicide which is commonly used for eliminating weeds in agriculture and forest landscapes. The present studies deal with the evaluation of the genotoxic potential of Glyphosate with two different dose concentration of LD20 and LD40 on a mosquito Culex quinquefasciatus taken as an experimental model. For this, polymerase chain reaction technique (PCR) was used for detecting DNA damage by amplifying ribosomal DNA internal transcribed spacer 2 (ITS 2) region. The amplified products were sequenced and the results of treated and non-treated controls were compared by using Clustal W software programme. The results were studied in the form of transitions, transversions, deletions and additions of bases. The DNA band amplified from control stocks consisted of 440 bases while those from LD20 and LD40 treated individuals were comprised of 423 and 468 bases respectively. The total number of mutations caused in LD20 treated stock was 205 out of which 68 were transitions, 90 transversions, 32 deletions and 15 additions. In case of LD40 treated individuals, as many as 221 bases had suffered mutations, out of which 66 were transitions, 90 transversions , 12 deletions and 41 additions. In both the cases the rate of transversions was higher than transitions. From these results it was evident that glyphosate has a potential to promote gene mutations in the individuals exposed to its semilethal doses.
INTRODUCTION
Glyphosate is a non-selective broad spectrum herbicide commonly sold as a commercial formulation named Roundup.Since its introduction in 1970s, it has been widely used for killing unwanted plants both in agriculture and non -agriculture landscapes (Williams et al., 2000).It is a combination of the active ingredients glyphosate and various adjuvants in different concentrations.One of the major adjuvants is a surfactant polyethoxylated tallowamine (POEA) along with minor components including antifoaming and colouring agents, biocides and inorganic ions for pH adjustment.The POEA itself causes ocular burns, redness, swelling and blisters, short term nausea and diarrhoea.In combination with these components glyphosate becomes more effective in its action as a pesticide due to increased stability and bioaccumulation ( Cox 1998;Richard et al., 2005;Benachour et al., 2007).Its action starts with penetration through plasmatic membranes followed by inhibition of the enzyme 5-enolpyruvoyl -shikimate 3-phosphatesynthase, which is essential for the synthesis of aromatic amino acids in plants.This ultimately leads to the inhibition of nucleic acid metabolism and protein synthesis that are required for its growth and survival (Steinrucken and Amrhein 1980;Malik et al., 1989).A variety of toxic effects of glyphosate have also been observed on various stages of reproduction and genetic material of the animals exposed to it (Bolognesi et al., 1997;Peluso et al., 1998;Walsh et al., 2000;Daruich et al., 2001;El Demerdash et al., 2001).There are a number of techniques to assess the genotoxicity of pesticides on genetic material which involves the use of a number of tests or protocols (Gillet 1970;Sobels 1974;Evans 1977;Gaulden and Liang 1982;Menzer 1987;Zaman et al., 1994;Chaudhry andAnand 2004 2005).In the last few years the development of new assays, such as comet assay (McKelvey et al., 1993;Pandrangi et al., 1996), automatic scoring techniques for micronuclei ( OCDE, 1998 ) and 32 P-post labeling assay for the detection of DNA adducts ( Phillips 1997).Some of the recent advances in the field of molecular biology, like gene amplification and DNA fingerprinting with PCR technique, offer new possibilities for detecting DNA damage even at the level of single nucleotide.Jones and Kortenkamp (2000) demonstrated that the genomic alterations in the nucleotide sequence can be detected with PCR assay even if 2% of the cells are affected by the mutagens.In the present study rDNA internal transcribed spacer 2 (ITS 2) sequence was selected to assess the genotoxic effect of glyphosate.This spacer lies between 5.8s and 28.5s rRNA coding sequence.It is a phylogenetic marker which is highly conserved within all eukaryotes and carry some of the unique nucleotide sequences of rDNA, therefore any change occurring in them in the form of deletions, additions, transitions and transversions are considered significant.The present set of investigations is a first ever attempt in recording the glyphosate induced sequence alterations in rDNA domain of Culex quinquefasciatus taken as an experimental insect.In relevance to this, two different concentrations LD 20 and LD 40 of glyphosate were used in evaluating the mutagenic consequences in the genome of Culex quinquefasciatus.
Glyphosate [N-(phosphonomethyl)glycine]
) is commonly sold in the form of a formulation named Roundup (Monsanto Company, St. Louis, MO) under CAS no.1071-83-6 , with a molecular formula C 3 H 8 NO 5 P (Fig. 1) and molecular weight of 169.08.For the present purpose, LD 20 and LD 40 were calculated by probit analysis (Finney 1971) had the values of 0.064 µl/ ml and 0.275 µl/ ml respectively, (Figs. 2 and 3 ).The gravid females of Culex quinquefasciatus were collected from inhabitation in the village Nadasahib along a rivulet, 20 kms East of Chandigarh.They were allowed to lay eggs in water filled petridishes placed in the breeding cages.The egg rafts obtained in this way were allowed to hatch and the larvae were reared on a protein rich diet consisting of a mixture of finely powdered dog biscuits and yeast powder in the ratio of 6 : 4 respectively.A colony was raised under suitable conditions of temperature and humidity in mosquito rearing laboratory (Krishnan 1964;Singh et al., 1975, Clements 1994).Fixed number of freshly hatched healthy fourth instar larvae were treated with selected doses of the pesticide by rearing them in glyphosate containing rearing medium for 24 hours after which they were transferred to pesticide free water and allowed to grow upto adult stages.The desired number of control
DNA extraction and amplification:
The DNA extraction was carried out as per the standard protocol of Ausubel et al., (1999) with minor modifications for mosquito genome by Chaudhry et al., (2004) and Chaudhry and Sharma. (2006).The integrity of the DNA sample was tested by following the procedure of Sambrook et al., (1989) while the concentration and purity were determined by ultraviolet absorption spectroscopy.The two specific primers viz: forward primer (FP) 5'-TGTGAACTGCAGGACACAT-3', and reverse primer (RP): 5' -TATGCTTAAATTCAGGGGGT-3' were used for amplifying the ITS 2 region of the control and treated stocks of Culex quinquefasciatus.The amplification reactions were carried out according to the procedure of Williams et al., (1990) according to which the reaction mixture was prepared by mixing 16.8 µl of distilled water, 3 µl Taq buffer, 3 µl DNTP's, 1.2 µl forward primer, 1.2 µl reverse primer, 1.2 µl Taq polymerase, 1.2 µl MgCl 2 and 2.4µl genomic DNA.After loading this reaction mixture in the thermocycler, the reaction was programmed for initial denaturation at 94 0 C for 5m, followed by 37 cycles of denaturation, annealing and extension at 94 0 C for 1m, 59 0 C for 1m, 72 0 C for 1m respectively followed by one cycle of final extention at 72 0 C for 5m.The end products of PCR were resolved on 2% agarose gel containing ethedium bromide dye using 1X TAE buffer at a constant voltage of 75V.The gel was visualized over long wave UV transilluminator and photographed using Polaroid camera.A 100 bp DNA ladder (gene ruler) was also run along with all the amplification reactions for calculating the number of base pairs in each DNA band.
RESULTS AND DISCUSSION
In figure 4 marked with asterisk ( * ) are the regions where bases were identical in the normal and treated mosquitoes while dashes ( -) indicate the loci differing due to deletion and addition of bases (Fig. 5, 6).In addition to the places marked with asterisk and dashes, there were some regions which showed differences in the complementary bases in the sequence of the treated mosquitoes.These were the regions where transitions and transversions had taken place.In LD 20 treated sequences, 205 bases had suffered these mutations in which 68 were transitions, 90 transversions , 32 deletions and 15 additions (Table 1).Similarly, in case of LD 40 treated sequences a total of 221 bases had suffered such point mutations, out of which 66 were transitions, 90 transversions , 12 deletions and 41 additions (Table 2).In both the cases the rate of transversions was higher than transitions.Traditionally, pesticide induced mutations in the integrity of DNA have been studies in the form of numerical and structural changes in the chromosomes, production of micronuclei, errors in the organization and functioning of spindle apparatus, substitutions by base analogues, DNA adducts and dislodging of phosphodiester bonds.While Mamta Bansal and Asha Chaudhry / J. Appl.& Nat.Sci. 2 (1): 96-101 (2010) studying the effect of glyphosate Bolognesi et al., (1997) reported an elevation in the frequency of sister chromaid exchanges in human lymphocytes while Lioi et al., (1998) observed different types of chromosomal aberrations.In the same way Peluso et al., (1998) demonstrated dose dependent formation of DNA adducts in the cells of kidney and liver of mice.Atienzer et al. (1999) while working on Dephnia magna concluded that DNA damage and mutations were the main causes which influenced that the RAPD pattern variations between benzo{a}pyrene exposed and non-exposed individuals, provided sufficient number of cells got affected due to genotoxicity of the agents.In some of the related studies Rank et al., (1993) and Grisolia (2002) found that the commercial formulations of glyphosate were more toxic than its pure form due to various adjuvants present in it.
The present results of the limited scope tend to raise a point of caution about the use of glyphosate as exposure to such directly acting pesticides can also prove deleterious to the genome of other living systems including man and animals of economic importance.
Fig. 4 .Fig. 3 .Fig. 2 .
Fig. 4. PCR amplification of rDNA ITS 2 of treated and nontreated individuals of Culex quinquefasciatus.Lane M: Gene ruler (DNA ladder), Lane A: DNA band from non-treated individual, Lane B: DNA band from LD 20 treated individual, Lane C: DNA band from LD 40 treated individual.
Fig. 5 .
Fig. 5. Analysis of multiple sequence alignment in the rDNA ITS 2 of control and LD 20 treated individual of Culex quinquef asciatus (* complementary bases, -missing bases ).
Fig. 6 .
Fig. 6.Analysis of multiple sequence alignment in the rDNA ITS 2 of control and LD 40 treated individual of Culex quinquefasciatus (* complementary bases,-missing bases). | 2018-12-07T06:39:56.212Z | 2010-06-01T00:00:00.000 | {
"year": 2010,
"sha1": "b3b090363be5ade16bce3b8803cc71a88c5d1245",
"oa_license": "CCBYNC",
"oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/105/83",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b3b090363be5ade16bce3b8803cc71a88c5d1245",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
246053096 | pes2o/s2orc | v3-fos-license | Protective effect of homogeneous polysaccharides of Wuguchong (HPW) on intestinal mucositis induced by 5-fluorouracil in mice
Background In hospitalized patients, drug side effects usually trigger intestinal mucositis (IM), which in turn damages intestinal absorption and reduces the efficacy of treatment. It has been discovered that natural polysaccharides can relieve IM. In this study, we extracted and purified homogenous polysaccharides of Wuguchong (HPW), a traditional Chinese medicine, and explored the protective effect of HPW on 5-fluorouracil (5-FU)-induced IM. Methods and results First, we identified the physical and chemical properties of the extracted homogeneous polysaccharides. The molecular weight of HPW was 616 kDa, and it was composed of 14 monosaccharides. Then, a model of small IM induced by 5-FU (50 mg/kg) was established in mice to explore the effect and mechanism of HPW. The results showed that HPW effectively increased histological indicators such as villus height, crypt depth and goblet cell count. Moreover, HPW relieved intestinal barrier indicators such as D-Lac and diamine oxidase (DAO). Subsequently, western blotting was used to measure the expression of Claudin-1, Occludin, proliferating cell nuclear antigen, and inflammatory proteins such as NF-κB (P65), tumour necrosis factor-α (TNF-α), and COX-2. The results also indicated that HPW could reduce inflammation and protect the barrier at the molecular level. Finally, we investigated the influence of HPW on the levels of short-chain fatty acids, a metabolite of intestinal flora, in the faeces of mice. Conclusions HPW, which is a bioactive polysaccharide derived from insects, has protective effects on the intestinal mucosa, can relieve intestinal inflammation caused by drug side effects, and deserves further development and research.
but can also be caused by drug side effects. For example, according to research statistics, 40% of cancer patients receiving chemotherapy develop lower gastrointestinal mucositis [3], which leads to malnutrition. 5-fluorouracil(5-FU)is an anti-metabolic anticancer drug that is widely used in the treatment of cancer. However, a large proportion of patients using 5-FU develop intestinal mucositis (IM) [4]. Experimental studies have shown that 5-FU can decrease crypt and villi length by triggering apoptosis in intestinal epithelial cells [5]. In addition, after 2 days of 5-FU treatment, the nuclear factor kappa-B (NF-κB)was highly overactive in the small intestine. Various inflammatory mediators, including tumour necrosis factor-α (TNF-α), are involved in the process of IM [6]. Furthermore, among the molecular events that cause intestinal mucosal inflammation, chemotherapeutic drugs lead to an imbalance of intestinal flora, which in turn leads to intestinal mechanical barrier and mucosal barrier dysfunction [7].
The treatment of small IM remains difficult, and the traditional zinc derivatives, loperamide or mesalazine do not have the desired effectiveness [8]. Therefore, searching for natural products that can treat IM is essential to protect against gastrointestinal damage and reduce inflammation. It has been shown that natural dietary polysaccharides can cure IM by promoting the development of epithelial cells and mucosal immune cells, enhancing intestinal barrier function, and thus promoting nutrient absorption.
For example, homogenous polysaccharides extracted from Dendrobium huoshanense have regulatory effects on both intestinal and systemic immunity, and it can improve IM by improving mucosal barrier function and microbial composition in different regions of the intestine [9]. Sea cucumber fucoidan (SC-FUC) improves intestinal tissue architecture, including indicators such as villi height and crypt depth, and ameliorates immune imbalance by regulating the Th1/Th2 ratio to counteract small intestinal mucosal damage [10]. However, the biological activity of insect polysaccharides has not been properly explored compared with those of many natural organisms, such as plants, fungi and marine organisms. Previously, glycosaminoglycan from dung beetles exhibited anticancer properties, and Huechys sanguinea glycosaminoglycan has been used to treat tuberculous amenorrhea and scabies [11]. A novel polysaccharide extracted from the larvae of the black soldier fly (BSF) (Hermetia illucens) acts as an immune activator by stimulating RAW264.7 cells through the TLR signalling pathway [12].
The traditional Chinese medicine Wuguchong is a kind of natural medicine produced by maggots; it is the dried larva of Chrysomya megacephala or other related insects of Calliphoridae. Ancient Chinese medicine books recorded that its flavour is salty and sweet and that its nature is cold. Using this medicine invigorates the spleen, eliminates food accumulation, clears heat and eliminates infantile malnutrition. Previous studies have shown that polysaccharides extracted from Wuguchong (PEW) could be used as bioactive agents to prevent obesity [13]. Moreover, in that study, researchers tentatively explored the role of PEW in regulating intestinal microbial composition and maintaining intestinal epithelial integrity, which can reduce the ratio of Firmicutes to Bacteroides and the relative abundance of Proteobacteria in high-fat fed mice and improve the expression of tight junction proteins. On the other hand, the fatty acid extract of Wuguchong has been shown to promote wound healing on the surface of the body and promote the proliferation and migration of endothelial cells [14].
In this study, the polysaccharides were further purified, homogeneous polysaccharides of Wuguchong (HPW) was obtained by gel column chromatography, and its protective effect on IM induced by 5-FU was studied in mice. HPW protects the mechanical and immune barriers of the small intestine by improving the morphology of small intestinal villi and promoting goblet cell proliferation and tight junction protein expression. In addition, in the current study, we assessed the amount of short-chain fatty acids (SCFAs) in the faeces of mice. Based on previous studies [13], we hypothesized that HPW may protect against chemotherapeutic drug-induced intestinal mucosal damage through the intestinal flora and metabolic pathways to promote nutrient absorption.
Extraction of HPW
The crude extract of polysaccharides from Wuguchong was prepared by water extraction and alcohol precipitation according to our previous method [13]. Briefly, dried insect powder was boiled three times in 95% alcohol in a Soxhlet reflux machine to remove excess lipids. A solidliquid ratio of 20 mg/L was kept slightly boiling at 110 °C, aqueous extraction for 6 h. The concentrated solution was precipitated with 3 times ethanol for 2-3 days, the supernatant was poured, frozen and dried into powder form, and the crude polysaccharide extract was obtained.
After protein removal by the Sevage method, the crude extract was separated and purified by the DEAE-cellulose column chromatography method to obtain homogenous polysaccharides with similar molecular weights and the same polarity. DEAE Sepharose Fast Flow packing was eluted with distilled water to a neutral pH condition, the flow rate was adjusted to 5 mL/min, and this procedure was maintained for 2 h. The crude polysaccharides were dissolved in distilled water, followed by stepwise elution with distilled water and 0.2, 0.5 and 2.0 M NaCl solutions at a flow rate of 15 mL/min. The phenol-sulfuric acid method was used for tracking and detection. A microplate tester was used for detection at 490 nm, and a scatter diagram was drawn (Fig. 1A).
According to the peak shape, each component was collected, concentrated, dialysed in a 3.5 kDa molecular weight cut-off membrane, and freeze-dried. The fractions that eluted with 0.2 M NaCl were further purified on a Sephacryl S-200 column (1.6 × 80 cm) and were eluted with potassium phosphate buffer (PBS, 0.1 M, pH 7.2) at a flow rate of 0.5 mL/min [15].
Finally, a component with a relatively high concentration was obtained, which was named HPW. The extraction rate was 1.2%, and the purity was 87% by the phenol-sulfuric acid method.
High-performance anion-exchange chromatography (HPAEC) was used to identify the monosaccharide composition of HPW. The chromatographic system used a Thermo ICS5000 ion chromatography system (Thermo Fisher Scientific, USA), and an electrochemical detector was used to analyse the monosaccharide components with the following parameters: flow rate, 0.5 mL/min; injection volume, 5 μL; solvent system, 0.1 M NaOH: The data showed that HPW was mainly composed of galactose (21.55%), glucose (20.77%), rhamnose (7.05%), mannose (7%), arabinose (5.02%) and xylose (3.21%). An ion chromatogram of the samples is shown in Fig. 1B, and the monosaccharide composition is detailed in Fig. 1C.
Animals
SPF-grade C57BL/6 male mice weighing 18-22 g and aged 6-8 weeks were purchased from the Experimental Animal Centre of Dalian Medical University, China. All mice were adaptively fed for 7 days for subsequent experiments. The ambient temperature was 20-25 °C, and the relative humidity was 40-60%. Mice were maintained on a 12-h light-dark cycle and were randomly given chow and drinking water. The procedures for animal experiments were performed strictly in accordance with the standard guidelines for laboratory animals and approved by the Ethics Committee of Dalian Medical University (Ethical Approval Number: AEE19074).
Experimental procedure
Forty C57BL/6 male mice were randomly allocated into 5 groups with 8 mice in each group. The groupings were as follows: To obtain a stable experimental animal model of IM, we used a previously reported protocol [17]: normal saline and 5-FU (50 mg/kg) were injected intraperitoneally during the first three days. Water, HPW or mesalazine was administered orally 1 h before 5-FU administration for one week.
Physical manifestations and tissue collection
The body weight and diarrhoea score of each mouse and the food intake of each group were recorded every day during the experiment. Diarrhoea severity was scored daily by an uninformed researcher based on criteria in previous studies [18]. The scoring criteria for diarrhoea severity were as follows: 0: normal stool; 1: slight (wettish and soft stool); 2: moderate (unformed stool, wet crissum and stained coat); and 3: severe (watery stool).
At the end of the experiment (7 days after treatment), fresh faeces were collected and quickly placed into liquid nitrogen for preservation and used for SCFA analysis. Mice were deprived of water and food for 12 h. Blood was collected after pentobarbital anaesthesia for enzymelinked immunosorbent assay (ELISA) analysis. The entire small intestine was rapidly dissected. A 2-cm intestinal segment of the jejunum was taken 15 cm behind the pylorus and fixed with 4% paraformaldehyde for haematoxylin-eosin (HE) staining and periodic acid-Schiff (PAS) staining. The rest of the small intestine was used for other molecular biological experiments.
Histological analysis
The 2-cm jejunum segment was divided into two portions and placed in 4% paraformaldehyde fixative and Karnovsky fixative. After paraformaldehyde fixation for 12-24 h, the tissues were subjected to procedures such as dehydration and paraffin embedding. Slices with a thickness of 5 μm were cut and rehydrated with graded ethanol after being dewaxed with xylene. HE staining was then performed, the dye was washed off with water, and the slices were dehydrated and sealed. Villus length and crypt depth, which are specific indicators of intestinal barrier function and absorption, were measured under a microscope (Olympus BX-40, Japan) using Image-Pro Plus 6.0 software.
The other part of the jejunal tissue was fixed in Karnovsky buffer and then similarly processed according to dehydration, embedding and sectioning procedures. The sections were stained with Schiff 's reagent for 20 min, followed by haematoxylin for 20 min. The staining solution was washed away, and the slices were subsequently dehydrated and sealed. The number of goblet cells on each villus was counted by Image-Pro Plus 6.0 software. Goblet cells appear purplish red under a microscope and are an important indicator of the small intestinal mucosal barrier. Reagents were provided by Wuhan Servicebio Biotechnology Co., Ltd.
Reagents and antibodies
A Diamine oxidase activity (DAO) detection kit was purchased from Beijing Solarbio Technology Co., Ltd. A total antioxidant capacity (T-AOC) test kit was obtained from Nanjing Jiancheng Biological Engineering Research Institute Co., Ltd. ELISA kits (Shanghai Langton Biotechnology Co., Ltd.)were used to measure D-lactate, a mechanical barrier marker, and SIgA, a mucosal barrier marker. Claudin-1, proliferating cell nuclear antigen (PCNA), TNF-α and GAPDH antibodies were provided by Wuhan Proteintech Biotechnology Co., Ltd. Occludin, NF-κB (P65) and COX-2 antibodies were purchased from Abcam (Cambridge Science Park in Cambridge, UK). Mouse IL-10 and IL-1β ELISA kits was purchased from Beijing Solarbio Technology Co., Ltd.
Measurement of SCFAs
The SCFA levels in mouse faecal samples were measured according to a previously reported method [19]. An appropriate amount of the sample was added to 0.3 mL of water, 100 μL of 50% sulfuric acid, 25 μL of 500 mg/L internal standard (cyclohexanone) solution and 0.5 mL of ether, after which the mixed solution were homogenated for 1 min and centrifuged at 12,000 rpm at 4 °C for 10 min. The supernatant was placed on the instrument for testing (Gas Chromatograph-Mass Spectrometer, Shimadzu GCMS QP2010-Ultra, Japan). The chromatographic system was as follows: Agilent DB-WAX capillary column (30 m × 0.25 mm × 0.25 μm). The carrier gas was high purity helium (≥ 99.9%), and the flow rate was 1.0 mL/min. The inlet temperature was 220 °C, the injection volume was 1 μL, and the solvent delay time was 2.5 min for splitless injection. For the mass spectrometry system, an electron bombardment ion source (EI) was used, the ion source temperature was 230 °C, and the interface temperature was 220 °C. The chromatograph was connected to a microcomputer with a detector for collecting the results of the chromatographic analysis with the GC Solution program (Shimadzu, Japan).
Statistical analysis
The data are presented as the means ± SEM and were analysed using one-way ANOVA with GraphPad Prism 8.0 followed by Dunnett's test. We considered the data significant when p < 0.05.
Effect of HPW on the physiological manifestations of mice
Body weight changes, the diarrhoea index, and food intake are important phenotypic indicators of intestinal mucosal inflammation. In the present study, no significant weight loss, decline in food intake, diarrhoea, or death was observed in the control group (water + saline). Compared with the control group, the modelling group (water + 5-FU) experienced significant weight loss from days 2 and 3, accompanied by diarrhoea (unformed or even watery stools) and reduced food intake (Fig. 2). However, after HPW administration, the conditions improved. From the fourth day onwards, weight loss ( Fig. 2A) and diarrhoea scores (Fig. 2B) were relieved in the HPW + 5-FU group, and food intake gradually resumed (Fig. 2C). Therefore, we hypothesized that HPW protects against IM. We also evaluated the physiological status of mice that were intragastrically administered HPW alone. As shown in Fig. 2, there was no significant difference between the HPW + saline group (the light blue curve) and the control group, indicating that supplementation with HPW alone did not have adverse effects. In this part of the experiment, we used mesalazine as a positive control, and the physical status of the mice in the mesalazine + 5-FU group improved, as anticipated. Figure 3A shows that the cross sections of the jejunum in the fluorouracil-treated group (water + 5-FU) exhibited severely damaged pathological structures, showing significantly decreased villus height, decreased crypt depth and morphological dysplasia. Vacuolar oedema and inflammatory cell infiltration were observed in the submucosa and muscularis. HPW and mesalazine treatment alleviated the histopathological damage to different degrees. The intestinal villus height, crypt depth, and villus crypt ratios were markedly enhanced in the HPW + 5-FU group (Fig. 3B-D), demonstrating a protective effect against chemotherapeutic drug-induced mucosal injury. Compared with the response in the control group, intragastric administration of HPW alone had no effect on the small intestinal micromorphology.
Analysis of goblet cell counts and sIgA secretion
Compared with the effect on the water + saline group, intraperitoneal injection of 5-FU markedly lowered goblet cell counts on each small intestinal villus (Fig. 3E). After with the administration of HPW and mesalazine, the cupped cell counts returned, and alignment returned to normal with statistically significant differences. There was no statistically significant difference between the control group and the HPW alone group (Fig. 3F). The same trend was also observed in the levels of intestinal secreted IgA, as detected by ELISA; that is, fluorouracil reduced the secretion of sIgA, and mesalazine and HPW significantly reversed these effects (Fig. 4C). The diarrhoea scores of the mice. C Food intake of mice in each group. The data are presented as the means ± SEM and were analysed using one-way ANOVA followed by Dunnett's test (n = 8). "*" represents the comparison with the model group (water + 5-FU), and "#" represents the comparison with the control group (water + saline). One tag means p < 0.05, two tokens represent p < 0.01, and three indicates p < 0. The data are presented as the means ± SEM and were analysed using one-way ANOVA followed by Dunnett's test (n = 8). "*" represents the comparison with the model group (water + 5-FU), and "#" represents the comparison with the control group (water + saline). One tag means p < 0.05, two tokens represent p < 0.01, and three is p < 0.001 The data are presented as the means ± SEM and were analysed using one-way ANOVA followed by Dunnett's test (n = 8). The "*" represents the comparison with the model group (water + 5-FU), and "#" represents the comparison with the control group (water + saline). One tag means p < 0.05, two tokens represent p < 0.01, and three are p < 0.001
Effects of HPW on the inflammation-associated proteins NF-κB, COX-2 and TNF-α and the inflammatory cytokines IL-1β and IL-10
The effects of HPW on the intestinal expression levels of NF-κB, COX-2 and TNF-α are shown in Fig. 5. Compared with those in the control group, the expression levels of these three key targets in the water + 5-FU group were significantly enhanced. This finding suggests activation of inflammatory pathways at the molecular level. HPW reversed these negative effects of 5-FU (Fig. 5A-D). Activation of NF-κB promotes the secretion of inflammatory cytokines. Figure 5E, F shows that HPW could not only considerably decrease the levels of IL-1β but also increase IL-10 expression in IM mice.

Figure 6A shows the changes in total SCFAs: the levels of faecal SCFAs in mice decreased after 5-FU treatment and increased after HPW supplementation. The same trend occurred in the acetic acid levels and was significant (Fig. 6B). Consistent with the previous two indices, the addition of 5-FU reduced the levels of propionic acid and butyric acid (Fig. 6C, D). However, the level of propionic acid (Fig. 6C) did not change after HPW gavage, while the level of butyric acid (Fig. 6D) also increased. There was also no significant variation in mice administered HPW alone compared to control mice.
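The group comparisons reported throughout these results rely on one-way ANOVA followed by Dunnett's test against a single reference group. A minimal sketch of that workflow is shown below, using simulated data in place of the study's measurements (group names follow the paper's design; SciPy ≥ 1.11 is assumed for `stats.dunnett`):

```python
import numpy as np
from scipy import stats

# Simulated body-weight data (n = 8 per group), standing in for the study's
# measurements; these values are illustrative only.
rng = np.random.default_rng(42)
control = rng.normal(100, 5, 8)   # water + saline
model = rng.normal(85, 5, 8)      # water + 5-FU
hpw = rng.normal(95, 5, 8)        # HPW + 5-FU

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(control, model, hpw)

# Dunnett's test: each treatment compared against the model group
# (water + 5-FU), mirroring the "*" comparisons in the figure legends
res = stats.dunnett(control, hpw, control=model)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print("Dunnett p-values (control, HPW vs. model):", res.pvalue)
```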
Discussion
It has been extensively demonstrated that the biological activity of polysaccharides is related to their chemical characteristics, monosaccharide composition and the binding structure of glycosidic bonds [20]. Oral natural nonstarch polysaccharides can be rapidly degraded into low-molecular-weight polysaccharide fragments in the gastrointestinal tract; these fragments are stably retained in the fluid of the stomach and small intestine, where they participate in digestion and absorption, and are absorbed into the blood to enter the systemic circulation [21]. Our data show that HPW contains 20.77% glucose, which is similar to many polysaccharides that exhibit intestinal mucosal protection. Glucose can be absorbed into the blood through internalization by intestinal epithelial cells, exerting anti-inflammatory and immunoregulatory effects [22].
The extent of intestinal injury induced by 5-FU was reported to be dose dependent, with moderate weight loss and diarrhoea in mice that were intraperitoneally injected with 50 and 100 mg/kg 5-FU. At this dose range, mouse mortality was low, and activity was stable [23]. In our experiment, typical pathogenic changes of IM in mice were observed after intraperitoneal injection of 5-FU (50 mg/kg) for three consecutive days (Day 1-Day 3), which was consistent with previous studies [17]. Clinically, cancer patients lose weight and appetite, which is linked to chemotherapy-associated nausea and vomiting [24,25]. Based on this mouse model of IM, our experimental results showed that HPW could improve 5-FU-induced diarrhoea and gradually restore body weight and food intake without adverse effects.
In the present study, intraperitoneal administration of 5-FU caused significant structural damage to the small intestine. Villus atrophy inevitably affects intestinal absorption, which may be partly responsible for weight loss, as shown in Fig. 2A. HPW treatment significantly increased villus height and crypt depth in the jejunum and restored the morphology of villi and crypts, which may indicate that HPW promoted crypt cell regeneration and increased crypt cell migration to villi. Reconstruction of intestinal micromorphology promotes the recovery of absorption, which may lead to increased food intake, as shown in Fig. 2C. Other studies have indicated that the administration of Sijunzitang polysaccharides significantly altered the appearance and histopathological results during delayed healing of gastrointestinal ulcers, which is a process associated with polysaccharide-mediated promotion of crypt epithelial cell migration [26]. In addition, the water-soluble polysaccharides of rhubarb also protect intestinal mucosal cells from apoptosis by mediating antioxidant effects and maintaining the structure of villi and crypts [27]. Further investigations are therefore required to elucidate the protective mechanism by which HPW improves intestinal inflammation and mucosal damage.
Goblet cells, which are markers of the intestinal mucosal epithelium and secretory components of the intestinal mucosal barrier, are observed by PAS staining [28]. A reduced number of these cells represents damage to the mucosal layer of the small intestine, exposing the epithelial surface of the intestinal lumen to bacterial translocation [29]. Goblet cells and other small intestinal mucosal epithelial cells produce mucin, which covers the intestinal mucosal layer, is similar to the mucus gel that protects the gastrointestinal tract and is a vital component of the mucosal barrier [30]. sIgA, a major component of mucin, is the principal protective molecule of specific (acquired) immunity that is secreted to mucosal surfaces; it optimizes microbial communities, prevents them from adhering to mucosal surfaces, reduces toxin expression by intestinal pathogens and effectively prevents bacterial translocation [31]. Our data suggested that HPW could significantly upregulate the number of goblet cells and sIgA levels, suggesting that Wuguchong polysaccharide could alleviate small intestinal mucosal inflammation by enhancing mucosal barrier function. We found histological evidence that HPW alleviates 5-FU-induced IM by improving impaired intestinal barrier function and then provided biochemical evidence to support these results. DAO is an enzyme in intestinal epithelial cells that suppresses cell proliferation by reducing polyamine concentrations, whereas D-lac is a bacterial metabolite produced by the intestinal flora [32]. Basal levels of both factors are generally low in normal mammalian systemic circulation and are usually observed only in the gut. During intestinal infection and inflammation, intestinal wall permeability increases, the translocation of numerous microorganisms from the intestine to the circulation increases, and intraluminal DAO and D-lac easily enter peripheral blood through the intestinal mucosa [33]. Moreover, the intestine is the largest contact surface between the human body and the external environment and is the central organ associated with the stress response under stressful conditions [33]. Enterotoxic chemotherapy drugs can attack the gastrointestinal tract with excessive free radicals, leading to impaired metabolism in intestinal epithelial cells, damaged cell function and an inflammatory response [34]. Therefore, we also examined the oxidative stress-related indicator T-AOC.

[Fig. 5 legend (in part): Analysis of E IL-1β, F IL-10 in tissue homogenate; statistics as in Fig. 2. Fig. 6 legend: Effect of HPW on SCFAs. Analysis of A total SCFAs, B acetic acid, C propionic acid, and D butyric acid; statistics as in Fig. 2.]
The results of our current study were consistent with previous findings: the serum levels of DAO and D-lac in the 5-FU-induced mucositis mouse model were higher than those in the control group, indicating that the intestinal permeability of the mice was increased. The levels of T-AOC were also higher than those in the control group, suggesting an oxidative stress reaction. Moreover, HPW supplementation alleviated all three indicators, suggesting that HPW may alleviate IM by re-establishing intestinal barrier function and reducing the oxidative stress response.
Previously, we discussed the mechanism by which HPW alleviates IM from the aspects of the intestinal mucosal barrier and permeability. However, intestinal homeostasis is a dynamic process between the internal and external environments, and the apical junctional complex plays a crucial role in maintaining intestinal homeostasis, as do intestinal epithelial cells. Tight junction proteins, including the representative transmembrane proteins Claudin and Occludin, bind epithelial cells together by means of scaffold proteins and actin, forming the mechanical barrier of the intestinal tract [35]. Claudins prevent the unlimited flow of water and solutes, as well as the invasion of luminal antigens [36]. The interaction of claudin-1 with integrin in focal adhesions is involved in regulating transport between cells and the extracellular matrix. Claudin-1 can regulate normal cell homeostasis under physiological conditions and promote the adhesion of migrating cells under pathological conditions [37,38]. Occludin, the first tight junction protein discovered, is thought to regulate paracellular permeability by sealing adjacent cells [39]. In vitro studies reported that the restoration of high Occludin expression improved the molecular barrier function of pig intestinal epithelial cells (IPEC-J2 cells) [40]. A previous investigation confirmed that chemotherapy-induced intestinal barrier damage in IM mice was associated with decreased expression of Occludin [41]. Our present study revealed decreased expression of Claudin-1 and Occludin in the intestines of mice treated with 5-FU, while the expression of these two tight junction proteins was restored in mice that were administered HPW. These results strengthen the relevant findings on the role of tight junction proteins in intestinal barrier function and confirm that HPW can promote the recovery of the intestinal mechanical barrier and ameliorate IM by regulating intestinal tight junction proteins. In addition, we investigated the expression of PCNA, which is involved in DNA replication and DNA double-strand repair. PCNA is considered a signature of cell cycle dynamics and proliferative activity [42]. Continuous proliferation of intestinal epithelial cells and subsequent enhancement of tissue recovery can attenuate intestinal inflammation [43]. Therefore, PCNA is a useful marker for evaluating the intestinal epithelial barrier. In our current study, PCNA expression was reduced in the model group, and HPW administration upregulated this indicator. This further confirmed that HPW could promote the recovery of the intestinal barrier.
A better understanding of the molecular mechanisms that lead to IM could provide therapeutic methods for alleviating these drug side effects and boosting absorption in the small intestine. In 2004, Sonis et al. proposed an overlapping five-step model to summarize the biological phases of mucositis: initiation, primary damage response, signal amplification, ulcer formation, and healing [44]. The formation of reactive oxygen species, inflammation, and apoptosis are the most important molecular events in the early stages of injury. Among these sequential molecular events, NF-κB has been considered one of the most important transcription factors associated with tumour toxicity and therapeutic resistance [45]. Activation of NF-κB can upregulate more than 200 different genes, and many of these genes may contribute to mucosal toxicity [46]. TNF-α and COX-2, both of which are important target genes of NF-κB, are involved in the immune response to stress in the inflammatory cascade [47]. Thus, we preliminarily measured the expression of these three key molecules at the protein level. The data suggest that HPW can ameliorate the upregulated expression caused by 5-FU administration, which may be one of the molecular mechanisms by which HPW improves inflammation in the small intestinal mucosa. In addition, other studies have demonstrated that TNF-α may initiate the infiltration of inflammatory cells into the intestine by decreasing the expression of Occludin, leading to structural changes in tight junctions [48]. COX-2 destroys the subepithelial collagen matrix and epithelial basement membrane by activating matrix metalloproteinases, which further damages the small intestinal mucosal barrier [49]. Activation of NF-κB promotes the secretion of inflammatory cytokines. IL-1β is an essential pro-inflammatory cytokine broadly involved in inflammatory processes. In contrast, the anti-inflammatory cytokine IL-10 exerts its effects through a specific pathway in inflammation. Our experimental results showed that HPW could not only considerably decrease the levels of IL-1β but also increase IL-10 expression in IM mice.
For the past few years, an increasing number of studies have confirmed that the intestinal microenvironment maintains a dynamic balance between organisms and the microbiota. The intestinal microbiota contains multiple carbohydrate-active enzymes (CAZymes), which participate in the metabolism of dietary polysaccharides and produce SCFAs that are beneficial to host health [50,51]. Research shows that SCFAs (i.e., acetate, propionate, and butyrate) can be absorbed by intestinal epithelial cells and protect the intestinal barrier [52]. In addition, SCFAs can also improve 5-FU-induced small intestinal mucosal inflammation by improving the intestinal mucosal barrier and reducing the level of inflammation [53]. Our results are consistent with those of Flavia et al. [54], who found that 5-FU reduced SCFA levels in mouse faeces. After HPW administration, the levels of total SCFAs, acetic acid and butyric acid increased, indicating that microbial activity gradually recovered. Some studies have shown that Bacteroides can participate in the metabolism of polysaccharides to form succinic acid, which can be used as a single carbon source to produce acetic acid and other SCFAs [55]. In our previous study, it was found that supplementation with polysaccharides from Wuguchong could modify the obesity status of mice fed a high-fat and high-sugar diet by increasing the abundance of Bacteroidetes and decreasing that of Firmicutes [13]. Therefore, we hypothesized that the beneficial effects of HPW on Bacteroidetes in mice might be one of the reasons for the increase in SCFAs in the current study. Some intestinal probiotics can degrade the xylose and glucose in polysaccharides to form propionic acid [56]. However, there was no significant change in propionic acid levels in our study. We hypothesize that this is related to the fact that HPW comes from insects and has a low xylose content (3.21%, Fig. 1C). Moreover, butyrate acts as a health-promoting SCFA, providing energy to epithelial cells, enhancing mucosal barrier function and reducing inflammation [57]. In the present study, HPW improved butyrate levels, but the change was not statistically significant. This outcome is probably related to the dose and timing of our intervention, which is also the focus of our follow-up study. A previous study [58] also revealed that the production of butyric acid is related to the metabolism of galactose and glucuronic acid. In our study, the proportions of these two monosaccharides reached 21.55% and 5.90%, which are relatively high levels; this may explain why the butyric acid content increased after HPW treatment.
Conclusions
Our results suggest that HPW, a bioactive polysaccharide extracted from insects, has a palliative effect on IM caused by drug side effects. HPW can alleviate 5-FU-induced mechanical damage to the intestinal barrier in mice, improve the expression of tight junction proteins, and reduce the activation of inflammatory pathways. Furthermore, HPW can also increase the levels of SCFAs, which are metabolites of the intestinal flora, and regulate intestinal microecology. In future studies, we will further purify this natural product and examine its mechanism of action in vitro.
"year": 2022,
"sha1": "23d6f080e0f14b33d34785911934cdc683827e2d",
"oa_license": "CCBY",
"oa_url": "https://nutritionandmetabolism.biomedcentral.com/track/pdf/10.1186/s12986-022-00669-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3baf6ecfe5e03f99eafa9163868a47a1d4ccbf93",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Distribution and habitats of Melanoides tuberculata (Müller, 1774) and M. victoriae (Dohrn, 1865) (Mollusca: Prosobranchia: Thiaridae) in South Africa
An account is given of the geographical distribution and habitats of Melanoides tuberculata (Müller, 1774) and M. victoriae (Dohrn, 1865) as reflected by the samples on record in the database of the National Freshwater Snail Collection (NFSC) of South Africa. About 30 species of Melanoides occur in Africa, of which only M. tuberculata is widespread. Melanoides tuberculata is also indigenous from India and the south-east Asian mainland to northern Australia and was widespread in the present-day Sahara during the late Pleistocene-Holocene, but M. victoriae seems to be restricted to Southern Africa. Details of the habitats on record for each species, as well as the mean altitude and mean annual air temperature and rainfall for each locality, were processed to determine chi-square and effect-size values. An integrated decision-tree analysis indicated that temperature, altitude and type of substratum were the most important of the factors investigated that played a significant role in establishing the geographical distribution of these species in South Africa. In view of the fact that M. tuberculata can serve as intermediate host for a number of trematode species elsewhere in the world, it is recommended that the ability of the 2 local Melanoides species to act as intermediate hosts should be investigated. Because the majority of sites from which these species were recovered have not been revisited since, it is recommended that efforts should be made to update their geographical distribution and compare the results with the data in the database. The conservation status of these 2 species and the possible influence of global warming and climatic changes on their geographical distribution are briefly discussed.
Introduction
The genus Melanoides is evidently restricted to the Old World tropics (Pilsbry and Bequaert, 1927) and about 30 species occur in Africa, of which only M. tuberculata (Müller, 1774) is widespread (Brown, 1994). Melanoides tuberculata was described from the Coromandel coast of India in 1774 and its present-day distribution covers the Indo-Pacific region, Southern Asia, Arabia, northern Australia, the Near East and much of Africa (Appleton, 2002); it was also introduced into the Caribbean area (Brown, 1994). With regard to South Africa, only 2 species, namely M. tuberculata and M. victoriae (Dohrn, 1865), have been reported, of which the former is the most widespread according to the records of the National Freshwater Snail Collection (NFSC). While M. tuberculata was also widespread in the present-day Sahara (Van Damme, 1984), M. victoriae seems to be restricted to Southern Africa (Brown, 1994; Appleton, 2002).
Melanoides tuberculata has proved to be a compatible intermediate host for several trematode species elsewhere in the world, and shedding of cercariae of a number of trematode families has also been recorded for this snail species elsewhere in Africa (Frandsen and Christensen, 1984). It has become invasive after its introduction into new territories such as Martinique Island (Pointier, 2001) and Brazil (Rocha-Miranda and Martins-Silva, 2006), but it also proved to be an efficient and sustainable bio-control agent of Biomphalaria glabrata (Say, 1818), the intermediate host snail of the intestinal schistosome parasite in these areas.
This paper focuses on the geographical distribution and habitat preferences of M. tuberculata and M. victoriae as reflected by the data in the database of the NFSC. In view of the fact that the records in the NFSC span a period of several decades, the possible influence of global warming and climatic changes on the geographical distribution of these species in South Africa, and their conservation status, is briefly discussed.
Methods
Data from 1956 to the present (2009) on the geographical distribution and habitats of M. tuberculata and M. victoriae as recorded at the time of the survey were extracted from the NFSC database. Only those samples that could be located on a 1:250 000 topo-cadastral map series of South Africa were included in the analyses. The majority of these samples were collected during surveys conducted by government and local health authority staff, as well as staff of the former Snail Research Unit at the Potchefstroom University (now the North-West University). The number of loci (1/16-degree squares) in which the collection sites were located was distributed in intervals of mean annual air temperature and rainfall, as well as intervals of mean altitude, to illustrate the frequency of occurrence of these species in water-bodies falling within specific intervals. Rainfall, temperature and altitude data were obtained in 2001 from the Computing Centre for Water Research (CCWR), University of KwaZulu-Natal (since disbanded). All mollusc species in the database were ranked in order of their association with low to high climatic temperatures according to a temperature index calculated from their frequencies of occurrence within selected temperature intervals. The method of calculation is dealt with in detail in our earlier publications (De Kock and Wolmarans, 2005a; b). To determine the significance of differences between frequencies of occurrence in, on, or at the range of options for each factor investigated, chi-square values (Statistica, Release 7, Nonparametrics, 2X2 Tables, McNemar, Fischer exact) were calculated. An effect size was also calculated (Cohen, 1977) for each parameter investigated to evaluate the importance of its contribution towards establishing the geographical distribution of this species as reflected by the samples in the NFSC database. The method of calculation is explained with reference to the 14 different water-body types represented in the database. The first step is to determine the total number of times each water-body type, for instance rivers (7 507), was reported for all the different mollusc species in the database, and then to sum the total number of records of all the water-bodies reported for all the species in the database (28 956). To determine the p value for each of the different water-body types, for instance for rivers as such, the frequency of occurrence of all species in rivers (7 507) is divided by the total number of times (28 956) all the water-bodies were recorded in the database. The total number of times a specific mollusc species was reported from all 14 water-bodies together is then summed (this figure for M. tuberculata, for instance, was 228). The number of times a specific species was reported from a specific water-body type is then designated as 'A'. To determine the expected value, designated as 'B', the total number of records of the species across all water-bodies (228 for M. tuberculata) is multiplied by the p value calculated for the water-body in question, for instance 0.259 for rivers. This is done for all the different water-body types from which this specific species was reported. Chi-square values (χ²) for each type of water-body are then calculated as χ² = (A − B)²/B. The chi-square values calculated for all the different water-body types are then summed, and the effect size (w) for water-bodies as such is calculated as the square root of Σχ² divided by ΣA, i.e. w = √(Σχ²/ΣA). Values for this index in the order of 0.1 and 0.3 indicate small and moderate effects respectively, while values of 0.5 and higher point to practically significant and large effects (Cohen, 1977). More details of the significance and interpretation of specific values calculated for this statistic in a given situation are discussed in our earlier publications (De Kock and Wolmarans, 2005a; b). A decision tree, which is a multivariate analysis (Breiman et al., 1984), was also constructed from the data; it enables the selection and ranking of those parameters that played the most important role in establishing the documented geographical distribution of these species, based on the data in the database. The frequencies of occurrence within the different options for a specific parameter which do not differ significantly from one another are grouped together in the decision-tree analysis. If, for instance, the frequency of occurrence in rivers does not differ significantly from that in streams, these 2 options for water-bodies are grouped together in the decision-tree analysis. In addition, the total number of times any other mollusc species in the database was recorded under a specific condition is also displayed in the results of the decision-tree analysis. This analysis was done with the SAS Enterprise Miner for Windows NT Release 4.0, April 19, 2000 programme and Decision Tree Modelling Course Notes (Potts, 1999).
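To make the worked example concrete, a minimal Python sketch of this calculation is given below. Only the figures quoted above (7 507 river records, 28 956 total records, 228 M. tuberculata records, of which 79 from rivers) come from the text; the other counts are hypothetical, and the χ² form (A − B)²/B is the standard one implied by the use of Cohen's w:

```python
import math

# Quoted from the text: total records of all species across all water-bodies,
# and river records for all species; "stream" and "dam" counts are hypothetical.
TOTAL_ALL_SPECIES = 28_956
RECORDS_PER_WATERBODY = {"river": 7_507, "stream": 12_000, "dam": 9_449}

# A values: records of M. tuberculata per water-body. Rivers (79) is quoted;
# the other values are hypothetical and sum to the quoted species total of 228.
A = {"river": 79, "stream": 90, "dam": 59}

chi_sq_total = 0.0
for wb, a in A.items():
    p = RECORDS_PER_WATERBODY[wb] / TOTAL_ALL_SPECIES  # e.g. 0.259 for rivers
    b = sum(A.values()) * p                            # expected count B
    chi_sq_total += (a - b) ** 2 / b                   # chi-square contribution

# Effect size w = sqrt(sum(chi-square) / sum(A)), i.e. Cohen's w
w = math.sqrt(chi_sq_total / sum(A))
print(f"w = {w:.3f}")  # ~0.1 small, ~0.3 moderate, >= 0.5 large effect
```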
Results
The collection sites of the 305 samples of M. tuberculata fell within 85, and the 53 sites of M. victoriae within 21, different loci (Fig. 1). The former species was recovered from 12 of the 14 water-body types represented in the database, while the latter species was found in only 6 (Table 1). Although the majority of samples of both species were recovered from rivers, the highest percentage occurrence in the total number of collections in a specific water-body type was realised in channels for M. tuberculata (0.9%) and in concrete dams for M. victoriae (5.9%) (Table 1). An effect-size value of larger than 0.5 was calculated for both species for water-bodies as such (Table 1). The majority of samples of both species were recovered from water-bodies described as perennial, with clear, fresh water (Table 2). While the largest number of samples of M. tuberculata came from habitats with standing water, M. victoriae was more frequently collected in slow-running water, and a relatively large effect size was calculated for this parameter (Table 2).
The majority of samples of M. tuberculata were recovered from water-bodies of which the substratum was described as either muddy or sandy, while equal numbers of samples of M. victoriae were collected on stony and sandy substrata (Table 3). A large effect size (w = 0.5) was calculated for substratum types for both species (Table 3).
With regard to the frequency of occurrence within the different temperature intervals, the highest percentage of samples of M. victoriae was recorded from the 16°C to 20°C interval, while habitats falling within the 21°C to 25°C interval yielded the highest number of samples of M. tuberculata (Table 4). There was, however, no significant difference at the p < 0.05 level between the frequency of occurrence of M. victoriae in habitats falling within the 16°C to 20°C and 21°C to 25°C intervals. The temperature indexes calculated for all the species in the database and the statistical analysis of the data are presented in Table 5. More than 80% of the samples of both species were collected in sites which fell within the 2 rainfall intervals ranging from 301 to 900 mm (Table 4). While the largest number of samples of M. tuberculata was collected in sites which fell within the 0 to 500 m altitude interval, the majority of samples of M. victoriae came from sites which fell within the 501 to 1 000 m interval (Table 4). There was, however, no significant difference at the p < 0.05 level between the frequency of occurrence of M. victoriae in habitats falling within the 0 to 500 m and the 501 to 1 000 m intervals.
The results of the decision-tree analyses for M. tuberculata and M. victoriae are depicted in Figs. 2 and 3, respectively.
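For readers wishing to reproduce a comparable analysis with open tools, a minimal decision-tree sketch is shown below. The original analysis used SAS Enterprise Miner; scikit-learn is a stand-in, and all data values and codings here are hypothetical illustrations, not the NFSC records:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical site records: environmental predictors and presence (1) of
# M. victoriae vs. records of all other species (0); values illustrative only.
data = pd.DataFrame({
    "temperature": [18, 22, 27, 19, 23, 21],   # mean annual air temperature, degC
    "altitude":    [600, 300, 150, 900, 450, 700],  # m a.m.s.l.
    "substratum":  [0, 1, 1, 0, 2, 0],          # 0 = stony, 1 = sandy, 2 = muddy
    "present":     [1, 0, 0, 1, 0, 1],
})

# Shallow tree, mirroring the ranking of a few dominant splitting variables
tree = DecisionTreeClassifier(max_depth=2).fit(
    data[["temperature", "altitude", "substratum"]], data["present"]
)
print(export_text(tree, feature_names=["temperature", "altitude", "substratum"]))
```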
Discussion
The 85 loci from which the 305 samples of M. tuberculata were recovered display a continuous distribution all along the eastern border of South Africa, from Limpopo Province down to the southern border of KwaZulu-Natal Province (Fig. 1). It is discontinuously spread through the north-western part of Limpopo, and a focus of 6 loci occurs on the border of North West and Gauteng. The occurrence of this species in 2 isolated loci in the Northern Cape Province, far outside its endemic range of distribution, seems rather unusual. However, samples of M. tuberculata, closely associated with Biomphalaria pfeifferi (snail intermediate host of Schistosoma mansoni), were recovered on more than one occasion from the Kuruman River and its eye (source) situated in these loci in the Kuruman district. The presence of freshwater snails in dolomitic springs in South Africa far outside their endemic range of distribution is discussed in detail in De Kock and Wolmarans (2004a). These springs usually have a stabilising effect on both water temperature and water supply, factors which play an important role in making water-bodies suitable for colonisation by freshwater snails outside their endemic range of distribution.
Owing to the fact that the geographical distribution of both M. tuberculata and M. victoriae displays a westerly arm extending from the eastern part of South Africa, they are classified as broadly tropical by Brown (1978), compared to narrowly tropical species having no westerly arm. However, from the effect sizes calculated for the temperature indexes (Table 5) it is evident that M. tuberculata did not differ significantly in respect of its association with warm climatic temperatures (d < 0.5) from 10 of the 12 species classified as narrowly tropical by Brown (1978).
According to Brown (1994), the southern limit of the distribution of M. tuberculata in the eastern part of South Africa lies near Port Elizabeth. However, despite the fact that we have many records of other freshwater mollusc species in the database of which the southern limits of distribution extend even further than Port Elizabeth (De Kock et al., 1989; De Kock et al., 2001; De Kock et al., 2002a; De Kock and Wolmarans, 2004b; De Kock and Wolmarans, 2005c; De Kock and Wolmarans, 2007), we have none for this species extending further southwards than the southern border of KwaZulu-Natal. Twelve of the 21 loci on record for M. victoriae are shared with M. tuberculata; however, it is not as widespread as the latter species (Fig. 1). Appleton (2002) mentions that M. victoriae is not known from KwaZulu-Natal; however, we have 4 samples on record from this Province, collected during 1965 and 1966, which are now reported for the first time.
The fact that M. tuberculata was recovered from 12 of the 14 water-body types represented in the database (Table 1) confirms the report by Brown (1994) that it can utilise various permanent water-bodies including rivers, shallow seepages and man-made habitats. In contrast to this, M. victoriae was reported from only 6 different water-body types and obviously seemed to prefer perennial rivers (Tables 1 and 2), which is the only water-body type mentioned for this particular species for the Mpumalanga Lowveld by Brown (1994). The 5 samples on record for M. tuberculata from habitats with brackish water also support the report by Brown (1994) that this species is tolerant of moderate brackishness in coastal localities. According to this author, M. tuberculata is not found in temporary waters; however, we have 14 samples on record in the database reported from seasonal habitats for this species, and also 1 sample of M. victoriae from a temporary habitat (Table 2). Although more samples of M. tuberculata were reported from water-bodies with standing water than with slow-running water (Table 2), no significant differences could be indicated between these alternatives. In contrast, more samples of M. victoriae were recovered from water-bodies with slow-running water than with standing water (Table 2), and in this instance a significant difference (p < 0.05) could be indicated. From the effect-size values calculated for water velocity it is evident, however, that this factor played a much more important role in determining the presence, or not, of M. victoriae in a specific water-body. The majority of samples of both species were reported from water-bodies with water described as clear (Table 2), but no significant differences were found between their occurrence in habitats with clear or muddy water, and the effect sizes calculated for this parameter also indicated that turbidity did not play an important role in determining the suitability of a given water-body.
Nearly 78% of the samples of M. tuberculata were recovered from loci which fell within the temperature interval ranging from 21°C to 25°C, while the interval ranging from 16°C to 20°C yielded the largest number of samples of M. victoriae (Table 4). These results are supported by the temperature indexes calculated for these 2 species, which indicated that the former species not only seemed more closely associated with warmer climatic temperatures, but the effect sizes calculated for these indexes also showed that it differed significantly (d > 0.5) from M. victoriae in this respect (Table 5). Although only 4 samples of M. tuberculata were recovered from sites which fell within the temperature interval ranging between 26°C and 30°C, this represented 10.8% of the total number of collections of all molluscs in the database from sites falling within this specific temperature interval (Table 4 and Fig. 2). This also points to a relatively close association with higher climatic temperatures.
From the effect-size values calculated for the various parameters investigated (Tables 1 to 4) it can be deduced that temperature, altitude, substratum and water-body type played an important role in establishing the geographical distribution of both species as reflected by the data in the database of the NFSC. This deduction is supported by the results of the decision-tree analyses (Figs. 2 and 3), which selected temperature, altitude and substratum as the most important factors which had significantly influenced the geographical distribution of both species. From the decision-tree analyses it can further be seen that a substratum consisting mainly of decomposing material played a significant role in the habitats from which samples of M. tuberculata were recovered (Fig. 2). With regard to their habitat preferences it can be concluded that both species seemed to prefer perennial rivers in areas which fell within the temperature intervals ranging from 16°C to 25°C and altitude intervals ranging from 500 to 1 500 m a.m.s.l. However, the results in Table 1 suggest that M. victoriae is considerably more stenoecious than M. tuberculata. Current velocity in a water-body and mean yearly rainfall also seemed to play a significant role in the presence or absence of the former species in a specific area (Tables 2 and 4).
As mentioned earlier, M. tuberculata has become invasive after introduction into new areas such as Martinique Island (Pointier, 2001) and Brazil (Rocha-Miranda and Martins-Silva, 2006), but fortunately in both these cases it proved to be an efficient and sustainable control agent of intermediate host snails responsible for the transmission of schistosomiasis to humans. Apparently this is not the case in South Africa, because we have a number of samples on record in the database of the NFSC, amongst others from the Kruger National Park, where persistent populations of both the local schistosome intermediate host snail species and populations of M. tuberculata have co-existed in the same water-body through several decades.
Although numerous cases of M. tuberculata becoming a nuisance species in tropical fish aquaria have been reported in the literature, we are not aware of any recorded case of this species causing problems in natural water-bodies in South Africa. According to Appleton (2002), however, it has become plentiful in rice paddies in KwaZulu-Natal, and we were recently approached for advice on a case where M. tuberculata had proliferated to such an extent after invading the heat exchanger of an electric power plant that it caused complete clogging of the filters, resulting in malfunctioning of the entire system.
Countrywide surveys for freshwater molluscs were terminated during the early 1980s and, on account of the fact that many of the positive sites were not revisited, comments on the conservation status of our mollusc fauna should be made with circumspection. However, Melanoides localities reported from the Kruger National Park by Oberholzer and Van Eeden (1967) have since been revisited in surveys conducted by ourselves in 1995 (De Kock and Wolmarans, 1998), 2001 (De Kock et al., 2002b) and 2006 (Wolmarans and De Kock, 2006), and a marked decline in positive localities, as well as in population size, was evident for both species. Whereas Oberholzer and Van Eeden (1967) reported 34 and 20 positive sites for M. tuberculata and M. victoriae respectively, only 4 sites and 1 site for these species, respectively, were found positive during our extensive survey in 2006. The only prosobranch snail that was encountered in large numbers in some of the sites during our 2006 survey was the exotic invader species Tarebia granifera, which was reported for the first time in Africa by Appleton and Nadasan (2002). According to Pointier and McCullough (1989), this species has demonstrated its capacity to invade and rapidly colonize a wide range of water-body types on numerous islands and countries in the Neotropical area and has succeeded in reducing and even eliminating populations of other mollusc species. Whether the invasion of water-bodies in the Kruger National Park by this exotic species could have a bearing on the observed decline in positive sites of both Melanoides spp. needs further investigation.
From the literature it is clear that M. tuberculata can serve as intermediate host for several trematode species which can be harmful to a number of vertebrate species, including man. These include, amongst others, Clonorchis sinensis, the Oriental liver fluke (Lun et al., 2005), and Philophthalmus gralli, a trematode infecting the eyes of bird species but also reported infecting humans (Díaz et al., 2002). Melanoides tuberculata was also proved to be a compatible intermediate host for Gastrodiscus aegyptiacus, the fluke responsible for gastrodiscosis in equine populations in Zimbabwe (Mukaratirwa et al., 2004), and Calicophoron microbothrium, another trematode fluke of veterinary importance in that country (Chingwena et al., 2002). Furthermore, specimens of M. tuberculata infected with larval stages of economically important intestinal flukes of the family Heterophyidae were reported from the Rio de Janeiro metropolitan area, Brazil (Bogéa et al., 2005). Melanoides tuberculata was also reported from Australia as the intermediate host of the trematode Transversotrema licinum, an ectoparasite of several fish species (Manter, 1970), and evidence was also put forward by Frandsen and Christensen (1984) that M. tuberculata could be an important intermediate host for several fluke species. Shedding of non-schistosome cercariae was also reported for M. tuberculata from the Msambweni area, Coast Province, Kenya (Kariuki et al., 2004).
Because M. tuberculata is relatively easy to cultivate and maintain in the laboratory, it has been utilised locally as a bio-indicator to assess the biological effects of diffuse sources of pollutants in a wetland system (Wepener et al., 2005) and in comparative laboratory studies on the uptake of heavy metals and their effects on cellular energy allocation (Moolman et al., 2007). Studies on the life cycle and growth of M. tuberculata were also conducted in a natural habitat in Mpumalanga (Appleton, 1974). To our knowledge, however, the capacity of representatives of the 2 local Melanoides species to serve as intermediate hosts for parasitic flukes has not yet been investigated. However, after eggs resembling those of Paragonimus kellicotti, a lung fluke infecting cats and dogs, were reported from humans and cats in KwaZulu-Natal (Proctor and Gregory, 1974), circumstantial evidence implicated M. tuberculata as the intermediate host because it was the only prosobranch snail that could be found in the area at that stage.
In view of the important role played by M. tuberculata in the epidemiology of a number of trematode species of medical and veterinary importance elsewhere in the world, it is recommended that the ability of both Melanoides species occurring in South Africa to act as intermediate hosts for economically important trematode flukes should be investigated. At the same time, efforts should be made to update the geographical distribution of both species and to compare the results with existing records in the database of the NFSC to evaluate their conservation status. The ability of M. tuberculata to aestivate was listed as poor by Brown (1994), and the fact that perennial rivers seemed to be the water-body of preference for both species could be a disadvantage for their long-term survival. Increased evaporation of surface water due to global warming could have a detrimental effect on the permanency of such water-bodies, and suitable habitats could become less available, which in turn could impact negatively on their geographical distribution and conservation status in this country. As mentioned earlier, M. victoriae seems to be considerably more stenoecious than M. tuberculata and therefore more prone to be affected by changes in environmental conditions. Taking into account the relatively limited geographical distribution reported for M. victoriae and the results of our recent surveys in the Kruger National Park, the conservation status of this species could justifiably be considered as vulnerable.
Figure 1: The geographical distribution of Melanoides tuberculata and M. victoriae in 1/16 square degree loci in South Africa as reflected by the records in the database of the National Freshwater Snail Collection
Table 1 Water-bodies from which Melanoides tuberculata and M. victoriae were recorded out of the 14 different types represented in the database of the National Freshwater Snail Collection
A Number of times any mollusc was collected in a specific water-body; B Number of times collected in a specific water-body; C % of the total number of collections (M. tuberculata 305; M. victoriae 53) on record for each species; D % occurrence of each species in the total number of collections in a specific water-body
Table 2 Water conditions in the habitats of Melanoides tuberculata and M. victoriae as described during surveys
Table 4 Frequency distribution of the collection sites of Melanoides tuberculata and M. victoriae in selected intervals of mean annual air temperature (°C), rainfall (mm) and mean altitude (m) in South Africa
D % occurrence of each species in the total number of collections within a specific interval; E Effect-size values calculated for each factor; * Number of collections on record for each species
Table 5 Frequency distribution in temperature intervals and temperature index of Melanoides tuberculata as compared to all mollusc species in the database of the National Freshwater Snail Collection
1 Index: Temperature index; 2 SD: Standard deviation; 3 CV: Coefficient of variance; 4 Narrowly tropical species (Brown, 1978)
Figure 3: Decision tree of the frequency of occurrence of Melanoides victoriae for each variable as compared to the frequency of occurrence of all the other species in the database of the National Freshwater Snail Collection. 0: percentages and frequencies of all other species; 1: percentages and frequencies of M. victoriae.
"year": 2009,
"sha1": "8daf5ad8dbc781470d44a9fbcf6689c39e9025a4",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/wsa/article/download/49197/35540",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8daf5ad8dbc781470d44a9fbcf6689c39e9025a4",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Heavy Metals Assimilation by Native and Non-Native Aquatic Macrophyte Species: A Case Study of a River in the Eastern Cape Province of South Africa
There is continuous deterioration of freshwater systems globally due to excessive anthropogenic inputs, which severely affect important socio-economic and ecological services. We investigated the water and sediment quality at 10 sites along the severely modified Swartkops River system in the Eastern Cape Province of South Africa and then quantified the phytoremediation potential of native and non-native macrophyte species over a period of 6 months. We hypothesized that the presence of semi-permanent and permanent native and non-native macrophyte mats would reduce water and sediment contamination through assimilation downriver. Our results were variable and, thus, inconsistent with our hypotheses; there were no clear trends in water and sediment quality improvement along the Swartkops River. Although variable, the free-floating non-native macrophyte Pontederia (=Eichhornia) crassipes recorded the highest assimilation potential of heavy metals in water (e.g., Fe and Cu) and sediments (e.g., Fe and Zn), followed by a submerged native macrophyte, Stuckenia pectinata, and three native emergent species, Typha capensis, Cyperus sexangularis, and Phragmites australis. Pollution indices clearly showed promising assimilation by native and non-native macrophyte species; however, the Swartkops River was heavily influenced by multiple non-point sources along the system, compromising the assimilation effect. Furthermore, we emphasise that excessive anthropogenic inputs compromise the system's ability to assimilate heavy metal inputs, leading to water quality deterioration.
Introduction
Aquatic ecosystems have been subjected to organic and inorganic pollution, which has worsened with poor wastewater management [1]. These impacts have resulted in a noticeable loss of aquatic biodiversity, deterioration of water quality and ecosystem integrity, and the loss of important socio-economic services [2]. Therefore, effective rehabilitation practices and conservation strategies are needed to minimize and control freshwater contamination.
Previous field and mesocosm trials have shown that reversing the impact of anthropogenic inputs in the environment is challenging, and that minimising these inputs and improving waste management will help curb environmental contamination [3]. Ecologists have tested different methods to reduce contamination in freshwater systems, including adsorption, soil washing, reverse osmosis, coagulation, and flocculation [4-6].
However, Hanif et al. [7] showed that these methods were costly, sometimes ineffective, and disruptive; for example, soil washing alters sediment microbial communities, making it difficult to re-use the treated soil [8]. Methods such as ion-exchange and artificial membranes generate end-waste material that requires special disposal, thus creating additional costs [9], whilst coagulation and flocculation can be ineffective in decolorizing laundry effluents [10].
It is clear that there is a need for innovative techniques with merits over traditional methods. Green technology, such as phytoremediation, which uses plants and associated microbes to assimilate and break down contaminants in natural environments, is one method that has been widely researched and applied [2,11-16]. Phytoremediation is the most innovative, cost-effective, and environmentally friendly technology available to assimilate organic and inorganic contaminants, even at low concentrations [12,17,18]. Studies have shown that phytoremediation has socio-economic and environmental merits over traditional physicochemical clean-up and can reduce water quality contamination by more than 50% in mesocosm settings [19-24].
The assimilation efficacy of macrophytes has been studied by several researchers [9,25-31]. These studies investigated the fate of toxic and non-toxic elements in the field and laboratory using native and non-native macrophytes, and each case study showed improved water chemistry, through reduced nutrient and heavy metal concentrations, after assimilation. To date, phytoremediation feasibility studies have focused on the treatment of heavy-metal contamination using macrophyte species such as Typha capensis (Rohrb.) N.E.Br. (Typhaceae) (bulrush), Phragmites australis (Cav.) Trin. ex Steud (Poaceae) (common reed), and Cyperus sexangularis (L.) (Cyperaceae) (swamp flat-sedge) [32-35]. These macrophyte species are widespread and abundant in freshwater systems and can tolerate different environmental constraints, making them significant candidates for phytoremediation [30]. Furthermore, these macrophytes provide basic ecosystem services that serve an important role in biogeochemical processes and the natural cycling of nutrients [36], and they supply the system with a continuous source of energy [37].
The selection of plant species for phytoremediation is usually based on their tolerance and ability to accumulate a wide range of contaminants [42]. Non-native macrophytes thrive extremely well in phosphate- and nitrate-enriched waters as compared to native macrophytes in South Africa [43]; however, such conditions promote high biomass of non-native macrophyte species, such as S. molesta, P. stratiotes, and P. crassipes, making them more effective accumulators but also more invasive, and thus likely to displace native aquatic biodiversity [42,44].
Secondly, although non-native macrophyte species have proven to be better assimilators of heavy metals [16,40,41], they are equally destructive: they modify invaded ecosystems by altering the hydrology and aquatic species composition, reduce ecosystem processes and production, and contribute to the loss of aquatic biodiversity [1,16,45,46]. Therefore, in this study, we field-tested the assimilation potential of both native and non-native macrophyte stands found along the Swartkops River in South Africa. We hypothesized that the presence of native and non-native macrophyte species would help reduce heavy metal contamination in water and sediments downstream of semi-permanent and permanent native and non-native stands.
Study Area
The study was conducted in the Swartkops River, which rises in the Winterhoek Mountains of the Swartkops catchment and flows into Algoa Bay and the Indian Ocean (Figure 1) [47]. Algoa Bay is an important section of coastline in South Africa, known for its marine biodiversity and serving as a habitat and nursery site for various marine animals, including Spheniscus demersus (African penguin), Mirounga leonina (southern elephant seal), and Sphyrna zygaena (smooth hammerhead shark) [48].
The 155 km long Swartkops River drains the 42 km² catchment area, where protected areas dominate the upper reaches of the catchment, the middle reaches are dominated by urban and formal settlements and agricultural lands, and the lower reaches are surrounded by industries and formal and informal settlements before the river flows into the ocean. These landscape activities contribute to the release of domestic effluents, industrial waste, untreated sewage, and other point and non-point source pollutants [49]. The natural vegetation dominating the lower catchment is Bushveld and Succulent Thicket, which has been severely altered by the introduction of alien invasive plant species, such as Eucalyptus spp. (gum trees) and Acacia spp. (Black Wattle and Port Jackson Willow) [49].
Ten study sites were selected along the Swartkops River and sampled for a period of six months, at monthly intervals, from April 2018 to September 2018. Sample collection took place upstream and downstream of semi-permanent and permanent non-native macrophyte mats of P. crassipes and S. molesta (Figure 1). Site 1 was situated among agricultural lands, upstream from Uitenhage town but downstream from protected areas. The site experienced minimal urban and industrial effluents except for some agricultural inputs (Figure 1). Site 2 was situated downstream from site 1, in the heart of the Uitenhage urban area and after the confluence of the Swartkops River and the KwaNobuhle tributary. Site 2 was less than 1 km upstream from P. crassipes mat 1 (hereafter site 3), whereas site 4 was located 0.6 km downstream from site 3 (Figure 1). Site 5 was 2.4 km upstream from P. crassipes mat 2 (hereafter site 6), and site 7 was about a kilometre downstream from site 6 (Figure 1). Site 8 was located ~1.6 km upstream from S. molesta mat 3 (hereafter site 9), and site 10 was located 0.6 km downstream from site 9 (Figure 1). At each site, water and sediment samples, together with dominant native (i.e., T. capensis, P. australis, C. sexangularis, and S. pectinata) and non-native (i.e., P. crassipes and S. molesta) macrophyte species, were collected and analysed for heavy metal accumulation (Table S1).
Water Chemistry
Integrated water sample (1000 mL, n = 1) was collected ~20 cm below the water surface at each site using a pre-rinsed clear polyethylene sample container for water chemistry analysis. Water samples were then stored on ice until they reached the laboratory, and, within 48 h after collection, water samples were sent to BEM-Labs, Cape Town, South Africa for water chemistry analysis, including Chemical Oxygen Demand (COD), Zinc (Zn), Iron (Fe), Cadmium (Cd), Arsenic (As), Chromium (Cr), Lead (Pb), Mercury (Hg), and Copper (Cu).
At the laboratory (BEM-Labs), the water samples were acidified to a pH of ±2 and digested to isolate all the metal ions in solution. Once cooled, the samples were filtered through a 0.45 µm syringe filter to remove any particulates. The resulting samples were then analysed using an Agilent ICP-OES 720 Axial instrument for total heavy metals. Since these were integrated water samples, it is possible that some properties, such as pH or organic carbon, varied between the samples and may have influenced the speciation (and bioavailability) of pollutants.
Sediment Chemistry
Using a gardening trowel, integrated soil sediment samples were collected at five areas per site at approximately 10 cm depth. Sediment samples were collected into plastic zip-lock bags and then stored on ice. Similar to the water chemistry samples, sediment samples were, within 48 h after collection, sent to BEM-Labs for sediment chemistry and heavy metal analysis, including Zn, Fe, Cd, As, Cr, Pb, Hg, and Cu.
At BEM-Labs laboratory, a portion of the sediment sample was weighed into an Erlenmeyer flask. We added 20 mL nitric acid and 10 mL hydrogen peroxide to the flask, and the flask was then heated to allow the sample to digest. After digestion, the sample was transferred to a 100 mL volumetric flask, made up to volume, and then filtered. The resulting sample was then analysed on the Agilent ICP-OES 720 Axial instrument for heavy metals.
Macrophyte Chemical Analysis
Native marginal and aquatic vegetation species together with non-native macrophytes were collected at each site for heavy metal analysis. Five stems of each emergent plant (i.e., T. capensis, C. sexangularis, and P. australis), five mature floating plants (i.e., P. crassipes and S. molesta), and about 200 g of the submerged plant S. pectinatus were collected by hand and rinsed with distilled water to remove any debris and periphyton biofilm.
Plant material was transferred into separate zip-lock bags (one per plant species) and stored on ice until it reached the laboratory, where plant samples were immediately oven-dried at 60 °C for 72 h. This procedure stops all cell processes (e.g., respiration), ensuring that samples represent the nutrient composition per gram of leaf without the influence of water. Thereafter, dried leaves were homogenised into coarse material by grinding with a mortar and pestle. About 6.5 g of dried plant tissue was weighed, packaged into aluminium foil envelopes, and sent to BEM-Labs for heavy metal analysis, including Fe, Hg, Zn, Cd, As, Pb, and Cu. For each sample, 20 mL nitric acid and 5 mL hydrogen peroxide were added, and the flask was heated to digest the sample until approximately 1 mL of solution remained. The remaining sample was transferred into a 10 mL volumetric flask, made up to volume with distilled water, and filtered. The filtered sample was analysed on the Agilent ICP-OES 720 Axial instrument for heavy metals.
Data Analysis
To assess changes in water and sediment chemistry between sites upstream and downstream of semi-permanent and permanent P. crassipes and S. molesta mats, the percentage reduction in water and sediment heavy metal concentrations was computed (a worked sketch of this and the following index calculations is given after the index definitions below). Furthermore, to characterise the current environmental condition of the Swartkops River and its heavy metal concentrations, sediment and macrophyte indices were used to quantify heavy metal assimilation by both native and non-native macrophytes along the river system.
The geo-accumulation index (Igeo), which measures the degree of heavy metal contamination, was used to estimate heavy metal pollution in the Swartkops River during the study and was calculated following the equation defined by Muller [50]:

Igeo = log2 [Cn / (1.5 × Bn)]

where Cn is the measured concentration of the metal in sediments, and Bn is the geo-chemical background value of the metal. The factor 1.5 is used to minimize possible variations in the background values that may be attributed to lithogenic variations [51]. The geo-chemical background values were taken from the world surface rock average given by Martin and Meybeck [52].
Secondly, the pollution load index (PLI), an important index for evaluating sediment quality, was used to estimate heavy metal pollution in the sediments. The PLI is expressed as the product of the contamination factors (CF) of all heavy metals measured at a site and was calculated following the formula adopted from Islam et al. [54]:

PLI = (CF1 × CF2 × … × CFn)^(1/n)

The contamination factor (CF) of each metal was computed separately per site from the metal concentration and the background value of the metal (background value from the average shale value) [55], following Atgin et al. [56]:

CF = Cm (sample) / Cm (background)
where Cm (sample) is the concentration of the heavy metal in sediment and Cm (background) is the background value of the metal, adopted from the world surface rock average of Martin and Meybeck [52]. According to Chakravarty and Patgiri [57], a PLI value < 1 indicates no pollution, whilst a PLI value > 1 indicates pollution (deterioration of the sediment).
The enrichment factor (EF) provides a more comprehensive assessment of heavy metal contamination [58]. The method is based on normalization of the measured heavy metal concentration with respect to a reference metal, such as Aluminium (Al) or Fe [59]. In the present study, Fe was used as the reference heavy metal for normalization because, according to Nirmala et al. [60], Fe is redox sensitive under oxidation conditions and constitutes a significant sink of heavy metals in aquatic ecosystems.
Background values used for the present study were taken from the world surface rock average of Martin and Meybeck [52]. According to Chen et al. [61], EF < 1 indicates no enrichment; EF = 1-2, minimal enrichment; EF = 3-5, moderate enrichment; EF = 5-10, moderately severe enrichment; EF = 10-25, severe enrichment; EF = 25-50, very severe enrichment; and EF > 50, extremely severe enrichment. The EF was calculated following Buat-Menard and Chesselet [62]:

EF = [Cmetal (sample) / Cnormalizer (sample)] / [Cmetal (reference) / Cnormalizer (reference)]

where Cmetal (sample) is the concentration of the examined heavy metal; Cnormalizer (sample) is the concentration of the normalizer/reference heavy metal (Fe); Cmetal (reference) is the concentration of the examined heavy metal in a suitable background or baseline reference material; and Cnormalizer (reference) is the concentration of the normalizer heavy metal (Fe) in the same background.
Then, to assess the potential of native and non-native macrophyte species to accumulate heavy metals from sediments, the bio-concentration factor (BCF) was calculated following Zayed et al. [63]:

BCF = Cmetal (plant) / Cmetal (sediment)

where Cmetal (plant) is the concentration of a heavy metal in plant tissue and Cmetal (sediment) is its concentration in sediments. A BCF value > 1 indicates that the plant species is a hyper-accumulator of the heavy metal, a BCF value = 1 indicates an accumulator, and a BCF value < 1 indicates an excluder [64].
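To make the arithmetic of these indices concrete, the following minimal Python sketch implements the five calculations above, together with the percentage-reduction computation used to compare sites upstream and downstream of the mats. The function names are illustrative only, and the background values (Bn, Cm (background), and the EF reference concentrations) must be supplied from Martin and Meybeck [52] or the average shale values, which are not reproduced here.

import math

def igeo(cn, bn):
    # Geo-accumulation index (Muller): Igeo = log2(Cn / (1.5 * Bn)).
    return math.log2(cn / (1.5 * bn))

def cf(cm_sample, cm_background):
    # Contamination factor of a single metal at one site.
    return cm_sample / cm_background

def pli(cfs):
    # Pollution load index: n-th root of the product of all CFs
    # (math.prod requires Python 3.8+).
    return math.prod(cfs) ** (1 / len(cfs))

def ef(c_metal, c_fe, ref_metal, ref_fe):
    # Enrichment factor, normalized to Fe as the reference metal.
    return (c_metal / c_fe) / (ref_metal / ref_fe)

def bcf(c_plant, c_sediment):
    # Bio-concentration factor of a macrophyte for one metal.
    return c_plant / c_sediment

def pct_reduction(upstream, downstream):
    # Percentage reduction between an upstream and a downstream site.
    return 100 * (upstream - downstream) / upstream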
To test for significant differences in the sediment indices (i.e., Igeo, PLI, and EF) between sites and in the macrophyte assimilation factor (BCF) for each plant species, the Shapiro-Wilk test for normality and Levene's test for homogeneity of variance were first applied. None of the variables were normally distributed (Shapiro-Wilk, p < 0.05), nor were the variances homogeneous (Levene test, p < 0.05). Thus, a nonparametric test, in this case Kruskal-Wallis analysis of variance with a multiple comparison test, was employed. All statistical analyses were conducted in R version 3.6.1 [65], except where specified.
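As a rough illustration of this decision path, the sketch below tests normality and homogeneity of variance and falls back to the Kruskal-Wallis test, mirroring the procedure described above. It uses Python's scipy rather than R (an assumption made for consistency with the other sketches in this text), and Dunn's multiple-comparison step is omitted because it is not part of scipy.

from scipy import stats

def compare_groups(groups, alpha=0.05):
    # groups: one sequence of index values per site (or per species).
    normal = all(stats.shapiro(g)[1] >= alpha for g in groups)
    homogeneous = stats.levene(*groups)[1] >= alpha
    if normal and homogeneous:
        stat, p = stats.f_oneway(*groups)   # parametric route
        return "one-way ANOVA", stat, p
    stat, p = stats.kruskal(*groups)        # nonparametric route, as used in this study
    return "Kruskal-Wallis", stat, p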
Water and Sediment Chemistry
Heavy metal concentrations were variable along the Swartkops River, with no consistent reduction trend downriver (Table S2). Fe concentrations differed significantly between sites (H = 28.13, p = 0.001), with site 1 recording the highest Fe concentration (1.1 mg/L) and site 10 the lowest (0.09 mg/L) (Table S2). Zn concentrations also differed significantly between sites (H = 18.03, p = 0.034); the highest Zn concentration (0.12 mg/L) was recorded at site 10, and the lowest (0.02 mg/L) was recorded at all sites except sites 5 and 7 (Table S2).
COD concentrations were significantly different between sites (H = 21.89, p = 0.001), with the highest concentration recorded at site 5 (57.4 mg/L) and the lowest at site 1 (14.64 mg/L) (Table S2). As and Cu concentrations were not significantly different between sites, whereas Cd, Cr, Hg, and Pb concentrations were constant at 0.0021, 0.026, 0.0021, and 0.006 mg/L, respectively, throughout the sampling period (Table S2).
Swartkops River Sediment Contamination
The geo-accumulation index (Igeo) was significantly different for several heavy metals (Table 1).
Geo-accumulation index values revealed that sites were extremely contaminated by Cr, Fe, and Zn, with Igeo values above 5. Site 5 recorded the highest Igeo values for Zn (12.16) and Cr (11.06), whereas site 1 recorded the highest Igeo value for Fe (15.11) (Table 1). All sites were extremely contaminated (Igeo > 5) with Pb and Cr, except site 9 (Pb) and site 10 (Pb and Cr). Arsenic recorded the lowest Igeo values, ranging from −0.64, uncontaminated sediments (site 3), to 2.94, moderately contaminated (site 1) (Table 1).
The enrichment factor (EF) differed significantly between sites for five heavy metals, including As (H = 17.08) (Table 1). Site 1 recorded high EF values for the majority of heavy metals (i.e., As, Cr, Cu, and Hg), whilst site 5 showed high EF values for Zn and Pb (Table 1).
Based on the EF values obtained, no site experienced enrichment except site 1, which showed minimal Hg enrichment (Table 1). PLI values were not significantly different between sampling periods (Kruskal-Wallis ANOVA, p > 0.05) (Table 1). All recorded PLI values were below 1, except at site 5, which recorded a PLI of 1.10 in April. In general, June recorded PLI > 1 for the majority of sites; however, these differences were not significant.
Heavy Metal Assimilation along the Swartkops River
Pontederia crassipes and Salvinia molesta semi-permanent and permanent mats showed promising heavy metal assimilation, although this varied between sites. In some cases the trend was clear, showing heavy metal reduction between sites upstream and downstream of the P. crassipes and S. molesta mats and thus indicating possible macrophyte assimilation potential (Table S4).
Emergent native macrophyte species recorded the lowest bio-concentration factor (BCF) values when compared with both floating and submerged native macrophyte species (Table 2). Typha capensis and C. sexangularis showed BCF values of less than 1 for Cu at all sites; for Zn, T. capensis recorded a BCF of less than 1 at sites 1, 6, 7, 8, 9, and 10, whereas C. sexangularis recorded a BCF of less than 1 at sites 1, 7, 8, 9, and 10 (Table 2). Phragmites australis showed significantly different BCF values for Zn between sites (H = 16.43, p = 0.05) at all sites, except site 5, which showed BCF values of less than 1 (Table 2).
The BCF values of the floating non-native P. crassipes were significantly different between sites for As (H = 23.15) (Table 2). Pontederia crassipes recorded BCF values of less than 1 for Cu and Zn at all sites, whilst BCF values were greater than 1 for As at sites 1 and 9, and for Hg at sites 6, 7, 9, and 10 (Table 2).
Discussion
The present study reports that the Swartkops River system is heavily polluted by various heavy metals. Although our results show some degree of assimilation by native and non-native macrophyte stands, continuous inputs from non-point sources at different entry points along the river system exceed this potential. Research on macrophyte assimilation (or phytoremediation) has mainly been conducted in mesocosm settings, with limited in situ case studies in the natural environment [23,66-68].
The effectiveness of the non-native P. crassipes and S. molesta in reducing heavy metal concentrations in water and sediment through phytoremediation was tested in the present study and in others (e.g., [19,29,69,70]). Although the results did not show a consistent decreasing trend, owing to high variation between sites, the findings still indicate promising macrophyte assimilation potential, as most sites showed reduced heavy metal concentrations, as hypothesised. The Swartkops River is in a deteriorating state, and these findings have been corroborated by several earlier studies (see [49,71-73]), which revealed that intense land-use developments along the Swartkops River catchment and riparian areas strongly affect the system's physical, chemical, and biological well-being.
Findings from the present study revealed a few significant reductions in heavy metal concentrations between the immediate upstream and downstream sites of individual non-native macrophyte patches. For example, between site 2 and site 4, upstream and downstream of a P. crassipes mat (site 3), as well as between site 8 and site 10 around an S. molesta mat (site 9), we recorded more than 45% reduction in heavy metal concentrations (i.e., Zn, Cr, As, Pb, and Hg) (Table S4).
These reductions were attributed to the P. crassipes and S. molesta mats acting as accumulators of heavy metals arriving from upstream. These findings corroborate Mishra and Tripathi [19], who reported on the effectiveness of P. crassipes in accumulating Cr and Zn from effluents, where P. crassipes efficiently assimilated more than 50% of the heavy metal concentration within only 11 days of exposure, further emphasising the phytoremediation potential of these macrophytes.
It is also possible that some of the pollutants were accumulated by sediments: as contaminants wash downriver, some slowly settle and become bound in the sediments. Jernström et al. [74] indicated that the nature of sediments in water bodies reflects, to a great extent, the condition of the system as a result of the various pollutants in the water; in addition, these sediments may serve as indicators by revealing the concentrations of the pollutants settling in them.
These results were supported by Hadad et al. [75] and Schaller et al. [76], who reported that the top sediment layer, combined with a low diffusion rate of elements, can play a significant role in the adsorption and accumulation of heavy metals. Various indices from the present study, including EF, PLI, and Igeo, showed that sediments along the Swartkops River system were moderately to extremely contaminated as a result of pollutants along the river catchment (Table 1).
These findings were most evident at site 5, which recorded the highest sediment concentrations of heavy metals (i.e., Zn, As, Cr, Cu, and Pb); in addition, the EF and Igeo values were highest for Zn and Pb, revealing extreme sediment contamination by these heavy metals at site 5 (Table 1). This emphasizes that the Swartkops River faces probable environmental pollution, especially by the heavy metals Fe, Cu, Cr, Zn, and Pb.
Both of these studies showed reductions in heavy metal concentrations, indicating phytoremediation potential by native and non-native macrophytes. Despite this macrophyte assimilation potential, few hyper-accumulated heavy metals were recorded compared with what other studies have achieved using the same macrophyte species [15,78,79]. This could be attributed to the fact that the present study used only the leaves of C. sexangularis, P. australis, and T. capensis for heavy metal analysis. Macrophytes assimilate heavy metals, but concentrations differ between plant parts or segments. For example, Vymazal and Březinová [80] reported that the assimilation and distribution of heavy metals in above-ground plant parts differs from that in below-ground parts because of different physiological absorption mechanisms in plants. Other studies, including Chandra and Yadav [77], Eid et al. [70], Bonanno [81], and Vymazal and Březinová [80], support these findings by revealing that emergent macrophyte species, including Phragmites spp., Cyperus spp., and Typha spp., usually show similar accumulation trends.
These macrophyte species accumulate larger quantities of certain heavy metals, including Cr, Mn, Cu, Ni, Hg, Pb, and Zn, in below-ground plant parts than in above-ground parts, usually in the order roots > rhizomes > leaves > stems. Although the present study did not evaluate heavy metal concentrations in the below-ground parts of P. australis, C. sexangularis, and T. capensis, the accumulated heavy metal concentrations and low BCF values recorded in emergent macrophytes could have been influenced by the same trend, namely variation in distribution within plant parts, which may also differ with plant size.
In contrast, floating (non-native) and submerged (native) species showed greater uptake of heavy metals (i.e., Cr, Fe, Hg, and Zn), with higher BCF values than emergent macrophytes (Table 2). This was expected for P. crassipes, which is known for a high accumulation ability and tolerance to disturbance. The high uptake of heavy metals by S. pectinatus could have been influenced by the use of the whole plant (roots, stem, and leaves), which was fully exposed to the heavily polluted system.
The present study further revealed that P. crassipes was the most effective accumulator of heavy metals, followed by S. pectinatus, P. australis, C. sexangularis, and T. capensis. This order of accumulation across macrophyte growth forms (floating, emergent, and submerged) is similar to that of Goulet et al. [82], who tested the floating Lemna minor (L.) (Araceae) (common duckweed), the submerged Potamogeton epihydrus (Raf.) (Potamogetonaceae) (ribbon-leaf pondweed), Nuphar variegata (Durand.) (Nymphaeaceae) (yellow pond-lily), and the emergent Typha latifolia (L.) (Typhaceae) (common cattail) for heavy metal removal in a mesocosm study. That study revealed that floating macrophytes were the most effective at assimilating heavy metals, followed by submerged and lastly emergent macrophytes, in agreement with the present study.
Although heavy metal assimilation was promising, the Swartkops River did not show overall water and habitat quality improvement downriver. This indicates that the reductions in heavy metal concentration (>45%) across native and non-native macrophyte stands did not translate into improved water and sediment quality, although some of the sediment and macrophyte pollution indices were variable across sites. This could be due to constant influxes from multiple non-point and point sources (i.e., sewage treatment works, industries, and other anthropogenic activities) along the river system, meaning that these constant inputs significantly drive the system's deterioration. The distance between sampled sites could also have influenced our findings, as some sites were located about a kilometre away from the non-native macrophyte stands, allowing pollution inputs between sites and further masking the assimilation observed in this study.
In addition, field experiments are dynamic and difficult to work with because they are complex and affected by multiple extraneous variables that are not easy to control and can affect the outcome. Since this study was the first of its kind in the highly impacted Swartkops River system, we show that the phytoremediation technique can be effective; however, the state of the system and land-use pressure play a crucial role, and we recommend more field-based studies with limited alterations.
Conclusions
The study showed the promising phytoremediation potential of native and non-native macrophytes to mitigate heavy metal contaminants from anthropogenic activities along the Swartkops River system. Water and sediment pollution indices were variable across sites, showing no consistent reduction trend in water and sediment quality, in contrast with our hypothesis. The lack of water and sediment quality improvement downriver could have been due to constant pollution effluents from multiple non-point sources along the river system.
It is also possible that the river system is so severely polluted that the ecosystem services provided by both native and non-native macrophytes, although evident, were suppressed. This study showed that native and non-native macrophytes can be used to assimilate pollutants; however, this is better achieved in more controlled settings, i.e., laboratory and mesocosm settings, than under complex and dynamic field conditions.
The screening of sediments and macrophytes (both native and non-native) provided an overview of the state of the Swartkops River system, which may serve as an early warning of changes in the system. Various authors [14,16,23,30,31,40] have demonstrated phytoremediation success in reducing heavy metal concentrations in water and sediments; however, very few studies have tested whether the improvement of water and sediment quality assists the recovery of biological diversity, particularly through biological indicators such as aquatic macroinvertebrates.
Thus, we propose that adjunctive studies be conducted to assess phytoremediation using biological variables (periphyton, aquatic macroinvertebrates, etc.) to quantify phytoremediation success. The current study further emphasises that physicochemical variables are variable rather than sensitive and can only provide a snapshot of habitat degradation. The sediment and macrophyte indices were reliable indicators of heavy metal contamination and macrophyte bio-accumulation potential; however, excessive anthropogenic input into the Swartkops River suppressed macrophyte ecosystem services. We therefore recommend more field studies testing various green technologies, assessed with relevant biological indicators, to mitigate the deterioration of water and habitat quality.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/plants10122676/s1. Table S1: A summary of bio-physical characteristics of the ten sampling sites at the Swartkops River system, Eastern Cape, South Africa. Table S2: Water chemistry mean values and ±standard deviation recorded from 10 sites, including non-native macrophyte stands, along the Swartkops River system, South Africa, from April-September 2018. Bolded H-values indicate significant differences (Kruskal-Wallis ANOVA, p < 0.05); NS = not significant, p > 0.05. Table S3: Sediment chemistry mean and (±standard deviation) recorded from 10 sites, including native macrophyte stands, along the Swartkops River system, South Africa (April 2018-September 2018). Bolded H-values indicate significant differences (Kruskal-Wallis ANOVA, p < 0.05). Table S4: Percentage reduction of heavy metal concentrations in sediments at semi-permanent and permanent stands of Pontederia crassipes and Salvinia molesta along the Swartkops River system, Eastern Cape, South Africa. Any opinion, finding, and conclusion or recommendation expressed in this material is that of the authors, and not that of the National Research Foundation. We thank Lenin Chari, Takudzwa Comfort Madzivanzira, Aldwin Ndlovu, Tiyisani Chabalala, Zolile Maseko, Tshililo Mphephu, Evans Mauda, Bongiswa Ramalivhana, Frank Akamagwuna, Thifhelimbilu Mulateli, Nwabisa Magengelele, and Chumakwande Makehle for assistance during fieldwork. | 2021-12-08T16:18:48.413Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "0bb095d0bc58d55c11594483017b67a7c3bf8650",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/10/12/2676/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19ad7b14bd1fde43cf6ec9affa9a11826219d610",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270631893 | pes2o/s2orc | v3-fos-license | Evaluation of sterile glove usage on digital tactile sensitivity using the Grating Orientation Task
Introduction Surgical glove use may be associated with a decrease in tactile sensitivity, with thicker gloves or double-gloving techniques further altering sensation. This study evaluates digital tactile sensitivity by use of a Grating Orientation Task (GOT) with multiple sterile gloving techniques (no gloves, single standard gloving, double standard gloving, orthopedic gloves, and micro-thickness gloves). Methods Each participant performed the GOT at increasing grating widths until correctly noting orientation in ≥8 of 10 trials with multiple glove types or double-gloving technique. Glove order was randomly assigned and participants were blinded to the orientation and dome size. Results All gloves except micro-thickness gloves showed increased threshold sensitivity values (i.e. worse fingertip sensitivity) when compared to control (micro:control, p = 0.105, others:control, p < 0.05). Single-layer gloves showed no significant difference in sensitivity when compared to orthopedic (p = 0.06) or double-layer latex gloves (p = 0.26). Discussion Standard latex gloves decreased fingertip sensitivity when evaluated with the GOT. Double-layer and orthopedic latex gloves do not decrease sensitivity when compared with single-layer gloving. Micro-thickness gloves may provide similar tactile sensitivity to no surgical glove.
Introduction
Single-use sterile gloves have been used in human and veterinary surgery since the 1960s, and glove styles have expanded and developed rapidly in the recent past. Sterile gloving techniques are used to maintain a sterile working field, minimize contamination, protect surgeons' hands from injury, and preserve tactile sensation. Double gloving is currently recommended for most human medical procedures, as it has been shown to decrease perforation rate and potential exposure to pathogens (1). Perforation of the outer glove still occurs at a rate similar to single-gloving techniques, but inner-glove perforation is considerably lower (2,3). While the benefits of double gloving have been shown to include a reduction in inner-glove perforation, this has not directly translated into a demonstrable decrease in incision-site infection rate in veterinary medicine (4). Further, some surgeons claim that double-gloving techniques come with a perceived loss of tactile sensitivity and dexterity (5).
Many studies have investigated the prevalence and effect of glove punctures identified during routine surgeries in both the human and animal medical fields (6-9). Options for reducing the risk of puncture and exposure range from double-gloving recommendations to the use of color-indicator primary-layer gloves or woven-steel protective outer gloves (2,9,10). Previous studies have shown no loss of dexterity or tactile sensation with the double-gloving technique when evaluated with a two-point discrimination test (5). Newer testing methods for evaluating tactile sensation have been developed (11-13) and may provide more specific insight into a surgeon's tactile sensation with different gloving techniques.
The Grating Orientation Task (GOT) has been proposed as a method for reducing stimulus variability when assessing tactile sensation (13). A contact dome covered in grooves and ridges of equal width provides a larger surface and may allow a better assessment of tactile sensation than a focal-point analysis object. Additionally, both the orientation of the grating and the size of the grooves can be altered to assess the limits or capabilities of tactile sensation in many applications. Previous evaluations have determined that two orthogonal orientations of the grating (proximal-distal and lateral-medial) provide adequate variability between the patterns for differentiation when assessing fingertip tactile sensation (13).
The objective of this study was to analyze the effect of different glove types and double standard gloving on the tactile sensation of participants' fingertips. The hypothesis was that there would be no significant change in tactile sensation between the control (no glove) group and the double-gloved sample group.
Materials and methods
Institutional Review Board approval was granted for the project (ISU IRB #17-563-000). Clinical-year veterinary students, veterinary residents, and faculty of the Department of Veterinary Clinical Sciences volunteered to perform the GOT using JVP domes (Johnston, VanBoven, and Philips domes; Stoelting Co., Wood Dale, IL) to discern grating orientation (Figure 1). JVP dome sizes used included 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0, and 0.75 mm grate widths. Gloving techniques evaluated included single latex gloving, double latex gloving, single orthopedic gloving, and micro-thickness gloves (Ansell Perry Style 42 [for both single and double layer], Ansell Encore Latex Orthopedic, and Ansell Encore Latex Micro; Ansell Ltd., Iselin, NJ). Individuals with a known latex allergy or with previous medical conditions that could affect digital tactile sensitivity (e.g., carpal tunnel syndrome, diabetes mellitus with or without neuropathy) were excluded from participation. All double-gloving techniques were evaluated with both the inner and outer glove being the same size, as selected by each participant. Glove sizes were self-selected, and no direct size guidance was given. Glove sizes ranged from 6.0 through 8.0 for all standard latex and micro-thickness gloves; orthopedic gloves ranged from 6.5 through 8.0. All individuals who used size 6.0 gloves for standard and micro-thickness testing used size 6.5 for orthopedic gloves, due to manufacturer size limitations for the specific style selected for this study. Control data were collected from each participant without surgical gloves before performing the GOT for any gloved category. The gloved tests were performed in a modified randomization pattern: no glove was always tested first, and the remaining gloves were then tested in random order, except that single standard gloving was always followed by double standard latex wherever the former fell in the previously randomized order.
The Grating Orientation Task (GOT) was used to evaluate digital sensitivity via participants' detection of the correct orientation of the device. The same researcher performed all GOT trials to provide consistency in the placement and pressure of the dome on participants' fingertips. Participants were blinded to dome size and orientation by a cardboard screen. Each participant placed their hand through the screen and held their dominant hand supine with the index finger extended. Grates were placed on the participant's dominant index finger, oriented in either a proximal-distal (vertical) or lateral-medial (horizontal) direction, for 10 touches at each selected grating size. Orientation for each individual test was determined via a randomly generated coin flip (Random.org coin flipper tool), with heads assigned the vertical orientation and tails the horizontal orientation of the grating. Correct verbal identification of the grating orientation (horizontal vs. vertical) in at least 80% of touches was required to determine a threshold sensitivity value, based on the manufacturer's recommendation of >50% above random chance. If three incorrect responses were recorded for any grating size, that size was deemed below the participant's threshold sensitivity, and the next larger dome size was evaluated. Each participant nevertheless completed a total of 10 touches for a given grating size, even after failing to identify the proper orientation in 3 or more touches, in an effort to avoid revealing incorrect answers. Participants first performed the task with the 1.5 mm dome and then moved to a smaller size if their responses were correct in at least 8/10 touches, or to an incrementally larger grating if orientation was identified incorrectly. A response of "unknown" or "unsure" was recorded as incorrect. If a participant attempted to roll or press on the dome at a pressure beyond what the evaluator supplied, that individual touch was discarded and a sequential touch was used to fulfill the total required for each size, after generating another random coin flip for orientation. This was continued until the participant correctly identified the grating orientation in at least 8/10 touches, which was recorded as their threshold sensitivity for each glove test parameter. Participants were not notified of the proper orientation after each touch and were not told their overall threshold grating size for each gloving technique.
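The staircase logic of this protocol can be condensed into a short simulation sketch, shown below. Here respond stands for a hypothetical participant model (not anything measured in the study), and the discard-and-repeat rule for rolled or pressed touches is omitted for brevity.

import random

SIZES = [0.75, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]  # JVP dome grate widths, mm

def run_block(respond, size, n_touches=10):
    # One block of 10 touches at a fixed grating size; orientation by coin flip.
    correct = 0
    for _ in range(n_touches):
        orientation = random.choice(["vertical", "horizontal"])
        if respond(size, orientation) == orientation:
            correct += 1
    return correct

def got_threshold(respond, start=1.5):
    # Step down after >=8/10 correct, step up otherwise; the threshold is the
    # smallest size passed before failing at the next smaller size.
    i = SIZES.index(start)
    passed = None
    while 0 <= i < len(SIZES):
        if run_block(respond, SIZES[i]) >= 8:
            passed = SIZES[i]
            i -= 1            # passed: try the next smaller grating
        elif passed is not None:
            return passed     # failed immediately below a passed size
        else:
            i += 1            # no size passed yet: move to a larger grating
    return passed             # smallest size passed (None if none passed)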
Analysis
A power analysis was calculated to determine the sample size needed to detect a 0.5 mm threshold difference in the population above the mean. With alpha set at 0.05 and a power of 0.8, a minimum of 53 participants was required for significance. Data were statistically analyzed using online statistical analysis software (Prism, GraphPad Software, http://www.graphpad.com). Normal distribution was not found using a Shapiro-Wilk test. For each independently recorded variable, a mean, standard deviation, and standard error of the mean were calculated. Glove threshold values were compared with a Kruskal-Wallis test for multiple independent samples. Significance between groups was assessed with the Dunn multiple comparisons test, with significance set at p < 0.05. A post-hoc analysis was performed using a Mann-Whitney U test.
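The paper does not report the variance assumption behind the n = 53 figure, so the normal-approximation formula below only illustrates the shape of such a sample-size calculation; sigma is a placeholder for the assumed population standard deviation, and the resulting n depends strongly on it.

import math
from scipy.stats import norm

def n_required(delta, sigma, alpha=0.05, power=0.80):
    # Two-sided, one-sample normal approximation:
    # n = ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# e.g. n_required(0.5, sigma) for a detectable 0.5 mm threshold difference,
# where sigma is an assumed value, not one reported in the paper.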
Results
A total of 60 participants were enrolled in and completed the study. Fifty-seven participants were 4th-year clinical veterinary students, two were small animal surgery residents, and one was small animal surgery faculty. The mean age of participants was 25.8 years (range 24-34); gender was not recorded. A total of 53 participants were right-hand dominant and 7 were left-hand dominant. The median surgical glove size was 7. The mean GOT threshold value without gloves was 2.16 ± 0.53 mm; with single-layer latex, 2.52 ± 0.43 mm; double-layer latex, 2.71 ± 0.45 mm; micro latex, 2.43 ± 0.58 mm; and orthopedic latex, 2.78 ± 0.43 mm (Figure 2).
The non-gloved test compared to micro-thickness gloves showed no statistically significant difference in threshold sensitivity (p = 0.105). Single-layer latex showed an increased fingertip threshold when compared to no gloves (p = 0.01). Both double-layer latex and orthopedic gloves showed increased threshold values when compared to no gloves (p < 0.001 for both glove types). There was no statistically significant effect when comparing the threshold sensitivity of single-layer gloves to either double-layer or orthopedic gloves (p = 0.27 and p = 0.06, respectively). Double-layer and orthopedic gloves did have statistically significantly higher GOT threshold values (decreased fingertip sensitivity) when compared to micro-thickness gloves (p = 0.04 and p = 0.005, respectively). There was no difference between single-layer and micro-thickness gloves (p = 0.93), nor between double-layer and orthopedic gloves (p = 0.97). No statistically significant differences were noted when comparing left-handed participants' threshold values with right-handed participants' values for any glove type tested.
Discussion
The objective of this study was to evaluate the sensitivity threshold for different gloving techniques using the GOT. The perception that double-layer latex gloves may decrease fingertip sensitivity when compared with single-layer latex gloves was not supported, as they had similar threshold sensitivity values. However, the use of single-layer standard latex gloves did show an increased threshold value when compared to no gloves, suggesting poorer fingertip sensitivity. Single-layer latex surgical gloves increased GOT threshold values by 16.7% relative to the control, and double-layer and orthopedic latex gloves increased threshold values by 25.4% and 28.7%, respectively. However, no significant difference was noted between the single-layer, double-layer, and orthopedic glove groups. Micro-thickness latex gloves had an 11.7% increased threshold relative to the control, but this was not statistically significant. As an option advertised for improved tactile sensitivity with a 20% thinner latex glove (14), micro-thickness gloves appear to yield sensitivity similar to no glove use (p = 0.105). To the authors' knowledge, these thinner-style gloves have not been shown to have an increased perforation rate, but studies comparing specific glove styles and perforation rates are lacking. Additionally, while limited studies have evaluated different brands of surgical gloves (15), our study evaluated multiple thickness styles as well as double vs. single gloving within one glove manufacturer. Orthopedic gloves have been shown to have a perforation rate similar to double-gloving techniques in veterinary medicine (16) and are advertised as up to 50% thicker than standard latex surgical gloves (17). However, one study has shown decreased sensitivity with orthopedic gloves while reporting a similar perforation rate in human arthroplasty between orthopedic-thickness and single-layer standard latex gloves (18). Our study showed similar threshold sensitivity values for both standard latex and orthopedic gloves when evaluated using the GOT.
While the GOT has been used in multiple settings to evaluate digital tactile sensitivity, to the authors' knowledge it has not been used to evaluate sensitivity with surgical gloves. Although the two-point discrimination task is more commonly used in similar studies (5,9,18), the GOT provides a large platform for assessing fingertip sensitivity and could provide a more accurate interpretation in a surgical setting. Our study found results similar to the conclusions of Fry et al. when evaluating the tactile sensitivity of single vs. double gloving (5).
Sterile surgical gloves are a necessity in modern veterinary surgery. With the risk of glove perforation or damage well documented, recommendations can be made for either thicker gloves or double-layer gloves in procedures with a higher risk of perforation (2,3,6,9,10). Based on evaluation with the GOT, the use of sterile latex surgical gloves does increase the GOT threshold sensitivity value. However, this effect was not found to be significant when comparing standard gloving to orthopedic gloves or double-layer gloves. Recommendations for gloving techniques that could decrease the risks associated with glove perforation can therefore reasonably be made without significant concern for loss of tactile sensitivity relative to standard gloving.
The authors acknowledge several limitations to the study. A spring or pressure sensor was not used, which could change the sensitivity threshold for some participants when evaluating with the GOT (12). The fit of the gloves was not measured or evaluated; improper sizing has been shown to affect dexterity, although not sensitivity (19). Glove sizes were based on individual selection rather than fitting to a specific size. The duration of glove wearing prior to testing, as well as the fit of sterile surgical gloves, could be another route of future investigation to determine their effects on fingertip sensitivity evaluated with the GOT. Additionally, gender was not recorded in an effort to blind the evaluation of the data, but it could be a source of variance in the accuracy of results or in fingertip sensitivity. Further, the entire study population has experience and familiarity with sterile gloves; a sample of participants with infrequent or no exposure to sterile gloving techniques might provide a more accurate assessment of the impact on fingertip threshold sensitivity values.
Participants were instructed not to roll or move their finger, but enforcing this was not deemed feasible without direct researcher oversight controlling and censoring those tests where participants did inadvertently roll or move their fingertip. Those individual test results where improper technique required correction were not recorded, and the test was repeated until each individual had an appropriate number of touches to determine threshold sensitivity (≥8 positive responses out of 10 touches at a set dome size). Participants were also instructed to invert their hand and hold it in a supinated neutral position; however, wrist angle and hand posture were not specifically controlled, and some incorrect postures could potentially affect tactile sensitivity (20). Since data collection for this study, a more standardized guideline for instruction and positioning of the participant's hand during the GOT has been proposed. Wang et al. proposed a stepwise "two-down one-up" rule to determine an exact threshold sensitivity value more precisely (21). This consisted of a decrease in grating size after two correct responses, or an increase in width after one incorrect answer; each step up was recorded as a transition point, and the mean of 8 such transitions determined the threshold sensitivity value. Further, Wang et al. (21) established a short teaching parameter to visualize and confirm the grating orientation of the JVP domes before testing, which was not performed in the present study. In our study, each threshold sensitivity value was assessed on only 10 touches per dome size, but this allowed more tests to be completed in a given timeframe, as the test was performed to determine a value for each of the five tested parameters.
A limited selection of glove types was used in this study; those selected reflect the authors' experience with common glove types used in a veterinary teaching hospital setting. This also limited the specific glove sizes available, with some participants needing to perform the test with orthopedic gloves a half size larger than their other gloves, due to manufacturer limitations on the sizes and styles available. Further studies could expand to other commonly used glove types (e.g., nitrile, latex-free surgical gloves, textured) to determine their effect on fingertip sensitivity under similar testing parameters.
The GOT used in this study is simply a touch and tactile sensitivity test. It does not evaluate dexterity, motion, friction, or changes in pressure, which may have a larger impact on more minute fingertip sensitivity (13). The change in fingertip sensitivity noted in this study may not correlate with altered sensitivity in an in vivo setting, where much more information is available to discern small changes in surface texture.
In conclusion, single-layer, double-layer, and orthopedic sterile latex gloves showed increased threshold sensitivity values when compared to the bare fingertip. However, this difference was fairly small, and there was no difference in sensitivity between single-layer and double-layer latex gloves. While there may not be the same impetus for a strong double-gloving recommendation in the veterinary field as in human medicine, double gloving may still have use in procedures with a high rate of glove perforation. This study suggests that double gloving may be recommended as deemed necessary without significant concern for its impact on fingertip sensitivity.
FIGURE: Threshold sensitivity values (mean ± standard error of the mean) for each gloving category evaluated. Asterisks indicate a statistically significant difference in threshold sensitivity between groups (p < 0.05). | 2024-06-21T15:05:06.149Z | 2024-06-19T00:00:00.000 | {
"year": 2024,
"sha1": "e8ab9da23ea3edfdd3b39552ffbabca896fe8bd3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2024.1401130/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "292f18d2de023616f4311f90047deab37b5c7dc3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235354610 | pes2o/s2orc | v3-fos-license | Absence of Non-Canonical, Inhibitory MYD88 Splice Variants in B Cell Lymphomas Correlates With Sustained NF-κB Signaling
Gain-of-function mutations of the TLR adaptor and oncoprotein MyD88 drive B cell lymphomagenesis via sustained NF-κB activation. In myeloid cells, both short and sustained TLR activation and NF-κB activation lead to the induction of inhibitory MYD88 splice variants that restrain prolonged NF-κB activation. We therefore sought to investigate whether such a negative feedback loop exists in B cells. Analyzing MYD88 splice variants in normal B cells and different primary B cell malignancies, we observed that MYD88 splice variants in transformed B cells are dominated by the canonical, strongly NF-κB-activating isoform of MYD88 and contain at least three novel, so far uncharacterized signaling-competent splice isoforms. Sustained TLR stimulation in B cells unexpectedly reinforces splicing of NF-κB-promoting, canonical isoforms rather than the ‘MyD88s’, a negative regulatory isoform reported to be typically induced by TLRs in myeloid cells. This suggests that an essential negative feedback loop restricting TLR signaling in myeloid cells at the level of alternative splicing, is missing in B cells when they undergo proliferation, rendering B cells vulnerable to sustained NF-κB activation and eventual lymphomagenesis. Our results uncover MYD88 alternative splicing as an unappreciated promoter of B cell lymphomagenesis and provide a rationale why oncogenic MYD88 mutations are exclusively found in B cells.
INTRODUCTION
MyD88 has long been studied as an adaptor molecule for Toll-like receptor (TLR) and Interleukin-1 receptor (IL-1R) signaling in innate immunity (1). Its pivotal role is strikingly illustrated by the fact that loss-of-function mutations lead to severe immunodeficiency, whereas gain-of-function mutations promote oncogenesis: for example, rare dysfunctional alleles of MYD88 compromise formation of the MyD88-mediated post-receptor complex (2), the so-called Myddosome (3,4). Its assembly is a prerequisite for effective activation of the IL-1R-associated kinases (IRAKs) 2 and 4 and eventual activation of NF-kB and mitogen-activated protein (MAP) kinases (1). Patients carrying loss-of-function MYD88 alleles consequently fail to respond to microbial TLR agonists and IL-1 and thus do not mount a sufficient innate immune response against pyogenic bacteria, leading to insufficient immunity and frequent premature death (5). Conversely, MYD88 mutations leading to constitutive Myddosome assembly (6), most notably the mutation Leu 265 to Pro (L265P) (7), are oncogenic and associated with sustained NF-kB signaling. L265P drives lymphoproliferation in murine models (8). In humans, L265P is highly prevalent in various B cell malignancies (7) but absent in other, e.g. myeloid, hematopoietic malignancies (8). Its strict occurrence in B cell malignancies has highlighted L265P's diagnostic, chemo- and immunotherapeutic potential (9-11) but also posed the question of why only B cells are vulnerable to MYD88 gain-of-function mutations. Additionally, the varying frequency of the L265P mutation in different B cell malignancies has been puzzling: although the MyD88 L265P mutation may be found in up to 90% of Waldenström's Macroglobulinemia patients (12), in diffuse large B cell lymphoma (DLBCL) and chronic lymphocytic leukemia (CLL) only 30% or 4% of patients, respectively, carry this or other known gain-of-function MYD88 mutations, depending on subtype (7,13). Thus, other mechanisms apart from MYD88 mutation appear to operate in L265P-negative patients, whereas a consistent "NF-kB signature" has been recognized as a unifying feature of most of these B cell malignancies (14-16).
The activation of NF-kB is also a primary outcome of MyD88-dependent signaling in myeloid cells (1). However, negative feedback on NF-kB signaling by alternative splicing seemingly operates in myeloid cells: TLR stimulation with LPS leads to the upregulation of a splice variant, then termed 'MyD88 short' (MyD88s, here also referred to as isoform 3, see Figures 1A, B and Table 1) (17). In contrast to constitutive splicing (18), alternative splice variants arise from "alternative" splice sites in pre-mRNAs that trigger, for example, exon skipping, alternative 5' or 3' splice site usage within exon or intron sequences, or intron retention. The resulting transcripts may be subject to frame shifts, premature termination codons and/or nonsense-mediated decay (NMD) (18,19). Collectively, >90% of human multi-exon genes are subject to alternative splicing, which greatly expands the diversity and function of the proteome (20,21). In eukaryotes, the spliceosome, where so-called splice factors (SFs) cooperate with five small nuclear ribonucleoprotein complexes (U1, U2, U4/U6, and U5), recognizes and assembles on introns to cleave and ligate RNA molecules for intron removal, generating protein-coding mRNAs (22). The spliceosome catalyzes splicing with high precision but also displays high flexibility to regulatory signals for rapid responses, such as alternative splicing. Such a direct link between regulatory signals and innate immunity was recently proposed for the splice factors SF3A and SF3B, as both were shown to connect TLR signaling with the regulation of MyD88s (23,24). MyD88s (isoform 3) represents an alternatively spliced in-frame deletion of exon 2 and thus a MyD88 variant significantly shorter than the canonical isoform 2: whereas isoforms 1 and 2 contain the canonical N-terminal death domain (DD), the central intermediate domain (ID) for IRAK recruitment, and the C-terminal Toll/IL-1R (TIR) domain for TLR binding, MyD88s (isoform 3) lacks the ID. The ID has been proposed to couple activated TLRs to the IRAK-containing Myddosome and thus transduce the incoming signal (25). Hence, MyD88s is signaling-incompetent. Even though its characterization has been limited to myeloid and epithelial cells, MyD88s (isoform 3) has been considered by many to be a primary negative regulator of this pathway and part of an essential negative feedback loop induced upon TLR signaling in myeloid and epithelial cells (26-28). Isoform 1, the first reference sequence described, represents the longest transcript and translated protein for MyD88, arising from an alternative donor splice site 24 nt downstream of exon 3 that adds 8 amino acids within the TIR domain. Apart from isoforms 1-3, two additional splice isoforms of MYD88 have since been described, namely isoforms 4 and 5 (Figures 1A, B and Table 1), whose properties have been less studied. Additionally, whether alternative splicing and feedback regulation are operable in other, non-myeloid immune cells has not been addressed.
We speculated that if a negative feedback loop existed in B cells, TLR activation should also induce MyD88s (isoform 3) and thereby limit ongoing signaling. Interestingly, we found here that B cells only transiently induce isoform 3 upon short exposure to TLR agonists, whereas extended TLR-MyD88 stimulation rather maintained the canonical isoform. Our data thus indicate that in B cells an isoform 3-mediated negative feedback loop does not restrain NF-kB in the long term; rather, extended TLR stimulation drives the canonical, i.e. NF-kB-promoting, isoform and thus does not restrict extended NF-kB activation by diverting transcripts to less signaling-competent isoforms like MyD88s (isoform 3) as in myeloid cells.

FIGURE 1 | (A) … Table 1. (B) Illustration of target epitopes of the different antibodies used in this study. (C-E) HEK293T cells were transfected with plasmids for different MYD88 splice isoforms and lysates analyzed for expression or pathway activation by immunoblot (C, n=3) or NF-kB dual luciferase assay (D, n=4), respectively. (E) As in D but using MyD88-deficient I3A cells (n=3). (F) Immunoblot of primary B cell lysates from two different donors, lysates of HEK293T transfected with untagged isoforms 1 to 5 ('Isof. 1-5 ladder') and MyD88-competent or -deficient (KO) THP-1 reporter cells. In C-E, one representative of 'n' technical replicates is shown as mean + SD from three repeats. ns, non-significant; * p<0.05 according to two-way ANOVA comparing to isoform 1 (D, E).

In line with this, primary
B cell malignancies showed significantly higher levels of the canonical MYD88 splice isoform and contained transcripts for at least three novel, so far uncharacterized MYD88 splice isoforms. Our data warrant a re-evaluation of previously assumed, myeloid-cell-derived concepts of MYD88 splicing and NF-kB regulation in human primary cells, especially B cells, and provide an explanation for the susceptibility of B cells to oncogenic MYD88 mutation.
Study Participants and Sample Acquisition
All patients and healthy blood donors included in this study provided written informed consent before study participation. Approval for the use of their biomaterials was obtained from the local ethics committee at the University Hospitals of Tübingen, Germany, in accordance with the principles laid down in the Declaration of Helsinki as well as applicable laws and regulations. Patient recruitment, sample acquisition and preparation for B cell lymphoma, CLL and ovarian cancer patients are described below. Healthy blood donors were recruited at the Interfaculty Institute of Cell Biology, Department of Immunology, University of Tübingen, Germany.
Plasmid Constructs
N-terminally StrepIII-hemagglutinin (HA)-tagged and untagged MYD88 isoform expression constructs were based on the reference sequences listed in Table 1, generated by gene synthesis (Genewiz, Germany) or PCR cloning, and verified by DNA sequencing. Further details in Supplementary Material.
Cell Cultures
All HEK293T and DLBCL cell lines were cultured as previously described (6). THP-1 WT and MyD88-deficient cells were a kind gift from V. Hornung, Gene Center, Munich, Germany. THP-1 WT and MyD88-KO Dual reporter cells were provided by R. Amann, University of Tübingen, Germany. Further details in Supplementary Material.
Dual Luciferase Assay
Dual luciferase assays (DLA) were performed as described previously (6). Briefly, MYD88 isoform constructs (1-100 ng), an NF-kB firefly luciferase reporter (100 ng) and a Renilla luciferase control reporter (10 ng) were transfected into HEK293T cells. 48 h after transfection, cell lysates were measured using the Dual-Luciferase Reporter Assay System (Promega) according to the manufacturer's instructions. Further details in Supplementary Material.
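The Methods do not spell out the normalization arithmetic, but the standard computation for dual-luciferase data is a firefly/Renilla ratio expressed relative to a control condition; the one-line sketch below states that assumption explicitly.

def nfkb_fold_activation(firefly, renilla, firefly_ctrl, renilla_ctrl):
    # Renilla corrects for transfection efficiency and cell number;
    # the corrected signal is then scaled to the (e.g. empty-vector) control.
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)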
Immunoprecipitation, SDS-PAGE and Immunoblot
For immunoprecipitation, cell lysates (RIPA buffer with phosphatase and protease inhibitors) were incubated for 1.5 h with an anti-MyD88 antibody.
MYD88 Displays Comprehensive Splicing Leading to Functionally Disparate Isoforms
Given the importance ascribed to the MyD88s splice variant in murine myeloid cells (17,23), we sought to conduct a systematic characterization of all known human MYD88 splice variants. Until recently, five MYD88 mRNA transcripts with differential splicing had been reported (Table 1 and Figure 1A), giving rise to five protein isoforms with different domain structures (Figure 1B). Compared to the canonical isoform 2, isoform 1 features an additional 8 amino acids in frame between exons 3 and 4, i.e. in the TIR domain, due to the use of an alternative splice site (dark grey box and/or dashed lines in Figures 1B and S1B). Isoform 3 lacks the ID (exon 2) but includes both the DD and the TIR domain and corresponds to the aforementioned MyD88s variant. Isoforms 4 and 5 both lack the TIR domain entirely, due to frame-shifts resulting from the skipping of exon 3 (Figure S1A). In terms of canonical MyD88 domains, isoform 4 is thus limited to a DD-ID protein followed by 36 C-terminal amino acids that bear no apparent similarity to any known proteins (Figure S1A). In isoform 5, exon 2 is additionally skipped, resulting in a DD-only variant. In order to investigate functional differences, these isoforms were cloned into StrepHA-tagged expression constructs and their expression verified in transfected HEK293T cells. All constructs could be detected as proteins of 40, 37, 35, 27 and 23 kDa (Figure 1C and Table 1), albeit with different expression levels. The shortest isoform, termed isoform 5, was barely detectable, indicating it may be less stable. Next, we assessed the ability of all isoforms to drive NF-kB activation using dual luciferase assays upon transfection of equal amounts of expression plasmids into HEK293T cells. Whilst this assay cannot report on the ability to transduce incoming TLR signals, it is well established for assessing MyD88 downstream signaling potential (2,6,7,33-35). Here, isoform 1 was the most active isoform, followed by isoform 2, the canonical MyD88 splice variant (Figure 1D). Isoform 4 was also able to induce NF-kB activity, at slightly lower levels. Isoforms 3 and 5 were not able to induce NF-kB activity, consistent with the lack of an ID, which is required for assembly into a Myddosome and recruitment of IRAK4 (4,34). Since HEK293T cells endogenously express MyD88 isoform 2 at high levels (cf. Figure 1C), we also conducted the experiment in the MyD88-deficient HEK293T-derived cell line I3A (33). An almost identical picture emerged, with the canonical isoform 2 inducing the highest NF-kB activity (Figure 1E).
Since both murine and human MyD88s (isoform 3) have been described as dominant-negative regulators of canonical MyD88 due to the lack of the ID (34,36), we also tested whether isoforms 3 and 5 could block TLR signaling, e.g. via TLR5, in the HEK293T system, but this was not the case (Figures S1C, D). Collectively, non-canonical MyD88 isoforms with an intact DD and ID (isoforms 1 and 4) are capable of transmitting downstream NF-kB activity, and their expression may thus support the function of the canonical MyD88 (isoform 2), whereas isoforms 3 and 5 are inactive.
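This structure-function relationship can be condensed into a small lookup, sketched below. The domain sets follow Table 1 and Figure 1B, and the competence rule (DD plus ID required for Myddosome assembly and IRAK4 recruitment) is the one the luciferase data above support; the encoding itself is illustrative, not part of the study.

# DD = death domain, ID = intermediate domain, TIR = Toll/IL-1R domain.
ISOFORM_DOMAINS = {
    1: {"DD", "ID", "TIR"},   # TIR carries 8 extra residues
    2: {"DD", "ID", "TIR"},   # canonical isoform
    3: {"DD", "TIR"},         # MyD88s, lacks the ID
    4: {"DD", "ID"},          # frame-shifted C-terminus, no TIR
    5: {"DD"},                # DD-only variant
}

def nfkb_competent(isoform):
    # Isoforms 1, 2 and 4 score active in the luciferase assay; 3 and 5 do not.
    return {"DD", "ID"} <= ISOFORM_DOMAINS[isoform]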
Primary B Cells Express Multiple MYD88 Splice Isoforms
All analyses on MYD88 splicing have so far focused on (mostly transformed) myeloid and epithelial cells but as aforementioned MyD88 also plays an oncogenic role in B cells via NF-kB signaling (11). To assess the expression levels of these isoforms in primary B cells and be able to identify them by molecular weight via SDS-PAGE, we also generated expression constructs without a tag as a 'molecular ladder'. Whole cell lysates from HEK293T transfected with these untagged isoforms 1 to 5 and from unmodified and MyD88 knockout reporter THP-1 cells (see Methods) were then compared alongside lysates of primary B cells from two different healthy donors. We could detect the expression of four different MyD88 isoforms, identifying isoforms 2, 3 and (probably) 4 as matching the molecular weight of the untagged expression constructs and strongly reduced or absent in the edited THP-1 cells ( Figure 1F). In long exposures a band migrating at the height of isoform 5 was also visible in 1 donor but not THP-1 cells. Collectively, the canonical isoform 2 shows the highest protein expression levels in primary B cells and isoform 5 the lowest ( Figure 1F).
Transformed B Cells Also Express Multiple MYD88 Splice Isoforms
As expression patterns between primary and transformed cells may differ, we next characterized the expression of the five isoforms in several ABC and GCB DLBCL cell lines using isoform-specific primers to distinguish isoforms 1/2 from other isoforms ( Figures S2A, B, Methods and Table S1). This confirmed the expression of isoforms 3, 4 and 5 at mRNA level in these cell lines (Figure 2A). Using lysates of these ABC and GCB cell lines and an antibody directed against the DD, multiple MyD88-specific bands were also detectable ( Figure 2B). Taking into account the predicted molecular weights of the alternative isoforms (cf. Table 1 and Figure 1F) and their corresponding mRNA levels in BJAB cells vs primary B cells (cf. Figure 2A), certain labeled bands in Figure 2B are likely to correspond to isoform 3, 4 and 5. This same pattern of bands was observed using a combination of 2 additional anti-MyD88 antibodies ( Figure S2C). To enrich the alternative isoforms from whole cell lysates, we pulled down MyD88 using an antibody, which is directed against the TIR domain (exon 4) and thus should detect isoforms 1, 2 and 3. Subsequent immunoblot of the elution showed bands corresponding to isoform 2 and surprisingly isoform 4, possible due to DD-mediated heterodimer formation (6) with isoform 2 ( Figure 2C). Any detected alternative isoforms were less prominent than isoform 2 ( Figure 2B, C) in the DLBCL lysates. This suggests that B cells express multiple MyD88 splice isoforms both on mRNA and protein level but isoform 2 is also dominant in transformed B cells.
Primary B Cell Malignancies Show a Preference for Isoform 2
As these transformed cell lines may not reflect primary tumors, we next characterized the RNA expression of the five isoforms in primary B cell lymphoma samples and untransformed naïve B cells. Sashimi plots of RNAseq data from a total of 186 different lymphoma cases (Burkitt lymphoma, DLBCL, follicular lymphoma, follicular lymphoma-DLBCL), untransformed germinal center B cells (GCB, n=5) and naïve peripheral blood B cells (n=5, acquired by the German ICGC MMMLSeq consortium, see Methods) showed expression of all five isoforms at mRNA level ( Figures 2D and S2D). Consistent with earlier mRNA and protein analysis, the canonical isoform 2 was significantly more abundant in transformed vs untransformed B cells, whereas other isoforms were either comparable between these groups (isoform 3) or significantly lower (isoform 1, isoform 4 and isoform 5) ( Figure 2E). Thus, transformed primary B cell tumor samples also showed a preference for the canonical isoform 2but not isoform 3 (MyD88s) or other noncanonical isoforms. This was surprising as an 'NF-kB signature' has been attributed to these types of entities (14)(15)(16) and in myeloid cells NF-kB signaling was proposed to induce MyD88s (isoform 3) as aforementioned. Collectively, this suggests that, contrary to expectations, lymphoma samples show a higher ratio of canonical MyD88 (isoform 2) to MyD88s (isoform 3) than naive B cells. The analysis of sub-clusters (dependent on driver mutations) of DLBCL samples suggested that those driven by direct activators of NF-kB signaling (e.g. an 'MyD88-like' subcluster, see Methods) had a lower ratio of alternative splicing vs canonical, and specifically isoform 3, than those driven by indirect NF-kB activation (e.g. BCL2-, BCL6-and TP53-like DLBCL, see Figures 2F and S2E). In line with this, samples with NF-kB-promoting MYD88 gain-of-function mutations, such as L265P, had a lower isoform 3 vs isoform 2 ratio, i.e. expressed significantly more isoform 2 vs isoform 3 transcripts ( Figure 2G). At least on mRNA level, primary B cell tumors thus did not show evidence for an isoform 3-mediated negative feedback look despite an 'NF-kB signature' described for these entities.
TLR Stimulation Induces Isoform 3 Only Transiently in Stimulated B Cells
Based on what has been published regarding the induction of MyD88s via NF-kB signaling in myeloid cells (17,36), we next tested whether defined NF-kB activating stimuli, e.g. LPS for TLR4 and CpG for TLR9, would lead to an upregulation of isoform 3 in freshly purified ( Figure S3A) primary B cells. Indeed, TLR9 stimulation enhanced mRNA levels of isoform 3 and 4 at 6 h (mean fold change = approx. two-fold), but at later time points it decreased again to unstimulated levels. TLR4 stimulation induced a marginal but significant reduction of isoform 3 at 18 h ( Figure 3A). Overall, TLR stimulation changed the relative ratios of MYD88 splice isoforms very little and the variability between donors is high. As control, we isolated, differentiated and stimulated hMoMacs from the same donors and observed an increase upon 6 h TLR4 stimulation, in line with earlier studies (Figure 3B), although it has to be borne in mind, that these earlier studies mainly tested in murine macrophages or human epithelial cells (17,26,36). Conversely, when B cells were stimulated until proliferation with TLR9 CpG + IgM, surprisingly, MYD88 transcription was reduced altogether and did not lead to higher relative induction of the MyD88s (isoform 3, Figure 3C), despite the fact that TLR stimulation was effective at driving cellular proliferation as assessed by CFSE proliferation assays ( Figure S3B). Therefore, we conclude that proliferating B cells, like lymphoma samples, show and maintain a preference for canonical MyD88 signaling. Furthermore, in B cells sustained NF-kB signaling does not induce or coincide with a shift towards inhibitory isoforms as reported for myeloid cells regarding MyD88s (isoform 3). Rather, the canonical, signaling-competent isoform 2 dominates
Novel MyD88 Isoforms With TIR Truncation in B Cells Are Supportive of NF-kB Signaling
In the process of RNAseq analysis we noticed additional alternative splicing events, namely either usage of another donor splice site within the exon 3 (leading to isoforms 6 and 7) or the retention of the exon 3-4 intron (here termed isoform 8), see Figures 2D, 4A, B, Figures S4A, B and Table 1. The novel splice site within exon 3 (20 nt upstream of a canonical donor) showed a Human Splicing Finder (HSF) score of 81. Typically, a score above 65 is considered a strong splice site (37), indicating these additional splicing events are highly plausible. This alternative donor site leads to a premature STOP codon and thus results in additional isoforms with a truncated TIR domain ( Figures 4A, B and S4A, B), which have not been reported so far. When expression constructs corresponding to isoforms 6-8 were transfected into HEK293T cells, proteins of the expected size (29 kDa for isoform 6, 24 kDa isoform 7 and 26 kDa for isoform 8; plus 6 kDa from the StrepHA-tag) were detectable ( Figure 4C and Table 1). The isoform 8 construct was generated from an hypothetical sequence, which was confirmed by sequencing BJAB amplification product upon PCR using specific primers (Table S1 and Figure S4B). To gain an insight into their ability to signal to NF-kB, we performed NF-kB dual luciferase assays in normal HEK293T and I3A cells as before. Evidently, isoforms 6 and 8 were able to induce downstream NF-kB activation in HEK293T cells, whereas isoform 7 did not ( Figures 4D, E). Isoform 6-8 transcripts were also detectable in the lymphoma samples (Figures 4F-H) and, as with the other non-canonical isoforms, they were significantly less abundant in lymphoma cells vs naive B cells. In the 289 RNA-seq samples of the ICGC Chronic Lymphocytic Leukemia (CLL) dataset, 7 isoforms could be readily detected and quantified, with the canonical isoform showing the highest relative abundance, followed by isoform 6, while isoform 5 showed the lowest abundance ( Figure 4I). Furthermore, there were noticeable reads mapping to the exon 3-4 intron ( Figure S4C) confirming isoform 8 in CLL. Additionally, we could also detect isoform 8 in primary B cells ( Figure S4D Table 1 and hypothetical sequence for isoform 8. (C-E) HEK293T cells were transfected with plasmids for different MYD88 isoforms and lysates analyzed for expression or pathway activation by immunoblot (C, n=2) or NF-kB dual luciferase assay (D, E n=3), respectively. (E) as in D but using MyD88deficient I3A cells (n=3). (F-H) RNAseq analysis from untransformed B cells or lymphoma samples (n=as indicated in Figure 2E). Intron retention presented as relative number of splice reads using the acceptor splice site of exon 4 (G) or coverage of intron 3 compared to mean of flanking exons 3 and 4 (H). (I) RNAseq analysis from CLL samples (n=289). In C-E one representative of 'n' technical replicates is shown, for D, E, as mean + SD from three repeats. F-I represent combined data (Tukey box and whiskers) from 'n' biological replicates (each dot represents one replicate). ns, non-significant; * or # = p<0.05 according to two-way ANOVA comparing to isoform 2 (D, E) or Wilcoxon Mann-Whitney (F-I) in comparison to naïve B cells (*, F-H) and to GCB cells (#, F-H) or isoform 2 (I).
hMoMacs ( Figure S4E) by RT-qPCR. Interestingly, TLR4 stimulation in hMoMacs significantly enhanced the mRNA levels of isoform 8, another signaling competent form (cf. Figures 4D,E). All eight MYD88 splice isoforms were also detectable in non-immune cells, as verified in a publicly available RNAseq dataset (31) for ovarian cancer ( Figure S5). On the whole, there are 3 additional splice isoforms of MyD88 with truncated TIR domains out of which two, unexpectedly, can support signaling upon overexpression, similar to the canonical MyD88 isoform. This extended analysis highlights an even higher diversity of splice variants emanating from the MYD88 oncogene than previously thought. Furthermore, splicing in B cell lymphomas appears to strongly favor the canonical MYD88 isoform without diverting splicing events to alternative or signaling-incompetent splice isoforms. Importantly, we find no evidence for a significant induction of MyD88s (isoform 3) as a restrictor of TLR pathway activity.
DISCUSSION
Alternative splicing has emerged as a frequent phenomenon employed for fine-tuning or regulating signaling pathways and plays a pivotal role in the adaptive immune system (38,39). However, decisive regulators of innate immune pathways have also been subject to alternative splicing: Since its discovery in 2002, the induction of MyD88s via NF-kB signaling loop has been viewed as a classical example of an inflammation-restricting negative feedback loop in innate immunity (17,27). Hence, all the numerous subsequent studies on MyD88 splicing have exclusively focused on this isoform (23,24,(40)(41)(42) and have been largely limited to myeloid cells, primarily in the murine system.
We here provide a comprehensive characterization of all currently reported human MYD88 splice isoforms. This includes the novel isoforms 6-8, which are the only variants to contain partial TIR domains. During the course of this analysis, isoforms 6 and 7 were added to Genebank but had not been confirmed or studied in detail. Isoform 8 is a novel and surprisingly frequent splicing event not reported before and found abundantly in naïve B cells. Our analysis suggests that, with the exception of isoforms 3 (MyD88s), 5 and 7, isoforms (4, 6 and 8) may induce downstream NF-kB activity in overexpression assays. Whether they can nucleate or engage in the Myddosome in response to TLR signaling in the absence of a complete TIR domain remains to be studied. Potentially, isoforms 4, 6 and 8 may also be signaling incompetent. Thus, all MYD88 splice isoforms, except isoforms 1 and 2, may lead to dysfunctional MyD88 proteins. This would make our observations made on transcript levels even more striking as then none of the alternative splicing events would be able to counteract constitutive NF-kB signaling via isoform 2. Consequently, the oncogenic influence of isoform 2 is likely to be even more dominant.
Furthermore, we show that MYD88 splicing is much more multi-faceted than previously reported: Our data indicate that whereas normal B cells use a richer repertoire of splice isoforms, the transformed status rather displays a reduced diversity and appears to lack alternative splice events. The reason for this is unknown but our data warrant a further investigation in additional cohorts and entities, e.g. Waldemström's macroglobulinemia, in future. Based on our data it appears that the preference for canonical isoform 2 and thus unrestricted NF-kB signaling may be favored in the oncogenic process. BCL2, BCL6 or TP53-driven lymphomas, which have an indirect effect on the NF-kB signature, showed lower levels of canonical MYD88 and higher levels of isoform 1 and isoform 4, compared to MyD88-like lymphomas ( Figures 2F and S2E). This fits well with the observation that the gain-of-function mutation, L265P, leads to extended NF-kB hyperactivation and is a hallmark of oncogenic B cells (7,8). Of note, our data indicate that B cells lack a sustained negative feedback mechanism of MyD88s induction to rescue mutated cells from MyD88driven oncogenesis: For example, TLR stimulation induced MyD88s in TLR-stimulated hMoMacs and B cells at short time points, but MyD88s was not prominently expressed or regulated under the extended presence of NF-kB stimuli in B cells and lymphoma cell lines. Thus, B cells with increased NF-kB activity, due to L265P mutation or other mechanisms, cannot get "reigned in" (controlled) via MyD88s expression, unlike some myeloid cells, then continued NF-kB pro-survival activity may result ( Figure 5). Our data thus provide an explanation why oncogenic mutations have only been reported in B cell lymphoma, rather than tumors arising from myeloid cells, whose MyD88s induction loop possibly renders them more resistant to MyD88 pathway induced NF-kB activity.
Our observations that alternative splicing of genes in the MyD88 dependent pathway are important candidates in oncogenesis agree with the recent description of oncogenic IRAK4 isoforms, albeit in myeloid malignancies (43). It is intriguing to speculate whether the aforementioned negative feedback loop, that is absent in proliferative B cells, prevents MYD88 mutations from manifesting themselves, but does not prevent oncogenic signaling arising from the next downstream pathway member, IRAK4. Undoubtedly, with the availability of powerful sequencing techniques the analysis of alternative splice isoforms of MyD88 pathway members for discovering novel nonmutational cancer drivers is both possible and warranted. In the substantial percentage of cases without druggable driver mutations this may offer opportunities for targeting e.g., via antisense oligonucleotide-mediated exon skipping (44,45). In this therapeutic sense, MyD88s or the other signaling incompetent isoforms described here may provide a blueprint for such an approach in B cell lymphomas.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics committee of the Medical Faculty, University of Tübingen. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
YC, O-OW and SD performed experiments. YC, SB, SF, SN, SD, JA, and SO analyzed data. RS and SO were involved in sample collection. YC and AW conceived and AW supervised the entire study. YC and AW wrote the manuscript and all authors provided additions and comments to the manuscript. All authors contributed to the article and approved the submitted version. | 2021-06-07T13:20:18.604Z | 2021-06-07T00:00:00.000 | {
"year": 2021,
"sha1": "62cbc66eda08b54248992625bdcac7eb6cc015c7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.616451/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62cbc66eda08b54248992625bdcac7eb6cc015c7",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255372253 | pes2o/s2orc | v3-fos-license | Confidence Sets under Generalized Self-Concordance
This paper revisits a fundamental problem in statistical inference from a non-asymptotic theoretical viewpoint $\unicode{x2013}$ the construction of confidence sets. We establish a finite-sample bound for the estimator, characterizing its asymptotic behavior in a non-asymptotic fashion. An important feature of our bound is that its dimension dependency is captured by the effective dimension $\unicode{x2013}$ the trace of the limiting sandwich covariance $\unicode{x2013}$ which can be much smaller than the parameter dimension in some regimes. We then illustrate how the bound can be used to obtain a confidence set whose shape is adapted to the optimization landscape induced by the loss function. Unlike previous works that rely heavily on the strong convexity of the loss function, we only assume the Hessian is lower bounded at optimum and allow it to gradually becomes degenerate. This property is formalized by the notion of generalized self-concordance which originated from convex optimization. Moreover, we demonstrate how the effective dimension can be estimated from data and characterize its estimation accuracy. We apply our results to maximum likelihood estimation with generalized linear models, score matching with exponential families, and hypothesis testing with Rao's score test.
INTRODUCTION
The problem of statistical inference on learned parameters is regaining the importance it deserves as machine learning and data science are increasingly impacting humanity and society through an increasingly large range of successful applications from transportation to healthcare [see, e.g., 15,14]. The classical asymptotic theory of M-estimation is well established in a rather general setting under the assumption that the parametric model is well-specified, i.e., the underlying data distribution belongs to the parametric family. Two types of confidence sets can be constructed from this theory: (a) the Wald-type one which relies on the weighted difference between the estimator and the target parameter, and (b) the likelihood-ratio-type one based on the log-likelihood ratio between the estimator and the target parameter. The main tool is the local asymptotic normality (LAN) condition introduced by Le Cam [26]. We mention here, among many of them, the monographs [22,41,40].
In many real problems, the parametric model is usually an approximation to the data distribution, so it is too restrictive to assume that the model is well-specified. To relax this restriction, model misspecification has been considered in the asymptotic regime; see, e.g., [19,45,13]. Another limitation of classical asymptotic theory is its asymptotic regime where n → ∞ and the parameter dimension d is fixed. This is inapplicable in the modern context where the data are of a rather high dimension involving a huge number of parameters.
The non-asymptotic viewpoint has been fruitful to address high dimensional problems-the results are developed for all fixed n so that it also captures the asymptotic regime where d grows with n. Early works in this line of research focus on specific models such as Gaussian models [7,8,25,5], ridge regression [18], logistic regression [3], and robust M-estimation [52,12]; see Bach [4] for a survey. Spokoiny [36] addressed the finite-sample regime in full generality in a spirit similar to the classical LAN theory. The approach of [36] relies on heavy empirical process machinery and requires strong global assumptions on the deviation of the empirical risk process. More recently, Ostrovskii and Bach [32] focused on risk bounds, specializing their discussion to linear models with (pseudo) self-concordant losses and obtained a more transparent analysis under neater assumptions.
A critical tool arising from this line of research is the so-called Dikin ellipsoid, a geometric object identified in the theory of convex optimization [31,6,9,39,11,10]. The Dikin ellipsoid corresponds to the distance measured by the Euclidean distance weighted by the Hessian matrix at the optimum. This weighted Euclidean distance is adapted to the geometry near the target parameter and thus leads to sharper bounds that do not depend on the minimum eigenvalue of we have, when p = p θ0 , that θ 0 ∈ arg min θ∈Θ L(θ). In fact, when q θ > 0 and there is no θ such that p θ a.s.
Empirical risk minimization. Assume now that we have an i.i.d. sample {Z i } n i=1 from P. To learn the parameter θ from the data, we minimize the empirical risk to obtain the empirical risk minimizer θ n ∈ arg min θ∈Θ L n (θ) : This applies to both maximum likelihood estimation and score matching estimation. In Sec. 3, we will prove that, with high probability, the estimator θ n exists and is unique under a generalized self-concordance assumption. Confidence set. In statistical inference, it is of great interest to quantify the uncertainty in the estimator θ n . In classical asymptotic theory, this is achieved by constructing an asymptotic confidence set. We review here two commonly used ones, assuming the model is well-specified. We start with the Wald confidence set. It holds that n(θ n − θ ) H n (θ n )(θ n − θ ) → d χ 2 d , where H n (θ) := ∇ 2 L n (θ). Hence, one may consider a confidence set {θ : n(θ n − θ) H n (θ n )(θ n − θ) ≤ q χ 2 d (δ)} where q χ 2 d (δ) is the upper δ-quantile of χ 2 d . The other is the likelihoodratio (LR) confidence set constructed from the limit 2n[L n (θ ) − L n (θ n )] → d χ 2 d , which is known as the Wilks' theorem [46]. These confidence sets enjoy two merits: 1) their shapes are an ellipsoid (known as the Dikin ellipsoid) which is adapted to the optimization landscape induced by the population risk; 2) they are asymptotically valid, i.e., their coverages are exactly 1 − δ as n → ∞. However, due to their asymptotic nature, it is unclear how large n should be in order for it to be valid.
Non-asymptotic theory usually focuses on developing finite-sample bounds for the excess risk, i.e., P(L(θ n ) − L(θ ) ≤ C n (δ)) ≥ 1 − δ. To obtain a confidence set, one may assume that the population risk is twice continuously differentiable and λ-strongly convex. Consequently, we have λ θ n − θ 2 2 /2 ≤ L(θ n ) − L(θ ) and thus we can consider the confidence set C finite,n (δ) := {θ : θ n − θ 2 2 ≤ 2C n (δ)/λ}. Since it originates from a finite-sample bound, it is valid for fixed n, i.e., P(θ ∈ C finite,n (δ)) ≥ 1 − δ for all n; however, it is usually conservative, meaning that the coverage is strictly larger than 1 − δ. Another drawback is that its shape is a Euclidean ball which remains the same no matter which loss function is chosen. We illustrate this phenomenon in Fig. 1. Note that a similar observation has also been made in the bandit literature [16].
We are interested in developing finite-sample confidence sets. However, instead of using excess risk bounds and strong convexity, we construct in Sec. 3 the Wald and LR confidence sets in a non-asymptotic fashion, under a generalized self-concordance condition. These confidence sets have the same shape as their asymptotic counterparts while maintaining validity for fixed n. These new results are achieved by characterizing the critical sample size enough to enter the asymptotic regime.
Preliminaries
Notation. We denote by S(θ; z) := ∇ θ (θ; z) the gradient of the loss at z and H(θ; z) := ∇ 2 θ (θ; z) the Hessian at z. Their population versions are S(θ) := E[S(θ; Z)] and H(θ) := E[H(θ; Z)], respectively. We assume standard regularity assumptions so that S(θ) = ∇ θ L(θ) and H(θ) = ∇ 2 θ L(θ). We write H := H(θ ). Note that the two optimality conditions then read S(θ ) = 0 and H 0. It follows that λ := λ min (H ) > 0 and λ := λ max (H ) > 0. Furthermore, we let G(θ; z) := S(θ; z)S(θ; z) and G(θ) := E[S(θ; Z)S(θ; Z) ] be the autocorrelation matrices of the gradient. We write G := G(θ ). We define their empirical quantities as L n (θ) : The first step of our analysis is to localize the estimator to a Dikin ellipsoid at θ of radius r, i.e., where, given a positive semi-definite matrix J, we let x J := J 1/2 x 2 = √ x Jx. Effective dimension. A quantity that plays a central role in our analysis is the effective dimension. Definition 1. We define the effective dimension to be The effective dimension appears recently in non-asymptotic analyses of (penalized) M-estimation; see, e.g., [37,32]. It better characterizes the complexity of the parameter space Θ than the parameter dimension d. When the model is well-specified, it can be shown that H = G and thus d = d. When the model is misspecified, it can be much smaller than d depending on the spectra of H and G . Moreover, it is closely connected to classical asymptotic theory of M-estimation under model misspecification-it is the trace of the limiting covariance matrix of √ nH n (θ n ) 1/2 (θ n − θ ); see Sec. 3.5 for a thorough discussion. Generalized self-concordance. We will use the notion of self-concordance from convex optimization in our analysis. Self-concordance originated from the analysis of the interior-point and Newton-type convex optimization methods [31]. It was later modified by Bach [3], which we call the pseudo self-concordance, to derive finite-sample bounds for the generalization properties of the logistic regression. Recently, Sun and Tran-Dinh [38] proposed the generalized selfconcordance which unifies these two notions. For a function f : similarly. Definition 2 (Generalized self-concordance). Let X ⊂ R d be open and f : X → R be a closed convex function. For R > 0 and ν > 0, we say f is (R, ν)-generalized self-concordant on X if with the convention 0/0 = 0 for the case ν < 2 and ν > 3. Recall that u 2 ∇ 2 f (x) := u ∇ 2 f (x)u. Remark. When ν = 2 and ν = 3, this definition recovers the pseudo self-concordance and the standard selfconcordance, respectively.
In contrast to strong convexity which imposes a gross lower bound on the Hessian, generalized self-concordance specifies the rate at which the Hessian can vary, leading to a finer control on the Hessian. Concretely, it allows us to bound the Hessian in a neighborhood of θ with the Hessian at θ , which is key to controlling H n (θ n ). We illustrate the difference between them in Fig. 2. As we will see in Sec. 3.3, thanks to the generalized self-concordance, we are able to remove the direct dependency on λ in our confidence set. To the best of our knowledge, this is the first work extending classical results for M-estimation to generalized self-concordant losses. Concentration of Hessian. One key result towards deriving our bounds is the concentration of empirical Hessian, i.e., (1 − c n (δ))H(θ) H n (θ) (1 + c n (δ))H(θ) with probability at least 1 − δ. When the loss function is of the form (θ; z) := (y, θ x) (e.g., GLMs), the empirical Hessian reads H n (θ) = n −1 n i=1 (Y i , θ X i )X i X i where (y,ȳ) := d 2 (y,ȳ)/dȳ 2 , which is of the form of a sample covariance. Assuming X to be sub-Gaussian, Ostrovskii and Bach [32] obtained a concentration bound for H n (θ ) with c n (δ) = O( (d + log (1/δ))/n) via the concentration bound for sample covariance [42,Thm. 5.39]. For general loss functions, such a special structure cannot be exploited. We overcame this challenge by the matrix Bernstein inequality [44,Thm. 6.17], obtaining a sharper concentration bound with c n (δ) := O( log (d/δ)/n). Note that the matrix Bernstein inequality has been used to control the empirical Hessian of kernel ridge regression with random features [34,Prop. 6] and later extended to regularized empirical risk minimization [28,Lem. 30]. However, their results require the regularization parameter to be strictly positive (otherwise the bounds are vacuous) and the sample Hessian to be bounded. On the contrary, our technique allows for zero regularization and unbounded Hessian as long as the Hessian satisfies a matrix Bernstein condition. Moreover, combining generalized self-concordance with matrix Bernstein, we are able to show the concentration of H n (θ n ) around H for general losses, which is itself a novel result.
Assumptions
Our key assumption is the generalized self-concordance of the loss function.
Many loss functions in statistical machine learning satisfy this assumption. We give in Sec. 4.1 examples from generalized linear models and score matching.
In order to control the empirical gradient S n (θ), we assume that the normalized gradient at θ is sub-Gaussian.
When the loss function is of the form (θ; z) = (y, θ x), we have S(θ; Z) = (Y, θ X)X. As a result, Asm. 2 holds true if (i) (Y, θ X) is sub-Gaussian and X is bounded or (ii) (Y, θ X) is bounded and X is sub-Gaussian. For least squares with (y, θ x) = 1 2 (y − θ x) 2 , the derivative (Y, θ X) = θ X − Y is the negative residual. Asm. 2 is guaranteed if the residual is sub-Gaussian and X is bounded. For logistic regression with (y, In order to control the empirical Hessian, we assume that the Hessian of the loss function satisfies the matrix Bernstein condition in a neighborhood of θ . Assumption 3 (Matrix Bernstein of Hessian). There exist constants K 2 , r > 0 such that, for any θ ∈ Θ r (θ ), the standardized Hessian H(θ) −1/2 H(θ; Z)H(θ) −1/2 − I d satisfies a Bernstein condition (defined in Appx. C) with parameter K 2 . Moreover, where · 2 is the spectral norm and Var(J) . By convention, we let Θ 0 (θ ) = {θ }.
Main Results
We now give simplified versions of our main theorems. We use C ν to represent a constant depending only on ν that may change from line to line; and C K1,ν similarly. We use and to hide constants depending only on K 1 , K 2 , σ H , ν.
. Under Asms. 1 to 3 with r = 0, it holds that, whenever the empirical risk minimizer θ n uniquely exists and satisfies, with probability at least 1 − δ, With a local matrix Bernstein condition, we can replace H by H n (θ n ) in (2) and obtain a finite-sample version of the Wald confidence set. 3). Suppose the same assumptions in Thm. 1 hold true. Furthermore, suppose that Asm. 3 holds with r = C ν λ Then we have P(θ ∈ C Wald,n (δ)) ≥ 1 − δ whenever Remark. In the precise versions of Thms. 1 and 2, the term d log (e/δ) in the bounds (2) Thm. 2 suggests that the tail probability of θ n − θ 2 Hn(θn) is governed by a χ 2 distribution with d degrees of freedom, which coincides with the asymptotic result. In fact, according to Huber [19], under suitable regularity assumptions, it holds that This induces an asymptotic confidence set with a similar form of (3) and radius O( Our result characterizes the critical sample size enough to enter the asymptotic regime. From Thm. 2 we can also derive a finite-sample version of the LR confidence set. Corollary 3. Let ν ∈ [2, 3). Suppose the same assumptions in Thm. 2 hold true. Let C LR,n (δ) be Then we have P(θ ∈ C LR,n (δ)) ≥ 1 − δ whenever We give the proof sketches of Thm. 1, Thm. 2, and Cor. 3 here and defer their full proofs to Appx. A. We discuss in Sec. 3.5 how our proof techniques and theoretical results complement and improve on previous works.
We start by showing the existence and uniqueness of θ n . The next result shows that θ n exists and is unique whenever the quadratic form S n (θ ) H −1 n (θ )S n (θ ) is small. Note that this quantity is also known as Rao's score statistic for goodness-of-fit testing. This result also localizes θ n to a neighborhood of the target parameter θ .
, then the estimator θ n uniquely exists and satisfies . The main tool used in the proof of Prop. 4 is a strong convexity type result for generalized self-concordant functions recalled in Appx. C. In order to apply Prop. 4, we need to control S n (θ ) H −1 n (θ ) . This result is summarized in the following proposition.
The proof of Prop. 5 consists of two steps: (a) lower bound H n (θ ) by H up to a constant using the Bernstein inequality and (b) upper bound S n (θ ) H −1 (θ ) using a concentration inequality for isotropic random vectors, where the tools are recalled in Appx. C. Combining them implies that S n (θ ) H −1 (θ ) can be arbitrarily small and thus satisfies the requirement in Prop. 4 for sufficiently large n. This not only proves the existence and uniqueness of the empirical risk minimizer θ n but also provides an upper bound for θ n − θ Hn(θ ) through S n (θ ) H −1 n (θ ) . In order to prove Thm. 2, it remains to upper bound H n (θ n ) by H up to a constant factor. This can be achieved by the following result. Proposition 6. Under Asms. 1 and 3 with r = C ν λ (ν−3)/2 /R, it holds that, with probability at least 1 − δ, Finally, Cor. 3 follows from Thm. 2 and the Taylor expansion: there existsθ n ∈ Conv{θ n , θ } such that where we have used ∇L n (θ n ) = 0.
Approximating the effective dimension
One downside of Thm. 2 and Cor. 3 is that d depends on the unknown data distribution. Alternatively, we use the following empirical counterpart The next result implies that we do not lose much if we replace d by d n . This result is novel and of independent interest since one also needs to estimate d in order to construct asymptotic confidence sets under model misspecification.
Remark. Asm. 4 is a Lipschitz-type condition for G(θ; z). This assumption was previously used by [29, Assumption 3] to analyze non-convex risk landscapes.
with probability at least 1 − δ, whenever n is large enough (see Appx. A.3 for the precise condition).
Remark. The precise version of P rop. 7 in Appx. A.3 implies that d n is a consistent estimator of d.
With Prop. 7 at hand, we can obtain finite-sample confidence sets involving d n , which can be computed from data. We illustrate it with the Wald confidence set.
Then we have P(θ ∈ C Wald,n (δ)) ≥ 1 − δ whenever n satisfies the same condition as in Prop. 7. Under a well-specified model, it also coincides with the Hessian matrix H(θ) at the optimum which captures the local curvature of the population risk. When the model is misspecified, the Fisher information deviates from the Hessian matrix. In the asymptotic regime, this discrepancy is reflected in the limiting covariance of the weighted M-estimator which admits a sandwich form H −1/2 G H −1/2 ; see, e.g., [19,Sec. 4].
Discussion
Effective dimension. The counterpart of the sandwich covariance in the non-asymptotic regime is the effective dimension d ; see, e.g., [37,32]. Our bounds also enjoy the same merit-its dimension dependency is via the effective dimension. When the model is well-specified, the effective dimension reduces to d, recovering the same rate of convergence O(d/n) as in classical linear regression; see, e.g., [4,Prop. 3.5]. When the model is misspecified, the effective dimension provides a characterization of the problem complexity which is adapted to both the data distribution and the loss function via the matrix H −1/2 G H −1/2 . To gain a better understanding of the effective dimension d , we summarize it in Tab. 3 in Appx. A under different regimes of eigendecay, assuming that G and H share the same eigenvectors. It is clear that, when the spectrum of G decays faster than the one of H , the dimension dependency can be better than O(d). In fact, it can be as good as O(1) when the spectrum of G and H decay exponentially and polynomially, respectively. Comparison to classical asymptotic theory. Classical asymptotic theory of M-estimation is usually based on two assumptions: (a) the model is well-specified and (b) the sample size n is much larger than the parameter dimension d. These assumptions prevent it from being applicable to many real applications where the parametric family is only an approximation to the unknown data distribution and the data is of high dimension involving a large number of parameters. On the contrary, our results do not require a well-specified model, and the dimension dependency is replaced by the effective dimension d which captures the complexity of the parameter space. Moreover, they are of non-asymptotic nature-they hold true for any n as long as it exceeds some constant factor of d . This allows the number of parameters to potentially grow with the same size.
Comparison to recent non-asymptotic theory. Recently, Spokoiny [36] achieved a breakthrough in finite-sample analysis of parametric M-estimation. Although fully general, their results require strong global assumptions on the deviation of the empirical risk process and are built upon advanced tools from empirical process theory. Restricting ourselves to generalized self-concordant losses, we are able to provide a more transparent analysis with neater assumptions only in a neighborhood of the optimum parameter θ . Moreover, our results maintain some generality, covering several interesting examples in statistical machine learning as provided in Sec. 4.1.
Ostrovskii and Bach [32] also considered self-concordant losses for M-estimation. However, their results are limited to generalized linear models whose loss is (pseudo) self-concordant and admits the form (θ; Z) := (Y, θ X). While sharing the same rate O(d /n), our results are more general than theirs in two aspects. First, the loss need not be of the form (Y, θ X), encompassing the score matching loss in Ex. 4 below. Second, we go beyond pseudo self-concordance via the notion of generalized self-concordance. Moreover, they focus on bounding the excess risk rather than providing confidence sets, and they do not study the estimation of d .
Pseudo self-concordant losses have been considered for semi-parametric models [27]. However, they focus on bounding excess risk and require a localization assumption on θ n . Here we prove the localization result in Prop. 4 and we focus on confidence sets. Regularization. Our results can also be applied to regularized empirical risk minimization by including the regularization term in the loss function. Let θ λ n and θ λ be the minimizers of the regularized empirical and population risk, respectively. Let d λ := Tr (H λ ) −1/2 G λ (H λ ) −1/2 where H λ and G λ are the regularized Hessian and the autocorrelation matrix of the regularized gradient at θ λ , respectively. Then our results characterize the concentration of θ λ n around θ λ : This result coincides with Spokoiny [37, Thm. 2.1]. If the goal is to estimate the unregularized population risk minimizer θ , then we need to pay an additional error θ λ − θ 2 H λ which is referred to as the modeling bias [37, Sec. 2.5]. One can invoke a so-called source condition to bound the modeling bias and a capacity condition to bound d λ . An optimal value of λ can be obtained by balancing between these two terms [see, e.g., 28].
For instance, let Z :
EXAMPLES AND APPLICATIONS
We give several examples whose loss function is generalized self-concordant so that our results can be applied. We also provide finite-sample analysis for Rao's score test, the likelihood ratio test, and the Wald test in goodness-of-fit testing. All the proofs and derivations are deferred to Appx. B.
Examples
Example 3 (Generalized linear models). Let Z := (X, Y ) be a pair of input and output, where X ∈ X ⊂ R d and Y ∈ Y ⊂ R. Let t : X × Y → R d and µ be a measure on Y. Consider the statistical model which is generalized self-concordant for ν = 2 and R = 2M . Moreover, this model satisfies Asms. 2 to 4 and 2'.
Example 4 (Score matching with exponential families). Assume that Z = R p . Consider an exponential family on R d with densities The non-normalized density q θ then reads log q θ (z) = θ t(z) + h(z). As a result, the score matching loss becomes Therefore, the score matching loss (θ; z) is convex. Moreover, since the third derivatives of (·; z) is zero, the score matching loss is generalized self-concordant for all ν ≥ 2 and R ≥ 0.
Rao's Score Test and Its Relatives
We discuss how our results can be applied to analyze three classical goodness-of-fit tests. In this subsection, we will assume that the model is well-specified. Due to Asmp. 0, we will use θ to denote the true parameter of P and reserve θ 0 for the parameter under the null hypothesis. Given a subset Θ 0 ⊂ Θ, a goodness-of-fit testing problem is to test the hypotheses We focus on a simple null hypothesis where Θ 0 := {θ 0 } is a singleton. A statistical test consists of a test statistic T := T (Z 1 , . . . , Z n ) and a prescribed critical value t, and we reject the null hypothesis if T > t. Its performance is quantified by the type I error rate P(T > t | H 0 ) and statistical power P(T > t | H 1 ). Classical goodness-of-fit tests include Rao's score test, the likelihood ratio test (LRT), and the Wald test. Their test statistics are Hn(θn) , respectively. Our approach can be applied to analyze the type I error rate of these tests as summarized in the following proposition. When θ * − θ n = ω(n −1/2 ), we have (b) Suppose that the assumptions in Thm. 2 hold true. When θ − θ 0 = O(n −1/2 ) and τ n : When θ * − θ n = ω(n −1/2 ), we have (c) The same statements replacing T LR by T Wald .
NUMERICAL STUDIES
We run simulation studies to illustrate our theoretical results. We start by demonstrating the consistency of d n and the shape of the Wald confidence set defined in Cor. 8, i.e., Note that the oracle Wald confidence set should be constructed from θ n − θ H and d ; however, Cor. 8 suggests that we can replace H and d by H n (θ n ) and d n without losing too much. To empirically verify our theoretical results, we calibrate the Wald confidence set based on θ n − θ Hn(θn) with the threshold from the oracle Wald confidence set and compare its coverage with the one calibrated by the multiplier bootstrap-a popular resampling-based approach for calibration. Finally, we compare the coverage of the Wald and LR confidence sets calibrated by the multiplier bootstrap.
In all the experiments, we generate n i.i.d. pairs by sampling X and then sampling Y | X.
Numerical Illustrations
Approximation of the effective dimension. By Prop. 7, we know that d n is a consistent estimator of d . We verify it with simulations. We consider two models. For least squares, the data are generated from X ∼ N (0, I d ) and Y |X ∼ N (1 X, 1). For logistic regression, the data are generated from X ∼ N (0, We then estimate d = d (since the model is well-specified) by d n and quantify its estimation error by E |d n /d − 1|. We vary n ∈ [2000, 10000] and d ∈ {5, 10, 15, 20}, and give the plots in Fig. 3. For a fixed d, the absolute error decays to zero as the sample size increases as predicted by Prop. 7. For a fixed n, the absolute error raises as the dimension becomes larger in logistic regression, but it remains similar in least squares.
Shape of the Wald confidence set. Recall that the Wald confidence set in Thm. 2 is an ellipsoid whose shape is determined by the empirical Hessian H n (θ n ) and thus can effectively handles the local curvature of the empirical risk. We illustrate this feature on a logistic regression example. We generate data from X ∼ N (0, Σ) with different Σ's and . We then construct the confidence set with d = d. As shown in Fig. 4, the shape of the confidence set varies with Σ and captures the curvature of the empirical risk at θ 0 .
Calibration
We investigate two calibration schemes. Inspired by the setting in Chen and Zhou [12, Sec. 5.1], we generate n = 100 i.i.d. observations from three models with true parameter θ 0 whose elements are equally spaced between [0, 1]-1) well-specified least squares with X ∼ N (0, I d ) and Y | X ∼ N (θ 0 X, 1), 2) misspecified least squares with X ∼ N (0, I d ) and Y | X ∼ θ 0 X + t 3.5 , and 3) well-specified logistic regression with X ∼ N (0, I d ) and H n (θ n ) and d n , respectively, leading the confidence set C n (δ) := {θ : θ n − θ Hn(θn) ≤ d n /n + c n (δ)}. To calibrate C n (δ), we use the data generating distribution to estimate c n (δ) so that P(θ ∈ C (δ)) ≈ 1 − δ, and then plug it into C n (δ). We call it the oracle Wald confidence set. As shown in Tab. 2, its coverage is very close to the prescribed confidence level in the well-specified case and it tends to be more conservative in the misspecified case.
Multiplier bootstrap. To further evaluate the oracle calibration, we compare its coverage with the one calibrated by the multiplier bootstrap [e.g., 12]-a popular resampling-based calibration approach that is widely used in practice. We Wald } B b=1 to decide if the Wald confidence set covers the true parameter. It is clear that the bootstrap Wald confidence set performs similarly as the oracle Wald confidence set in least squares, but it is more liberal in logistic regression.
For comparison purposes, we also describe the procedure to construct a bootstrap likelihood ratio confidence set (BootLR). The first two steps are the same as the bootstrap Wald confidence set, while the third step is to compute the bootstrap LR statistic to decide if the bootstrap LR confidence set covers the true parameter. For the well-specified least squares, the two bootstrap confidence sets perform similarly with coverages close to the target ones. However, when the target coverage is small (i.e., 0.75), they tend to be liberal. For the misspecified least squares, the bootstrap two confidence sets perform similarly. When the target coverage is large, they tend to be conservative; when the target coverage is small, they tend to be liberal. For the well-specified logistic regression, the bootstrap Wald confidence set tends to be liberal and the bootstrap LR one tends to be conservative.
A Proof of main results
Our proof techniques rely on a self-concordance property to localize the estimator and control the Hessian and related quantities. This property was, up to our knowledge, first put to use in machine learning by Abernethy et al. [1] in the context of sequential allocation of experiments and multi-armed bandits. The key observation is that, within the Dikin ellipsoid, the variation of the Hessian can be easily controlled. More recently, Ostrovskii and Bach [32] obtained risk bounds for generalized linear models based on this observation. Our results and proof techniques also rely on this observation. We show how to leverage this observation to obtain confidence sets for a broad class of statistical models under a generalized self-concordance assumption owing to the use of the matrix Bernstein inequality. For instance, we obtain confidence bounds for parameter estimation using score matching and generalized linear statistical models under possible model misspecification as provided in Sec. 4.
Our proofs are inspired by Ostrovskii and Bach [32]. However, there are two key differences. First, since they focus on loss functions of the form (Y, θ X), the Hessian is (Y, θ X)XX where (y,ȳ) := d 2 (y,ȳ)/dȳ 2 . As a result, they can control the deviation of the empirical Hessian using inequalities for sample second-moment matrices of sub-Gaussian random vectors [32,Thm. A.2]. In contrast, we use matrix Bernstein inequality which allows us to work with a larger class of loss functions. Second, we extend their localization result from pseudo self-concordant losses to generalized self-concordant losses (Prop. 4). This is enabled by a new property on the existence of a unique minimizer for generalized self-concordant functions (Prop. 20). We also establish the concentration of the effective dimension.
In the remainder of this section, we first prove the localization result Prop. 4 and the score bound Prop. 5 in Appx. A.1. It not only guarantees the existence and uniqueness of θ n but also localizes it. We then, in Appx. A.2, control the empirical Hessian at θ n as in Prop. 6 using a covering number argument. Finally, we prove Thm. 1, Thm. 2, and Prop. 7.
We use the notation C to denote a constant which may change from line to line, where subscripts are used to emphasize the dependency on other quantities. For instance, C d represents a quantity depending only on d.
A.1 Localization
We start by showing that the empirical risk L n is generalized self-concordant. Applying Prop. 20 to L n leads to the localization result. Let λ n, := λ min (H n (θ )) and λ n := λ min (H n (θ )). Recall K ν from Cor. 19. Define Proof. By the first order optimality condition, we have S(θ ) = 0. As a result, is an isotropic random vector. Moreover, it follows from Lem. 22 that X ψ2 K 1 . Define J := G 1/2 H −1 G 1/2 /n.
Then we have
Invoking Thm. 23 yields the claim.
The next result characterizes the concentration of H n (θ ). Let Note that it decays to 0 at rate O(n −1/2 ) as n → ∞.
A.2 Proof of the main theorems
Before we prove the main theorem, we control the empirical Hessian as in Prop. 6. A naïve approach is to invoke Lem. 13 to bound H n (θ) by H n (θ ). However, this would not work since the generalized self-concordance parameter of L n , i.e., n ν/2−1 R, is diverging as n → ∞. Hence, we use a covering number argument: 1) we take a covering with radius O(n 1−ν/2 ); 2) we bound H n (θ) by H n (π(θ)) where π(θ) is the projection of θ onto the covering. The factor n 1−ν/2 in the radius will cancel out with the factor n ν/2−1 in the generalized self-concordance parameter; 3) we bound H n (π(θ)) by H(π(θ)) using matrix concentration; 4) we bound H(π(θ)) by H(θ ) where the generalized self-concordance parameter of L is R. Recall t n from (7)
Step 2. Relate H n (θ) to H for all θ in the covering. Fix an arbitrary θ ∈ N τ . Following the same argument as Lem. 13, we have, with probability at least 1 − δ, It follows from Asm. 1 and Lem. 17 that since R ν θ − θ H ≤ ε ≤ K ν < 1. By the monotonicity of ω ν , we get and thus, with probability at least 1 − δ, Let s n := t n (τ R ν /3ε) d δ/2 and , by a union bound, we have P(A) ≥ 1 − δ.
We give below the precise version of Thm. 1. Recall K ν and R ν from Cor. 19 and (8).
. Under Asms. 1 to 3 with r = 0, we have, whenever , the empirical risk minimizer θ n uniquely exists and satisfies, with probability at least 1 − δ, Proof. Similar to the proof of Prop. 5, we define two events In the following, we let n max 4(K 2 + 2σ 2 H ) log (4d/δ), .
This implies, by Vershynin [43, Lemma 2.7.6], W i is sub-Exponential with W i ψ1 ≤ K 2 1 (1 + M r n ). It then follows from the Bernstein inequality that .
Step 3. Prove the bound on the event ABCDE. Following the same argument as Thm. 1, we obtain Using the event C, we have and thus Now it remains to control We first control G(θ n ) − G G −1 . It follows from (15) and (17) that We then control G n (θ n ) − G(θ n ) G −1 . By (17), we have It then follows from the triangle inequality that sup θ∈Θr n (θ ) G n (θ) − G n (π(θ)) G −1 .
This yields that
(1 − s n )G G n (θ n ) (1 + s n )G , and thus
A.4 Effective dimension
To gain a better understanding on the effective dimension d , we summarize it in Tab. 3 under different regimes of eigendecay, assuming that G and H share the same eigenvectors.
B Examples and applications
We give the derivations for the examples considered in Sec. 4.1 and prove the results for goodness-of-fit testing in Sec. 4.2.
B.1 Examples
Example 5 (Generalized linear models). Let Z := (X, Y ) be a pair of input and output, where X ∈ X ⊂ R τ and Y ∈ Y ⊂ R. Let t : X × Y → R d and µ be a measure on Y. Consider the statistical model for all x. It induces the loss function (θ; z) := − θ, t(x, y) + log exp( θ, t(x,ȳ) )dµ(ȳ).
We first verify Asm. 1, i.e., show that it is generalized self-concordant for ν = 2 and R = 2M . We denote by E Y |x the expectation w.r.t. p(y | x). Note that log θ, t(x,ȳ) dµ(ȳ) is the cumulant generating function. It follows from some computation that As a result, which completes the proof. We then verify Asm. 2 and Asmp. 2'. By Lem. 21, it suffices to show that S(θ ; Z) 2 is a.s. bounded. In fact, Since |t(X, Y )| 2 a.s.
The claim then follows from the example above and t(x, Y ) 2 = x 2 ≤ M .
Then we have Example 6 (Score matching with exponential families). Assume that Z = R p . Consider an exponential family on R d with densities The non-normalized density q θ then reads log q θ (z) = θ t(z) + h(z). As a result, the score matching loss becomes Therefore, the score matching loss (θ; z) is convex. Moreover, since the third derivatives of (·; z) is zero, the score matching loss is generalized self-concordant for all ν ≥ 2 and R ≥ 0. When the true distribution P is supported on the non-negative orthant R p + , the score matching loss does not apply. Fortunately, a generalized score matching [20,50] loss can be used to address this issue. Let w 1 , . . . , w m : R + → R + be functions that are absolutely continuous in every bounded sub-interval of R + . Then the generalized score matching loss reads which consists of a weighted version of the original score matching loss with weights {w j (x j )} d j=1 (the last two terms in (19)) and an additional term (the first term in (19)). According to [50,Theorem 5], the loss (19) admits a quadratic form: whereĀ(z) is p.s.d. Hence, it is generalized self-concordant. Note that a particular example is the pairwise graphical models studies in [48,49].
Example 7 (Generalized score matching with exponential families). When the true distribution P is supported on the non-negative orthant, R d + , the Hyvärinen score does not apply. Hyvärinen [20] proposed the non-negative score matching to address this issue, which is later generalized in [50, Section 2.2]. Let h 1 , . . . , h m : R + → R + be positive functions that are absolutely continuous in every bounded sub-interval of R + . Then the generalized Hyvärinen score reads which is a weighted version of the original Hyvärinen score with weights {h j (x j )} d j=1 (the last two terms in (20)) with an additional term (the first term in (20)).
We then consider an exponential family on R d + with densities log q θ (z) = θ t(z) − S(θ) + b(z).
According to [50,Theorem 5], the score (20) admits the quadratic form: where Γ(z) is p.s.d. Hence, this score is self-concordant. Note that a particular example is the pairwise graphical models studies in [48,49].
B.2 Applications to goodness-of-fit testing
Before we start, we note that a simple modification to the confidence bound in Thm. 2 leads to the following risk bound that can be utilized to analyze the likelihood ratio test. We then give the bounds for function values. Define two functions Proposition 16 (Sun and Tran-Dinh [38], Prop. 10). For any x, y ∈ dom(f ), we havē where it holds if d ν (x, y) < 1 for the case ν > 2.
In the following, we fix x ∈ dom(f ) and assume ∇ 2 f (x) 0. We denote λ min := λ min (∇ 2 f (x)) and λ max := λ max (∇ 2 f (x)). The next lemma bounds d ν (x, y) with the local norm y − x x . Let Lemma 17. For any ν ≥ 2 and y ∈ dom(f ), we have Moreover, it holds that where it holds if R ν y − x x < 1 for the case ν > 2.
Proof. Recall the definition of d ν in (25). If ν = 2, then, by the Cauchy-Schwarz inequality, The case ν > 2 can be proved similarly.
The next result shows that the local distance between the minimizer of f and x only depends on the geometry at x. It can be used to localize the empirical risk minimizer as in Prop. 4. Proposition 20. Whenever R ν ∇f (x) ∇ 2 f (x) −1 ≤ K ν , the function f has a unique minimizerx and Proof. Consider the level set Take an arbitrary y ∈ L f (f (x)). According to Prop. 16, we have By the Cauchy-Schwarz inequality and Lems. 17 and 18, we get Due to Cor. 19, it holds that R ν y − x x < 1 + 1{ν = 2} andω ν (−R ν y − x x ) ≥ 1/4. It follows that d ν (x, y) < 1 + 1{ν = 2} and Hence, the level set L f (f (x)) is compact so that f has a minimizerx. Moreover, by Prop. 15 and ∇ 2 f (x) 0, we obtain ∇ 2 f (y) 0 for all y ∈ L f (f (x)). This yields thatx is the unique minimizer of f and it satisfies x − x x ≤ 4 ∇f (x) ∇ 2 f (x) −1 .
C.2 Concentration of random vectors and matrices
We start with the precise definition of sub-Gaussian random vectors [43,Chapter 3.4].
Definition 3 (Sub-Gaussian vector). Let S ∈ R^d be a random vector. We say S is sub-Gaussian if ⟨S, s⟩ is sub-Gaussian for every s ∈ R^d. Moreover, we define the sub-Gaussian norm of S as ‖S‖_{ψ₂} := sup_{‖s‖₂=1} ‖⟨S, s⟩‖_{ψ₂}.
Note that ‖·‖_{ψ₂} is a norm and satisfies, e.g., the triangle inequality.
Lemma 21. Let S be a random vector such that ‖S‖₂ ≤ M almost surely for some constant M > 0. Then S is sub-Gaussian with ‖S‖_{ψ₂} ≤ M/√(log 2).
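For completeness, here is a short derivation of the constant in Lemma 21 (our own reconstruction, using the standard Orlicz-norm definition ‖X‖_{ψ₂} = inf{t > 0 : E exp(X²/t²) ≤ 2}):

For any unit vector $s$, $|\langle S, s\rangle| \le \|S\|_2 \le M$ almost surely, so
$$\mathbb{E}\exp\!\left(\frac{\langle S, s\rangle^2}{t^2}\right) \le \exp\!\left(\frac{M^2}{t^2}\right) \le 2 \quad \text{whenever } t \ge \frac{M}{\sqrt{\log 2}},$$
hence $\|\langle S, s\rangle\|_{\psi_2} \le M/\sqrt{\log 2}$ for every unit $s$, and taking the supremum over $s$ gives $\|S\|_{\psi_2} \le M/\sqrt{\log 2}$.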
As a direct consequence of Vershynin [43, Prop. 2.6.1], the sum of i.i.d. sub-Gaussian random vectors is also sub-Gaussian. We call a random vector S ∈ R^d isotropic if E[S] = 0 and E[SS^⊤] = I_d. The following theorem is a tail bound for quadratic forms of isotropic sub-Gaussian random vectors.
We then give the definition of the matrix Bernstein condition [44,Chapter 6.4].
Definition 4 (Matrix Bernstein condition). Let H ∈ R^{d×d} be a zero-mean symmetric random matrix. We say H satisfies a Bernstein condition with parameter b > 0 if, for all j ≥ 3, E[H^j] ⪯ (j!/2) b^{j−2} E[H²]. The next lemma, which follows from Wainwright [44, Eq. (6.30)], shows that a matrix with bounded spectral norm satisfies the matrix Bernstein condition. The next theorem is the Bernstein bound for random matrices. | 2023-01-03T06:41:47.397Z | 2022-12-31T00:00:00.000 | {
"year": 2022,
"sha1": "f6fee6c5a4ce074181c91a7e3b3a38c42aaf2e18",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f6fee6c5a4ce074181c91a7e3b3a38c42aaf2e18",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
220413469 | pes2o/s2orc | v3-fos-license | Development of an occupational advice intervention for patients undergoing elective hip and knee replacement: a Delphi study
Objective To obtain consensus on the content and delivery of an occupational advice intervention for patients undergoing primary hip and knee replacement surgery. The primary targets for the intervention were (1) patients, carers and employers through the provision of individualised support and information about returning to work and (2) hospital orthopaedic teams through the development of a framework and materials to enable this support and information to be delivered. Design Modified Delphi study as part of a wider intervention development study (The Occupational advice for Patients undergoing Arthroplasty of the Lower limb (OPAL) study: Health Technology Assessment Reference 15/28/02) (ISRCTN27426982). Setting Five stakeholder groups (patients, employers, orthopaedic surgeons, general practitioners, allied health professionals and nurses) recruited from across the UK. Participants Sixty-six participants. Methods Statements for the Delphi process were developed relating to the content, format, delivery, timing and measurement of an occupational advice intervention. The statements were based on evidence gathered through the OPAL study that was processed using an intervention mapping framework. Intervention content was examined in round 1 and intervention format, delivery, timing and measurement were examined in round 2. In round 3, the developed intervention was presented to the stakeholder groups for comment. Consensus For rounds 1 and 2, consensus was defined as 70% agreement or disagreement on a 4-point scale. Statements reaching consensus were ranked according to the distribution of responses to create a hierarchy of agreement. Round 3 comments were used to revise the final version of the developed occupational advice intervention. Results Consensus was reached for 36 of 64 round 1 content statements (all agreement). In round 2, 13 questions were carried forward and an additional 81 statements were presented. Of these, 49 reached consensus (44 agreement/5 disagreement). Eleven respondents provided an appraisal of the intervention in round 3. Conclusions The Delphi process informed the development of an occupational advice intervention as part of a wider intervention development study. Stakeholder agreement was achieved for a large number of intervention elements encompassing the content, format, delivery and timing of the intervention. The effectiveness and cost-effectiveness of the developed intervention will require evaluation in a randomised controlled trial. Trial registration number International Standard Randomised Controlled Trials Number Trial ID: ISRCTN27426982
INTRODUCTION
Hip and knee osteoarthritis are associated with a reduction in work participation and productivity and an increased risk of work loss. 1 2 The costs associated with occupational musculoskeletal disorders are significant. 3 4 The estimated annual cost of workplace ill health is £9.7 billion, equivalent to £18 400 per case. 5 These costs are borne not only by the individual (impact of ill health on quality of life), but also by their employers and society (loss of productivity, need for healthcare, rehabilitation and compensation). 3 Conversely, prolonged absence from work can result in work disability, poorer general health, increased risk of mental health problems and higher mortality. [6][7][8] Working, therefore, has physical and mental health benefits, alongside its socioeconomic value. Lower limb joint replacement is an effective and cost-effective treatment for patients with hip and knee osteoarthritis. [9][10][11][12] Recent changes to the pension age combined with an ageing UK workforce have resulted in a steady increase in the number of hip and knee replacements performed in patients of working age over the last decade. [13][14][15] In 2017, 18 812 hip replacements (20.5% of all hip replacements) and 17 765 knee replacements (17.4% of all knee replacements) were performed in patients aged less than 60 years. [13][14][15] Current recommendations supporting return to work after hip and knee replacement are limited and inconsistent. 16 There is variation in the content, delivery and format of occupational advice delivered to patients having hip and knee replacements, and a need to provide more comprehensive, individualised advice for these patients to support early, sustained return to work after surgery. 16 The Occupational advice for Patients undergoing Arthroplasty of the Lower limb (OPAL) study was a National Institute for Health Research-Health Technology Assessment commissioned research study that aimed to develop an occupational advice intervention to support return to work after hip and knee replacement. 16 OPAL used an intervention mapping framework supported by related qualitative and quantitative work streams. 16 Initial research evaluated the specific needs of the population of patients who were in work and intended to return to work following surgery, established how individual patients returned to work and documented the barriers preventing return to work. 17 18 Through these work streams a range of key performance indicators and potential intervention components that could be used to develop an occupational advice intervention emerged.
To refine these components and address areas of uncertainty relating to the intervention, a multistakeholder intervention development group was constructed to ascertain whether agreement could be reached about the design, content, delivery, format and timing of the proposed occupational advice intervention. To facilitate this process, a modified Delphi consensus process was employed. 16 The Delphi approach was chosen as it can be delivered remotely in a short time frame without the need to convene meetings. It also enables researchers to collect the opinions of a range of different individuals with differing areas of expertise, which was desirable in this setting. The initial research performed as part of the intervention mapping process provided the basis for this process by generating an initial list of statements for the Delphi consensus development.
METHODS
Design of the modified Delphi study A modified three-round Delphi consensus process was used. [19][20][21] The process was guided by the information gathered from research completed during the first phase of the OPAL project. [16][17][18] During the first phase of the OPAL project, a number of intervention components emerged that were considered likely to be integral to the development of a successful occupational advice intervention (box 1). Expanded versions of these components were used as the basis for initial statement development that could be explored during the Delphi process.
Delphi stakeholder recruitment
Five stakeholder groups were identified for inclusion in the modified Delphi process. The sampling strategy for each stakeholder group is outlined in table 1, with participants chosen via a targeted approach to maximise patient, public and professional engagement. To ensure wide participation and the validity of the consensus process, we aimed to recruit a minimum of five individuals from each stakeholder group. A maximum limit of 15 individuals from any given stakeholder group was chosen to ensure one group's opinions did not overwhelm the opinions of others within the consensus process. As such, we aimed to have a minimum of 25 participants and a maximum of 75 participants for each round.
Although there are no definitive rules about the sample size for a Delphi study, a minimum of 8-10 participants has been suggested. 22 While higher response rates and ease of administration are an advantage of smaller homogeneous groups, we considered a larger sample size desirable given the variation in expertise and the heterogeneity within our stakeholder groups. Furthermore, if areas of uncertainty are being explored, larger sample sizes can help to reduce errors and improve the reliability of the findings. 23 Prior to enrolment, potential participants from all stakeholder groups were invited to participate via an email from a member of the OPAL study team as per the sampling strategy for each stakeholder group outlined in table 1. This email included a participant information sheet describing the Delphi consensus process and what participation involved. Participants were asked to confirm their consent to participate by return of email and only those that responded indicating their willingness to participate were included in the process.
Development of Delphi statements
Prior to commencing, statements relating to the proposed content, format, delivery, timing and measurement of an occupational advice intervention were developed. Due to the breadth of statements developed and their interrelated nature, we adopted a stepwise approach to the presentation of individual statements to the Delphi group. Round 1 focused on defining the content of the intervention in two sections. Section 1 focused on passive content ('written' advice and information) and section 2 on active content (actions or processes for patients, employers and healthcare members to undertake). These statements were piloted by a small sample of surgeons, general practitioners (GPs) and patients. Having first defined the content, we then used this information to refine the statements relating to the format, delivery, timing and measurement of this content presented in round 2. In round 2, statements were grouped under headings allowing exploration around specific themes. Round 3 was then used to clarify any areas of residual uncertainty from rounds 1 and 2 and present the proposed occupational advice intervention back to the Delphi participants for final comments.
For each statement within the Delphi process, participants were asked to rate the extent of agreement with individual statements about the importance of including specific elements in an occupational advice intervention, with possible options being: strongly agree; agree; disagree; strongly disagree; do not know. For a subset of statements in round 1, they were also asked to rate the deliverability of the content or action alongside current healthcare provision. Therefore, for some statements the participants were asked to provide two ratings: one for 'importance' and one for 'deliverability'.
At the end of each section, there was a free-text box where participants could add suggestions relating to the intervention that could be evaluated in subsequent rounds. In rounds where statements from a previous Delphi round were being re-presented, these were presented alongside controlled feedback: the modal round 1 rating for these statements; the proportion of each response option selected by the other participants; and a reminder of the participant's own previous ratings.
Delivery of Delphi survey
The Delphi survey was delivered via email using an online web-based survey platform. 24 The email included a covering letter to the participants and an electronic link to the questionnaires. All three rounds allowed a minimum of 3 weeks for participants to respond. Automated reminders were sent via the electronic system 10 days after initialising the survey. A further personalised email reminder was sent to non-responders during the final week of the surveys.
Round 1 and 2 questionnaires required respondents to provide their initials and occupation. All round 2 emails incorporated an overall report summarising the pooled responses from round 1 survey and, where appropriate, the responses of each of the five stakeholder groups. In addition, for those participants who completed the round 1 survey, an individualised report summarising their responses to the statements in round 1 were included with the round 2 survey to allow participants to reappraise their responses in view of the overall responses. 25 Round 3 emails included four core documents from the developed occupational advice intervention (a summary of the intervention, occupational checklist, patient 'return to work' workbook and employer booklet) for participants to review and comment. Email reminders were sent to non-responders during the final week of the surveys.
Analysis of data
Descriptive analyses of the Delphi responses were undertaken by the OPAL study team. Results of each round were discussed with the wider OPAL study research team before the statements were agreed for subsequent rounds.
An a priori consensus threshold of 70% (strongly agree/agree or strongly disagree/disagree) was agreed before statements were circulated. 25 There is no universal agreement on an acceptable level of consensus for a Delphi study; 26 27 however, reports suggest this should be decided before commencing the study and recommend a threshold of at least 70% to ensure the validity of the findings. 27 For statements that failed to reach consensus, further analysis was undertaken based on the responses of each of the five stakeholder subgroups. The following rules were then employed to determine which statements were discarded and which were re-presented in the next round. ► If no or only one stakeholder group reached concordant consensus (>70% agreement or disagreement), the statement would be withdrawn. ► If two or more stakeholder groups reached concordant consensus (>70% agreement or disagreement), the statement would be re-presented in a subsequent round. ► In the situation where one or more stakeholder groups reached 'agreement' and another group reached 'disagreement', the statement would be discussed by the OPAL investigator team and a decision on inclusion/exclusion of the statement would be made. For statements that were rated for both importance and deliverability in round 1, consensus was reached if the 70% threshold was achieved for both the importance and deliverability ratings. Statements that reached consensus for one of the domains were analysed by stakeholder group as described above.
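The carry-forward logic described above is essentially a small decision procedure; the following Python sketch (our own illustration, with hypothetical field names, not code from the OPAL study) shows one way to encode it:

def classify_statement(overall_agree, overall_disagree, group_agree, group_disagree):
    """Apply the Delphi carry-forward rules to one statement.

    overall_agree/overall_disagree: pooled % choosing (strongly) agree / disagree.
    group_agree/group_disagree: dicts mapping stakeholder group -> % agreement/disagreement.
    """
    # A priori consensus threshold of 70% on the pooled responses.
    if overall_agree >= 70 or overall_disagree >= 70:
        return "consensus"

    agree_groups = [g for g, pct in group_agree.items() if pct > 70]
    disagree_groups = [g for g, pct in group_disagree.items() if pct > 70]

    if agree_groups and disagree_groups:
        return "refer to investigator team"   # conflicting group-level consensus
    if len(agree_groups) >= 2 or len(disagree_groups) >= 2:
        return "re-present in next round"     # concordant consensus in >= 2 groups
    return "discard"                          # consensus in at most one group

For example, classify_statement(64, 10, {"patients": 80, "surgeons": 75, "GPs": 55}, {}) returns "re-present in next round".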
In rounds 1 and 2, statements reaching consensus were ranked according to the distribution of responses to create a hierarchy of agreement.
In round 3, the occupational advice intervention and associated documents were circulated for comment. Descriptive open feedback from participants on these documents was recorded.
Patient and public involvement
The OPAL research project was developed in collaboration with members of the British Orthopaedic Association (BOA) Patient Liaison Group (PLG). A patient coapplicant from the BOA PLG was involved in the development of the research question and defining the outcome measures used within the wider OPAL study. Patients were involved in the design of the study from inception of the project, through protocol development, study delivery and project dissemination. These included patients from the BOA PLG, the National Joint Registry patient group and patient and public groups affiliated with the sponsor site.
RESULTS
Round 1
Responses were received from 43 of the 66 participants (65%) including 14 patients, 8 surgeons, 6 GPs, 11 allied health professionals and nurses, and 4 employers. In section 1 ('written' advice and information), consensus was reached for 26 of 32 statements (81%). Of the remaining six statements, five reached consensus for two or more stakeholder groups and were therefore taken forward to round 2, and one statement was discarded. Section 1 statements reaching consensus, ranked by the strength of consensus, are listed in table 2.
In section 2 (actions or processes for patients, employers and healthcare members to undertake), participants were asked to rate both the importance and deliverability of each statement. Of the 32 components presented, only 10 (31%) reached consensus for both importance and deliverability (table 3). Of the remaining 22 statements, 14 reached consensus for importance but not deliverability, 2 reached consensus for deliverability but not importance and 6 did not reach consensus for either. Of these statements, seven reached consensus for both importance and deliverability for two or more stakeholder groups and were therefore taken forward to round 2, and 15 statements were discarded.
Round 2
Twelve questions carried forward from round 1, plus one additional question generated from the free-text comments, were presented to the participants. Of these, 10 reached consensus based on their potential importance within the proposed occupational advice intervention.
A further 81 statements grouped into 13 categories were then rated. This allowed the team to explore different approaches to a given problem. For example, the first category asked participants to rate a set of five statements relating to which healthcare team member should have responsibility for delivery and coordination of the occupational advice intervention. If one or more statements in a given category reached consensus, this was taken as representative of the Delphi group's position on the given category and the remaining statements were discarded. Overall, 49 statements (60%) reached consensus (44 agreement and 5 disagreement), and at least one statement in every category reached consensus (online supplementary appendix table 1).
The occupational advice intervention
Based on the evidence gathered throughout the OPAL study and consolidated through Delphi rounds 1 and 2, the occupational advice intervention was further developed and finalised.
The intervention was designed to support patients throughout their surgical pathway, starting during their initial outpatient appointment and continuing until 16 weeks after surgery. It had a number of key themes that linked to performance objectives for patients and staff and was supported by a range of patient and staff resources (to support delivery and measurement of the intervention). A simplified schematic of the OPAL 'return to work' intervention is presented in figure 1.

Table 3: Statement descriptions reaching consensus for section 2, ordered by the percentage of respondents that strongly agreed or agreed with the importance/deliverability of each component. Ten statements reached consensus for both importance and deliverability, including Q37 (a postoperative mechanism for the identification of patients that are not progressing toward return to work as planned) and Q57 (information from patients that have experienced the process of returning to work after hip or knee replacement within the preoperative education process; 76%/73% agreement).
Round 3
In round 3, the finalised occupational advice intervention, along with selected patient and staff materials, was circulated to 65 of the 66 Delphi participants for comment (one patient withdrew). Responses were received from 11 participants, nine of whom provided a constructive appraisal of the intervention, as well as highlighting typographical and formatting issues. The feedback was positive in all cases. A diagram of the overall Delphi consensus process is shown in figure 2.
DISCUSSION
The Delphi consensus methodology was used to underpin the development of an evidence-based, theory-driven occupational advice intervention to assist patients returning to work after elective hip and knee replacement. It enabled the OPAL study team to rationalise the content, format, delivery and timing of the intervention and clarified areas of uncertainty related to the intervention that had arisen during the earlier stages of the research. The response to the developed intervention during the third round of the Delphi process was positive, validating the use of the Delphi process to support intervention development.
Prior to the Delphi process, the OPAL study had already completed a number of complementary research phases to enable the OPAL team to understand the current evidence, stakeholder and patient perspectives, and current practice relating to return to work after hip and knee replacement. [16][17][18] Through the intervention mapping framework, this information generated a range of components for our intervention. The Delphi methodology was then used to 'refine' the intervention and reach consensus on the final design. This is similar to the modified Delphi approach used by Vonk Noordegraaf et al to develop a return to work intervention for gynaecological surgery, as it used existing evidence as the basis for the process but sought to bridge gaps and clarify uncertainty within this evidence. 28 However, one limitation of this approach is that it may inadvertently narrow the focus of the intervention, with only components deemed important by the research team included. There is a risk that potentially useful intervention components that may have been of interest to the stakeholder groups were not included, as the starting position was predefined. However, the approach used is not unusual and is similar to the approaches used by others. 20 29 Furthermore, given the breadth of work completed earlier in the OPAL study and the design of the modified Delphi survey allowing participants to suggest new intervention components within each round, this is unlikely to have had a negative impact.
Broad stakeholder involvement helped the research team ensure the final intervention was acceptable to all groups, increasing the chances of success when implemented and delivered. Unfortunately, despite good initial engagement, the response rates reduced as the process progressed. This is a common finding during Delphi processes 20 and was perhaps related to the larger sample size involved and the extended period of the process, with a 6-week gap between rounds 1 and 2 and a 6-month gap between rounds 2 and 3. The gap between rounds 2 and 3 was necessary as the intervention needed to be finalised, with associated materials being developed during this period. Other contributing factors may include the increasing length of the Delphi questionnaires with each round and the volume of materials that needed to be reviewed in round 3. All participants were UK based and working within the setting of the UK National Health Service and social care provision or UK employment. Therefore, this may impact the generalisability of the findings outside of the UK health setting.
While the low response rate in round 3 may be a concern, the purpose of this round was to circulate and draw comment regarding the final intervention rather than reach consensus on specific points. With 11 respondents, including at least one member from each stakeholder group, this seems valid, as the Delphi process relies more on group dynamics in reaching consensus than on statistical power, and a lower limit of 10 participants is often considered sufficient for a Delphi panel. 30 31 During the process, there was a notable drop-off in employer respondents. In total, 12 employers initially expressed an interest in participating; however, only four responded in round 1, two in round 2 and one in round 3. It is often difficult to engage employers in research 32 and, despite using a number of complementary strategies, 17 we were unable to maintain engagement. However, as the intervention was designed to be delivered in secondary care rather than in the workplace, this potentially did not significantly influence the nature of the final intervention.
The modified Delphi methodology employed in this study resolved uncertainties about a number of intervention components. However, there were a few areas where the consensus process was limited. Two key areas that stakeholders felt were important were (1) the provision of additional pre- and post-operative physiotherapy/occupational therapy (over and above standard care) in which return to work issues could be addressed and (2) the identification of 'high-risk' patients that should be provided with additional help and support. Yet, these positions conflicted with other information gathered from the Delphi participants and the evidence from OPAL phase 1. Essentially, first, our cohort study failed to identify a 'high-risk' population, and the current literature describing predictors of return to work after hip and knee replacement was limited. [33][34][35][36][37][38] This meant we were not able to confidently identify a 'high-risk' group in need of a more intensive targeted intervention. Second, there was concern about the cost, time and logistics associated with the implementation of a resource-intensive intervention requiring additional patient interactions. The survey of practice and stakeholder/patient interviews demonstrated that services varied significantly in their structure and the resources available. 18 To be successful, it was agreed that the intervention should supplement rather than replace existing pathways of care and should, where possible, use existing staff and adapt current working practices. Comments from Delphi participants, the OPAL research team and the study steering committee had similarly raised concerns about the implementation and sustainability of an intervention requiring significant additional resources. The OPAL research team therefore felt that, despite this component reaching consensus, it was prudent to pursue a less intensive model to improve implementation of the final intervention.
We were unable to compare our intervention to other occupational advice interventions for patients undergoing hip and knee replacement as no such interventions have been reported in the literature. A rapid evidence synthesis performed earlier in the OPAL study (PROSPERO protocol registration number CRD42016045235) found only four studies that reported occupational advice interventions for patients undergoing elective surgery. This included two randomised clinical trials (RCTs) from Belgium and the Netherlands in patients undergoing gynaecological surgery and lumbar disc surgery 39 40 and two qualitative studies that explored factors affecting return to work from the perspective of the patient following knee replacement 41 and factors influencing work disability following mastectomy. 42 Of the two interventions described in the RCTs, one described a personalised e-Health intervention 39 whereas the other assessed a rehabilitation-orientated intervention focusing on early resumption of activities. 40 Our intervention drew on elements of both of these interventions in terms of
delivering an individualised patient-centred approach while encouraging early resumption of workplace activities through discussion with employers, alongside workplace adaptations and alterations to working patterns.
In conclusion, a modified Delphi consensus process employed within a wider intervention development project facilitated the development of the OPAL occupational advice intervention. Consensus was reached for a range of intervention components that allowed the content, format, delivery and timing of the intervention to be finalised. The intervention developed and the materials created to support its delivery were well received by the Delphi group. The effectiveness and cost-effectiveness of the developed intervention will require evaluation in an RCT. | 2020-07-09T09:14:42.586Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "f2198e0d22dbcb0860dde9ba22454b138cc79e19",
"oa_license": "CCBY",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/7/e036191.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "3241d0a728af66ee646e85af93014cc3a12c12a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208844575 | pes2o/s2orc | v3-fos-license | Failure Behaviours of Steel Projectiles with Localised Melting Against Armour Plates
The surface remelting technology of high energy beam can locally weaken the case for controlled fragmentation, which may affect the survivability of the impacting projectiles. The failure behaviours of steel projectiles with a melted layers grid normally perforating armour plates were investigated. The results reveal that shear fracture mainly occurs in the nose region of the projectiles due to high loading, and that the melting zone of the projectiles remains intact with no damage, which means the survivability of the projectile can be assured. Furthermore, an analytical model was proposed for the structural analysis of the projectile, and its predictions are in accordance with the test results.
INTRODUCTION
Impact and protection engineering has become one of the key issues in the design of weapons. Nevertheless, most researchers pay more attention to the performance of targets under impact loading; the deformation and fracture of projectiles under extreme conditions need to be studied more deeply. In previous studies, considering the complexity of impact, the rigid-target hypothesis was often used. The classical Taylor bar impact test is a typical representative of this approach. Backman & Goldsmith 1 , Woodward 2 , et al. and Rakvåg 3 , et al. have revealed various typical failure modes of projectile bodies. In some special cases, hydrostatic tension can also cause many defects and serious destruction to the projectile body.
A systematic study of steel projectiles impacting harder targets was performed by Chen 4 , et al., which differs from the standard Taylor bar test to some degree. For example, the deformation zone of the projectile head comprises an inside circle and an outside ring, and the tensile cracking in the outside ring never passes through the interface of these two portions. Ren 5 , et al. studied the fracture surface of a recovered projectile by scanning electron microscopy, in which spiral shear cracks were symmetrically self-organised. Xiao 6 , et al. performed impact experiments with two kinds of projectiles: the soft projectiles fractured in a petalling mode, while the hard ones fragmented. Besides, loop patterns also appeared at the nose of the projectile. Despite all this, research on the failure behaviours of impacting projectiles is still insufficient.
With regard to the structural optimisation of penetrating projectiles, the damage capability of the warhead after perforating the target is of special interest. There is a recognised need for controlled fragmentation methods for warheads of projectiles against hard targets. Local melting of the case by a high energy beam is one such innovative fragmentation technology 7 . However, since the melted layers grid formed on the outer surface of the case may act as a stress-raiser during the impact process, there is a concern that the presence of such a grid might affect the structural integrity of the case and the survivability of the projectile. This study investigates the failure behaviours of locally melted steel projectiles perforating armour steel plates, and the possible effect of the melted layers grid on the survivability of the steel projectiles is also examined. Firstly, experiments on small-scale, hollow steel projectiles with local melting normally impacting armour steel plates were conducted, and metallographic examinations were made to reveal the deformation and fracture modes of selected residual projectiles. Secondly, magnetic particle inspection was selected to examine the main structure of the projectiles. At last, an analytical model is introduced and discussed.
EXPERIMENTAL PROCEDURES
2.1 System of Experiment
The simulated projectiles were shot at the target plates by a 37 mm smooth-bore gun at velocities of 380~500 m/s. To minimise the speed error as much as possible, the mass of the internal charge was accurately evaluated. Aluminium foils and a multi-channel time-measuring system HG 202C were set up along the trajectory of the projectile to provide signals for the time recorder. Furthermore, a high-speed camera (Photron SA5) was set up to capture the flight attitude and impact process of the projectile. Wood blocks placed behind the target plate were employed to recover the projectile and eliminate secondary damage effects. The main experimental devices and principle are shown in Fig. 1. The magnetic particle inspection was carried out with a CEW-4000 AC/DC dual-purpose magnetic particle testing machine, and the oil-based magnetic suspension was applied to the recovered projectiles by pouring or impregnation to ensure that the surface of the shell was completely covered.
2.2 Projectile and Target Plate
The material of the projectile was 30CrMnSiNi2A steel; the main compositions are given in Table 1. The projectiles were machined with an ogival nose (CRH = 3.0), and the overall length is 102 mm. To meet the speed requirement, a hollow structure was adopted to reduce the launching weight, as shown in Fig. 2. All projectiles were pre-heat-treated to a Rockwell hardness of 40. The main parameters of the heat treatment are described in Table 2. The dynamic mechanical properties of 30CrMnSiNi2A steel, measured with a split-Hopkinson pressure bar and a static material test system at five strain rates, are shown in Fig. 3. It is seen that the yield stress is about 1580 MPa at a strain rate of 10^-4 s^-1, while it increases to 1682 MPa at a strain rate of 500 s^-1. As the strain rate increases from 500 s^-1 to 5000 s^-1, the yield stress of the material increases gradually (1682~1920 MPa), but the variation is slight. It is notable that 30CrMnSiNi2A steel is not very sensitive to strain rate. To produce fragments of a desired size and shape, a grid of locally melted and re-solidified layers was formed on the exterior surface by a high energy beam. The interaction between the case and the high energy beam creates a grid of local melted layers on the surface as the case passes under the beam. Because of the self-quenching effect of the cold interior of the sample, the melted layer usually has a finer and more homogeneous structure than the original bulk material. Shear fractures initiate and propagate along the melted trajectories during the expansion of the case. Thus, the fragmentation behaviour of the metal is enhanced along these definite, pre-determined paths 8 . Figure 4 displays the geometry and microstructure of the melted zone, which comprises two distinct parts. The melted layer was composed of martensite, discrete cementite particles and retained austenite. As the carbides started dissolving during the rapid heating process, austenite with a high carbon content was formed. Finally, the rapid cooling by the surrounding material results in the formation of martensite, carbide and retained austenite. In comparison with the bulk material, the grain has been refined remarkably. According to the measurement results, the average surface hardness of the melted layer is about HRC 52. The target plates in the experiments were manufactured from armour steel with thicknesses of 4.0 mm and 6.0 mm and were fixed by heavy blocks of iron. These target plates were heat-treated by quenching and tempering to give a static tensile strength of 1500 MPa. In this paper, the critical processing parameter is low-temperature tempering at 250 °C for 2 hours followed by air cooling to room temperature, which avoids unnecessary brittleness.
Summary of Results
A total of seven tests were completed and all the projectiles perforated the targets. The main conditions and results of the tests are listed in Table 3. To examine the damage characteristics, the residual projectiles were recovered for further analysis. The failure of the projectiles was described by mass loss, deformed length and diameter. Owing to the plastic strain of the material in the contact zone between the projectile and the target, the damage develops rapidly. All tests were conducted under normal penetration conditions.
Fracture Mode and Mass Loss of Projectile
During perforation of armour steel, projectiles experience high loads of short duration, which may be the primary factor for failure. The results of the macroscopic inspection in Fig. 5 indicate that the projectiles remained integral, with damage largely confined to the projectile nose.
Compared with the original, the residual projectile shows a larger mass loss and cracking at the nose. Notable is the relatively small amount of fracture in the nose region, which can be described in terms of velocity discontinuities identified by regions of maximum strain rate. The projectiles were only partly destroyed; no cavities or cracks can be recognised in the unbroken part, either from the passage of stress-relief waves or from intense friction. A typical fracture surface from the front of a projectile after perforation of the targets is shown in Fig. 6. It is apparent that the dominant failure of the projectile nose is shear fracture with a distinct glassy surface. Owing to the strain-rate sensitivity and tempering brittleness of the 30CrMnSiNi2A steel used in this study, the nose of the projectile was shear-fractured at an angle of 45° by the dynamic compression load at high strain rates. Some melt is observed on the shear plane, which can be explained by the high-temperature effect of the localised shear action.
The greater the impact speed, the more serious the damage and the greater the mass loss of the projectile. The maximum loss of the nose reaches 6 per cent of the total mass. Blue brittleness is also observed at the nose of some projectiles, which is brought about by the spilling of secondary particles at a temperature of approximately 300 °C 9 . Generally, blue brittleness seems to coincide with the appearance of cracking 3 . It is reasonable to consider that the high pressure and heat generated by the transient interaction between projectile and target are the main causes of shear fracture. The failure mode of the target is affected by the impact velocity, strength, nose dimensions, thickness, etc. The target plate is thinner than the projectile diameter, so the plates failed in the typical petalling mode under the strong local loads. It is generally believed that high radial and circumferential tensile stresses lead to this deformation 10 .
In the case of the 6 mm thick armour steel plate, the nose of the projectile was severely damaged and shortened. From the SEM image of the fracture surface given in Fig. 7, a flat fracture surface appears with cleavage-like patterns. This is clear evidence of shear fracture as the dominant fracture mode. Moreover, the mass loss increases to about 8.7~8.8 per cent of the total mass, and the plate fails by plugging under the action of tensile stress. The plugs are both smooth and cylindrical, which is significantly different from the failure mode of the 4 mm target plate.
During impact, the plastic deformation of the target plate absorbs energy. Therefore, the material flow resistance directly affects the protective performance of the plate. With the thickness increasing from 4 mm to 6 mm, the target absorbs more energy, resulting in increased penetration resistance. The difference is determined by the bending stiffness of the two plates. The bending stiffness is 11 : D = ET^3 / [12(1 − υ^2)] (1) where T is the thickness of the plate, E is the elastic modulus of the material and υ denotes Poisson's ratio. Because the bending stiffness increases with the cube of the plate thickness, the ballistic resistance of the 6 mm plate is significantly larger than that of the 4 mm plate, and the damage to projectiles penetrating thin plates is smaller than that for thick plates. For both types of target plate, the damage to the projectile head becomes more serious with increasing impact velocity, but no visible damage or bulging occurs at the melting locations of the projectiles. The nondestructive examination of the melting zone is discussed next.
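As a quick check of this cubic scaling (a sketch of Eq. (1) using nominal values; the elastic modulus and Poisson's ratio below are typical values for steel assumed by us, not reported in the paper):

# Flexural rigidity D = E*T^3 / (12*(1 - nu^2)) for a thin plate, Eq. (1).
E = 210e9          # elastic modulus of steel, Pa (assumed nominal value)
nu = 0.3           # Poisson's ratio (assumed nominal value)

def bending_stiffness(T):
    return E * T**3 / (12.0 * (1.0 - nu**2))

D4 = bending_stiffness(4e-3)   # 4 mm plate
D6 = bending_stiffness(6e-3)   # 6 mm plate
print(D6 / D4)                 # (6/4)^3 = 3.375, independent of E and nu

The stiffness ratio depends only on the thickness ratio, which is why the 6 mm plate offers roughly 3.4 times the bending resistance of the 4 mm plate regardless of the exact material constants.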
Nondestructive Inspection
The nondestructive inspection used in this paper detects surface and near-surface discontinuities/cracks by means of magnetic flux leakage and magnetic powders. Under the effect of magnetic flux leakage, the magnetic particles assemble at a discontinuity and form magnetic traces, which show the position, shape and size of the discontinuity. This nondestructive method can be used to examine defects or micro-cracks in the melted zones of the residual projectiles. To make these indications easy to recognise, the melting zones of the projectiles were uniformly coated with small white magnetic particles (see Fig. 8). The inspection results, analysed and summarised in Table 4, indicate that there is no damage at the melting locations after impact. This implies that the presence of the melted layers grid has little effect on the structural integrity or the survivability of the projectile.
Discussion
In the process of penetrating the target, the movement and damage of the projectile are related to the resistance 12 . When the projectile penetrates the target at high speed, an axial compressive stress is produced in the body. Thus, it is necessary to estimate the stress distribution in the projectile structure.
Suppose a projectile of total mass m impacts a target plate; the resistance on the projectile body is sketched in Fig. 9. The external radius is denoted by R and the internal radius by r. Disregarding the projectile nose, the length of the main structure is L, and the area of the cross-section at axial position x is A(x). With the formation of craters, the impact resistance of the projectile increases rapidly 13 , so the maximum resistance is obtained from F = ma (2) where a is the deceleration. The compressive stress in the projectile body at axial position x is σ_x = F_x / A(x) (3) where F_x is the inertial force transmitted through the cross-section at x. It can be reasonably assumed that the compressive stress at the tail is σ_x |_{x=0} = 0; thus, from Eqn (3), σ_x = ρ a x (6) where ρ is the material density and x is measured from the projectile tail to the cross-section position.
The variation of the compressive stress in the projectile wall with the distance from the projectile tail is shown in Fig. 10. The compressive stress σ_x increases with increasing distance x, so the nose of the projectile must resist the maximum compressive stress. When the maximum value of σ_x equals the failure stress σ_cr of the projectile material, the head of the projectile is damaged 12 . On the other hand, the integrity of the local melting zone in the main structure is stable; this conclusion is in line with the experimental observations.
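A minimal numerical sketch of this stress model follows (our own illustration; the density, deceleration and failure-stress values below are assumed for demonstration and are not the reported test conditions, while the 102 mm length is taken from the paper):

import numpy as np

rho = 7850.0        # density of steel, kg/m^3 (assumed)
a = 2.0e6           # deceleration during perforation, m/s^2 (assumed magnitude)
L = 0.102           # overall projectile length, m (from the paper)
sigma_cr = 1.6e9    # failure stress of 30CrMnSiNi2A, Pa (order of the yield stress)

x = np.linspace(0.0, L, 200)     # distance from the projectile tail
sigma_x = rho * a * x            # axial compressive stress, Eqn (6)

print(sigma_x.max() / 1e6, "MPa at the nose")      # maximum stress at x = L
print("nose fails:", sigma_x.max() >= sigma_cr)    # compare with the failure stress

Because σ_x grows linearly from zero at the tail, the predicted weakest location is always the nose, consistent with the shear fractures observed there in the tests.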
CONCLUSIONS
This research was carried out to investigate the failure mode of projectiles with localised melting on the outer case after impacting armour steel plates, and to examine the possible effect of the melted layers grid on the survivability of the projectiles. Owing to the great resistant force of the target medium, the dominant failure mode is shear fracture acting on the nose of the projectile. The examination of the residual projectiles by magnetic particle inspection indicates that there is no damage at the local melting locations, which means that the main structure of the projectiles remains intact during impact. An analytical model for the strength analysis of the projectile structure was proposed, and the weakest location is predicted to be the nose, which agrees with the test results. From the current results, the localised melting layers have little effect on the survivability of the projectile.
"year": 2019,
"sha1": "6cfca3ca377f011096661da8e49b6e0ea9a50f9e",
"oa_license": null,
"oa_url": "https://doi.org/10.14429/dsj.69.13338",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fbddbc2646dc44c43ca30004a6a29f19d1d80f7a",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
252788432 | pes2o/s2orc | v3-fos-license | Indication of Formation Water Geochemistry for Hydrocarbon Preservation: New Applications of Machine Learning in Tight Sandstone Gas Reservoirs
The migration of formation water plays a crucial role in hydrocarbon accumulation and preservation. The hydrodynamic field controls the content of various ions in formation water and is an important participant in hydrocarbon evolution. The tight sandstone gas reservoirs in the northern Tianhuan Depression of the Ordos Basin, China, constitute one potential high-yield gas field. However, due to the complex gas–water relationship and limited water sample data, the development of these gas reservoirs has encountered great difficulties; we thus analyzed the geochemical characteristics of a large set of formation water samples acquired from the Permian in the Ordos Basin (60 water samples collected from 45 wells in the He8 Member). The results showed that the formation water is the original sedimentary water of the tight sandstone reservoirs, which represents a closed hydrological environment conducive to gas accumulation. This is also related to the demonstrated strong water–rock reactions and diagenesis. We also developed a statistical model between these geochemical parameters and gas preservation based on machine learning algorithms (decision trees). Note that machine learning, as a data-driven artificial intelligence approach, generates correlation models that can learn from structured training data sets to carry out predictions or evaluations on newly presented data. Such algorithms can process large amounts of data more quickly and can build better correlation models through deep learning mechanisms than traditional statistical methods. The results suggest that the metamorphism coefficient has the best indication effect for the preservation of gas reservoirs. A hydrological environment with (Cl−-Na+)/Mg2+ > 50.066, Na+/Cl− ≤ 0.476, and Mg2+/Ca2+ ≤ 0.102 marks a good hydrocarbon accumulation area. This study can be applied, by analogy, to more comprehensively interpret the correlation between the geochemical characteristics of formation water and hydrocarbon storage and to improve the accuracy of predicting favorable hydrocarbon accumulation areas in tight sandstone gas reservoirs.
INTRODUCTION
Formation water (also known as oilfield water), associated with oil or natural gas in strata, is a major geofluid and plays a fundamental role in hydrocarbon migration, accumulation, and preservation. 1,2 Formation water is derived from sources including atmospheric water, seawater, and endogenous water, as well as mixtures of the above, in sedimentary basins. 3 The movement of formation water is a universal geodynamic force that promotes the migration of hydrocarbons vertically and horizontally in the formation so that they accumulate in the nearest trap; its chemical composition directly or indirectly reflects the environment and conditions of hydrocarbon occurrence. 4,5 Physical processes that control the composition of formation water include evaporative concentration, dissolution of salt minerals, and sedimentary filtration. 6 In addition, chemical reactions such as water−rock interactions, for example dolomitization of calcite and metamorphism of quartz, are also important processes affecting its geochemical composition. 7,8 These water−rock interactions usually occur in closed high-temperature and high-pressure environments, and mineral transformation in the formation ensues. 9,10 Furthermore, the composition of fluids can change drastically over different geological periods. 11 This is further complicated by active aquifers or deep hydrothermal fluids (with high total dissolved solids (TDS)), which can also dramatically affect the geochemical characteristics of the fluid. 12,13 The origin and evolution of formation water and its relationship with hydrocarbons have been topics of great interest to geoscientists over the last 80 years. The geochemical composition of formation water is widely applied in deep basins to determine hydrogeological conditions and strata sealing. 1 The chemical ions of water range from SO4-rich in shallow strata to HCO3-rich in intermediate strata to Cl-rich in deep settings. 14 Previous data have shown that formation water with high salinity indicates that the formation is well sealed, so hydrocarbons cannot easily escape and are preserved. 15,16 At present, the sources of high-concentration brine are believed to be the following: (1) Cl ions are captured by the precipitate; 17 (2) after the original seawater is extruded during compaction, Ca 2+ and Mg 2+ adsorbed on mineral particles are released; 18 and (3) cements act as permeable membranes that efficiently filter and facilitate the diffusion of major ions into primary water. 19 Cl and Br ions are often considered conservative tracers, mainly used for seawater and evaporites, so it is useful to compare different ions between normal seawater and the formation water in the study area and to establish the enrichment or depletion of ions, which can provide important clues about mineral transformation. 20 In addition, more and more isotopic evidence is being applied to the evolution of fluids; for example, δ2H and δ18O have been applied to investigate the origin and evolutionary history of water 21 and reflect physical and chemical reaction processes. Cl and Br isotopes can be applied to study the sources of salt and water−rock interactions. 22 Some physical processes such as salt precipitation, 23 diffusion, 24 ion filtration, and anion exchange would cause changes in Cl and Br isotope ratios. 25−27 Chaudhuri first proposed that strontium isotopes (87Sr/86Sr) could offer clues about the migration paths of brine in gas fields. 28 Sr isotopes play a key role in determining the source of fluid salinity. 29
In terms of ions and isotopes, higher salinity and higher ion ratios represent better formation sealing. Hence, fully understanding and tracing the properties of fluids in potential reservoirs is of great significance for interpreting the provenance, evolution, and fluid history of a basin, thereby contributing to hydrocarbon exploration. 5 However, despite much effort in geochemical and isotopic studies, the origin of extremely high-salinity formation water remains controversial. In addition, there is still discussion as to whether the geochemical composition of the formation water is mainly controlled by late diagenesis and whether the composition represents the conditions at the time of hydrocarbon formation.
The Northern Tianhuan (NT) area in the Ordos Basin is adjacent to the largest gas field in China, the Sulige gas field, and is considered a potential high-yield area because of its special geographical position. 30 However, high water production and the complicated gas−water relationship limit the exploration and development process in this area. 31 Although there have been studies on the characteristics of the formation water and the diagenesis of the reservoir rocks, the influence of formation water on hydrocarbon accumulation and its indication of favorable areas remain unclear. Therefore, in this work, we analyzed the geochemical composition of the formation water from the He8 Member of the Permian in the Ordos Basin (typical tight sandstone strata deposited in the Upper Paleozoic) and assessed its relationship with the distribution of hydrocarbons.
Fortunately, as computer theory and artificial intelligence continue to advance, more and more machine learning algorithms are being introduced into oil exploration and target deployment; e.g., Vikrant and Mario and Bergen et al. used scalable gradient-boosted decision trees to classify reservoir characteristics and lithology, respectively, 32,33 data mining and machine learning have been used to identify sweet spots in gas reservoirs, 34 the permeability of tight carbonates has been predicted through genetic algorithms, 35 and the lithofacies types of lacustrine shale have been determined using artificial neural networks. 36 In this study, we also established the interrelation between formation water chemistry and gas preservation based on a decision tree model. This work is expected to be applicable to the exploration of tight sandstone gas reservoirs in the Ordos Basin and other areas with comparable geological conditions.
GEOLOGICAL SETTING
The Ordos Basin is located in central China (Figure 1), west of the Lvliang Shan Mountains and east of the Helan Shan Mountains. 37 According to the results of tectonic evolution during the geological period, the Ordos Basin can be divided into six main tectonic units. It is worth noting that the Yishan slope is a gentle, west-inclined slope (Figure 1). 38 The slope and the adjacent Tianhuan Depression are considered large natural gas reservoirs in the basin. Influenced by the Caledonian movement, the Ordos Basin was uplifted as a whole and experienced 150 million years of weathering and erosion, resulting in the loss of strata from the late Ordovician to the Early Carboniferous. It was not until the Late Carboniferous that new deposits were received. From the Carboniferous to the Permian, the Ordos Basin experienced relatively stable deposition, forming extensive coal-measure source rocks and sandstone reservoirs. During the Triassic to Middle Jurassic period, rapid subsidence of the strata led to marked compaction and further densification of the sandstone reservoirs. In the late Jurassic−early Cretaceous, with the occurrence of strong tectonic−thermal events, a large amount of natural gas was generated and migrated into the reservoirs, forming tight sandstone gas reservoirs. 30 The NT Depression is located in the northwestern Ordos Basin and covers approximately 11,000 km2 (∼4247 mi2) (Figure 1). The study area is adjacent to the Sulige gas field in the east and connected with the western fault-folded zone in the west. The upper Paleozoic strata comprise a clastic sedimentary system of marine−continental transitional facies. 39 The Permian strata comprise, in ascending order, the Taiyuan Formation (P1t), the Shanxi Formation (P2s), the Lower Shihezi Formation, and the Upper Shihezi Formation (P2h), with a total sedimentary thickness of about 500 m. The source rocks consist of coal seams and mudstone of the Taiyuan and Shanxi Formations, which are characterized by extensive hydrocarbon generation. 40 The tight sandstone gas originates from the Shanxi and Lower Shihezi Formations. 38 The 1st to 7th Members (He 1−He 7) of P2h are the monolithic caprocks. The main gas-producing strata are the Shan1 Member and the He8 Member, which are in conformable contact (Figure 2).
Sixty water samples from exploratory wells in the He8 Member were investigated. Note that natural gas generated from the Permian source rocks first entered the Shan1 Member, displaced the formation water originally present there, and accumulated in favorable traps; the Shan1 Member is therefore generally gas-bearing owing to its near-source advantage. Natural gas then continued to migrate upward into the He8 reservoir, where the gas supply was insufficient and the formation water was only partially displaced, forming typical gas−water layers. The He8 Member is therefore one of the most interesting strata in which to study formation water and the gas−water relationship in the Permian. Although hundreds of reserve and environmental surveys have been completed over the past few decades, studies of the basin's formation water are still limited. Furthermore, fractures caused by tectonic movement have partly destroyed trap effectiveness, and late diagenetic processes (such as compaction and cementation) have changed the fluid seepage channels. All of these factors affect not only gas accumulation but also the geochemical composition of the formation water, which is therefore an essential indicator of hydrocarbon preservation.
Experimental Procedure.
From 20 Feb to 26 Apr 2021, we collected 60 brine samples from 45 exploration wells in the He8 Member. First, the wellhead valve was opened for 5 h to ensure complete flushing of gas and associated wastewater (drilling fluid and wellbore sewage). The He8 water was then sampled using a bottom-hole sampling device. All water samples were filtered through 0.45 μm membrane filters. Brine samples were collected in 50 mL polypropylene bottles, which were sealed with Parafilm until the samples were analyzed.
The composition of the 60 brine samples was analyzed at the State Key Laboratory of Continental Dynamics (Northwest University, China). Acid titration was applied to determine the bicarbonate content; major anions (Cl−, SO42−, HCO3−) were measured by ion chromatography (Dionex ICS-1500); cations (Ca2+, Mg2+, Na+, K+) were determined by AAS with a Perkin-Elmer Zeeman 5000; pH was measured in the field with an industrial pH meter (MT-5000); and the total dissolved solids (TDS) were measured by the evaporation method. The charge balance shows a small deviation from the test results, caused by the instruments' measurement accuracy and by some trace ions being ignored. We therefore introduced the charge-balance error and calculated this deviation. The results showed that the deviation was less than 1%, indicating that the accuracy of the measurements fully meets the needs of the subsequent analysis.
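The charge-balance check described above is a standard quality-control calculation. The sketch below is a minimal, hand-written illustration of it, assuming the conventional charge-balance-error formula on milliequivalent concentrations; the example concentrations are hypothetical and only loosely modeled on the ranges reported for the He8 brines.

```python
# Minimal sketch of a charge-balance error (CBE) check for a water analysis.
# Assumes the standard CBE formula on meq/L; example concentrations are hypothetical.

# Molar masses (g/mol) and charges of the ions analyzed in this study.
IONS = {
    "Na+":    (22.99, +1), "K+":     (39.10, +1),
    "Ca2+":   (40.08, +2), "Mg2+":   (24.31, +2),
    "Cl-":    (35.45, -1), "SO4^2-": (96.06, -2),
    "HCO3-":  (61.02, -1),
}

def charge_balance_error(mg_per_l: dict) -> float:
    """CBE in percent: (cations - anions) / (cations + anions) * 100,
    after converting each concentration from mg/L to meq/L."""
    cations = anions = 0.0
    for ion, conc in mg_per_l.items():
        molar_mass, charge = IONS[ion]
        meq = conc / molar_mass * abs(charge)  # mg/L -> mmol/L -> meq/L
        if charge > 0:
            cations += meq
        else:
            anions += meq
    return (cations - anions) / (cations + anions) * 100.0

sample = {"Na+": 7300, "K+": 150, "Ca2+": 2500, "Mg2+": 258,
          "Cl-": 16000, "SO4^2-": 300, "HCO3-": 360}
print(f"CBE = {charge_balance_error(sample):.2f}%")
```

An absolute CBE below a few percent is usually taken as acceptable, consistent with the less-than-1% deviation reported here.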
Machine Learning.
To obtain the correlation between the geochemical composition of formation water and hydrocarbon distribution, a decision tree (DT) method is introduced for geological modeling. Note that such machine learning approaches have greatly boosted natural gas exploitation over the past decade. 42,43 DTs are algorithms that classify samples by their attribute values. In principle, DTs are nonparametric supervised learning machines for classification and regression. 33 A DT is a hierarchical structure that includes branches and nodes at each level of the tree. For DTs, the splitting criteria are defined by the optimal value of measures such as information gain, information gain ratio, and the Gini coefficient. Considering that input variables with more categories have more chances to become the current best node than those with fewer categories (for example, the value range of salinity is much larger than that of pH, so salinity has more chances to be the best node than pH), information gain appears to be biased. The Gini coefficient compensates for this deficiency, is fast to compute, and does not require a logarithm. Therefore, for the study area, we specially designed a decision tree analysis process based on the brine geochemical data set with the minimum Gini coefficient as the node criterion (Figure 3), as follows: (1) Data preparation: prepare the formation water parameters analyzed in the experiment. The Gini coefficient was used to evaluate the weighting of each geochemical index. 44 The Gini coefficient measures the impurity of the samples (eq 1); the smaller the Gini index, the higher the classification purity. We can determine the node by selecting the attribute with the smallest Gini index (eq 2):

Gini(D) = 1 − Σk Pk^2 (eq 1)

Gini_index(D, a) = Σv (|Dv|/|D|) Gini(Dv) (eq 2)

Based on the rules of DTs, data set D is divided into V node data sets. Pk represents the proportion of the kth class in the current node data, and Dv represents the vth node. According to the output of the Gini-coefficient weighting algorithm, we select the attribute with the smallest Gini index in each branch as the splitting node. Eventually, the DT establishes the correlation model between the brine geochemical parameters and gas preservation.
4.1.1. TDS. The TDS of the formation water samples is summarized in Table 1. The main distribution range was 22 to 46 g·L−1, with an average of 32.54 g·L−1. The comparison shows that the TDS of the formation water is significantly higher than that of surface water (usually about 0.1 g·L−1), and 62% of the formation water samples are also higher than seawater (30 g·L−1). The TDS indicates that water−rock reaction has taken place in the study area to a certain extent, that evaporation and concentration are strong, and that the formation water environment is generally closed. Such a well-sealed hydrological environment is conducive to natural gas preservation. 45
4.1.2. Ion Concentration. In tight sandstone gas reservoirs, the ion composition of formation water has undergone a complex and long evolution, like that of the hydrocarbons. 46 During hydrogeological development stages such as deposition, burial, and metamorphism, the water type and chemical characteristics change, which also affects the migration, accumulation, and preservation of hydrocarbons. The Cl− content in the formation water samples is the highest, followed by Na+, Ca2+, SO42−, HCO3−, and Mg2+ (Figure 5). The Cl− content was in the range of 5120−38,000 mg·L−1 and accounted for 91% of the total anions on average (Table 1).
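The anion percentages quoted above (and in the following paragraph) can be computed from the measured concentrations. The sketch below assumes the percentages are taken on a milliequivalent basis, a common hydrochemical convention that the text does not state explicitly; the example concentrations in mg·L−1 are hypothetical.

```python
# Percent of total anions contributed by each species, on a meq/L basis.
# Assumes meq-based percentages; the example concentrations (mg/L) are hypothetical.
ANIONS = {"Cl-": (35.45, 1), "SO4^2-": (96.06, 2), "HCO3-": (61.02, 1)}

def anion_percentages(mg_per_l: dict) -> dict:
    """Convert mg/L to meq/L and return each anion's share of the total, in %."""
    meq = {ion: conc / ANIONS[ion][0] * ANIONS[ion][1]
           for ion, conc in mg_per_l.items()}
    total = sum(meq.values())
    return {ion: 100.0 * v / total for ion, v in meq.items()}

print(anion_percentages({"Cl-": 16000, "SO4^2-": 300, "HCO3-": 360}))
# Cl- dominates the anion budget, consistent with ~91% on average in this study.
```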
The content of HCO3− ion was in the range of 8−1660 mg·L−1, with an average of 360 mg·L−1, which accounted for 1.73% of the total anions on average. The content of SO42− ion was also low, reflecting that the water environment is closed and anoxic. The main cations were Na+, Ca2+, and Mg2+; the Na+ ion content ranged from 650 to 13,000 mg·L−1, with an average of 7300 mg·L−1, while the Mg2+ ion content was low, with an average of 258 mg·L−1.
Note that the Ca2+ concentration is higher than in normal sedimentary water due to dissolution of calcium-bearing minerals, while the Na+ and Mg2+ concentrations are relatively low, indicating strong water−rock reaction (dolomitization or dehydration of clay minerals). Cl− is absolutely dominant, mainly owing to the ubiquitous NaCl in the original seawater. 47 The formation water is of the CaCl2 type, characteristic of completely isolated formation water, indicating that it is original sedimentary water that has undergone strong metamorphism.
pH.
The formation water samples show pH values ranging from 5.5 to 7.1, with an average of 6.5. The formation water is slightly acidic due to organic acid fluids associated with hydrocarbon generation, which is common in Permian oil and gas reservoirs in the Ordos Basin.
Ion Ratio Parameters.
In addition to the basic parameters such as salinity and pH mentioned above, ion ratio parameters of formation water are also commonly used to characterize formation water. 6 These parameters usually refer to the sodium−chloride coefficient, the desulfurization coefficient, the magnesium−calcium coefficient, and the metamorphism coefficient, which can better reflect the specific water environment and indicate hydrocarbon migration and preservation information.
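As a concrete illustration of how the four coefficients discussed in the following subsections are derived from measured ion data, the sketch below computes them for one hypothetical sample. It assumes the ratios are taken on milliequivalent (meq/L) concentrations; the exact convention used in this study is not stated, so treat the function as illustrative only.

```python
# Hypothetical sketch: the four ion-ratio parameters from concentrations in meq/L.
# Assumes ratios are computed on meq/L values (a common, but here unstated, convention).

def ion_ratios(na, cl, mg, ca, so4):
    return {
        "Na+/Cl-":        na / cl,            # sodium-chloride coefficient
        "Mg2+/Ca2+":      mg / ca,            # magnesium-calcium coefficient
        "SO4x100/Cl-":    so4 * 100.0 / cl,   # desulfurization coefficient
        "(Cl--Na+)/Mg2+": (cl - na) / mg,     # metamorphic coefficient
    }

# Hypothetical He8-like brine (meq/L).
print(ion_ratios(na=317.0, cl=451.0, mg=21.0, ca=125.0, so4=6.0))
```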
4.2.1. Sodium/Chloride (Na+/Cl−) Ratio. The Na+/Cl− ratio can be used to reflect the degree of metamorphism of formation water and to evaluate strata sealing. All the sample points lie above the Seawater Evaporation Trend (SET) line and the Sea−River Water Evaporation Trend (SRET) line, showing a state of sodium loss (Figure 6a). Note that the He8 Member consists of shallow-water delta deposits, so using the joint line of SET and SRET (hereinafter referred to as the joint line) to trace the evolution of the formation water is very useful. 5 Sodium loss (deviation from the joint line) reflects the albitization of potash feldspar and Na+ adsorption by clay minerals, in addition to gains or losses with water. In terrestrial sediments, the higher the sodium−chloride coefficient, the more the formation water is affected by infiltrating water, and the more adverse this is to the preservation of hydrocarbons. The upper limit of the sodium−chloride coefficient is generally considered to be 0.7. The Na+/Cl− ratio in the western area is high, which is not conducive to natural gas preservation (Figure 7). In contrast, the lower Na+/Cl− ratio in the middle and eastern parts indicates that the formation water there is less affected by external water and that the formation sealing is better.
4.2.2. Magnesium/Calcium (Mg2+/Ca2+) Ratio.
Mg2+ and Ca2+ are consumable ions in formation water. The formation water samples show that Mg2+ is depleted (above the joint line), while Ca2+ is enriched (below the joint line) (Figure 6b,c). Mg2+ consumption is related to chlorite precipitation (eq 3) and dolomitization (eq 4), while Ca2+ consumption is mostly related to calcite and laumontite precipitation. 48 Against this background of large Mg2+ and Ca2+ consumption, dissolution of calcite, laumontite, and other calcium-bearing minerals increases the Ca2+ content and enriches Ca2+ in the formation water relative to Mg2+. Dissolution can also improve the pore structure of the strata to a certain extent. Therefore, the Mg2+/Ca2+ ratio can be used to characterize the degree of development of secondary pores: the smaller this value, the more developed the secondary pores. The central region is a favorable place for hydrocarbon storage because of its low Mg2+/Ca2+ ratio and strong dissolution (Figure 8).
4.2.3. SO42− × 100/Cl− Ratio. The concentration of SO42− is related to the intensity of desulfurization and to redox conditions, and desulfurizing bacteria play a decisive role in controlling the concentration of SO42− in formation water. 49 In deep oil and gas reservoirs, sulfates are converted into sulfides by desulfurizing bacteria in the water, so the SO42− content decreases and the desulfurization coefficient decreases. Theoretically, the smaller the desulfurization coefficient, the better the formation sealing and the higher the degree of reduction of the formation water, which is more conducive to gas accumulation. 50 The desulfurization coefficient in the middle and eastern region is less than 6, obviously lower than that of present-day sea and river water (about 12 51), reflecting good formation sealing in this region, which is conducive to the preservation of hydrocarbons (Figure 9).
4.2.4. Metamorphic Coefficient ((Cl−-Na+)/Mg2+).
The metamorphic coefficient, an important parameter for evaluating the degree of water−rock reaction, reflects the level of cation exchange during mineral dissolution and precipitation. 52 Where formation sealing is better, the higher the (Cl−-Na+)/Mg2+ ratio, the more favorable it is to the preservation of oil and gas reservoirs.
The (Cl−-Na+)/Mg2+ ratio of the He8 Member ranges from 12 to 62, with an average of 37.9, and the metamorphic coefficients are higher toward the east−northeast. This can be interpreted as reflecting the lack of fracture development in the He8 Member in this region and the absence of chemical exchange with adjacent mobile aquifers, resulting in strong water−rock interaction: Na+ and Mg2+ are mainly replaced by Ca2+, which raises the metamorphic coefficient. In contrast, the metamorphism in the western to southern part of the study area is weak (Figure 10), which is not conducive to oil and gas storage.
Correlation between Hydrocarbon Preservation and Hydrogeochemical Properties. 4.3.1. Decision Tree Model.
First, we randomly selected 42 formation water samples (70% of the total) for model training and prepared the geochemical parameters Cl−, SO42−, HCO3−, Na+, Ca2+, Mg2+, TDS, and pH, as well as the ion ratio parameters Na+/Cl−, Mg2+/Ca2+, SO42− × 100/Cl−, and (Cl−-Na+)/Mg2+, as candidate nodes for the calculation. Subsequently, the Gini coefficient was used as the node discrimination criterion. To assess the accuracy of the model, the remaining 30% of the samples were used as test samples. To select the optimal model, 70% of the original data were again selected for training and 30% for testing (this division process was repeated 100 times), and the experimental procedures and results were checked and compared. The model showed high reliability, and the correlation between formation water and gas preservation was higher than 83%. Clearly, the (Cl−-Na+)/Mg2+ ratio takes the highest weight in the first decision step, followed by the Na+/Cl− ratio, the Mg2+/Ca2+ ratio, and the SO42− × 100/Cl− ratio (Figure 11). Each internal node was further selected, and the weight was identified with the smallest Gini coefficient. The decision tree then established correlations between the key geochemical parameters and the gas/water distribution (Table 2). The (Cl−-Na+)/Mg2+ and Na+/Cl− ratios are therefore especially critical: the geological environment of class (i) is highly correlated with gas reservoir preservation, and production data also indicate that these strata produce gas. Semisealed reservoirs (ii and iii) typically produce both gas and water simultaneously. Other reservoirs in the study area are not suitable for gas accumulation.
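A minimal reproduction of the training protocol described above might look like the following sketch: repeated random 70/30 splits, a Gini-based decision tree (eqs 1 and 2), and accuracy averaged over the repeats. It assumes scikit-learn; the feature matrix and labels are random placeholders standing in for the 60-sample geochemical table, so it illustrates only the procedure, not the published accuracies.

```python
# Sketch of the repeated 70/30 train/test protocol with a Gini-based tree.
# Assumes scikit-learn; X holds the 12 geochemical parameters listed above and
# y the gas/water preservation classes -- both random placeholders here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))      # placeholder for the 60 brine samples
y = rng.integers(0, 4, size=60)    # placeholder preservation classes

accuracies = []
for seed in range(100):            # the division process repeated 100 times
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    tree = DecisionTreeClassifier(
        criterion="gini",          # smallest Gini index chooses each node (eqs 1-2)
        min_samples_leaf=3,        # ~5% of 60 samples, to curb overfitting
        random_state=seed)
    tree.fit(X_tr, y_tr)
    accuracies.append(tree.score(X_te, y_te))

print(f"mean accuracy over 100 splits: {np.mean(accuracies):.3f}")
```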
Model Validation.
The correlation indicates that the model is not absolutely precise, but the accuracy of the decisions is more than 83% overall and 86.66% for medium reservoirs. It is worth mentioning that the accuracy of the model is as high as 92.43% for discriminating good gas preservation, which is of great significance for the exploitation of tight sandstone gas reservoirs.
To verify the validity of the model, we selected four new development wells that were put into production in December 2021. Note that these four wells are not included in the training data set. As can be seen from the well profile (Figure 12), because the formation is relatively flat and the reservoir tight, the small structural elevation difference has little influence on gas migration. The chemical properties of the formation water from these wells differ (Table 3), indicating that they are not in the same structural unit and do not interfere with each other. The decision tree model results show that wells T-10 and T-12 are favorable for gas preservation, well T-11 is semiclosed for gas preservation, and well T-13 is unfavorable for gas preservation. The production data show that the formation water chemistry of well T-12 is in line with class (i), good gas storage, with a daily gas production of 8.97 × 10^4 m^3. The practical exploration results thus verify that the model has high reliability. In addition, the correlation between ion ratio parameters and gas saturation shows that gas saturation is negatively correlated with the Na+/Cl− ratio, the Mg2+/Ca2+ ratio, and the SO42− × 100/Cl− ratio (Figure 13a−c) and positively correlated, to a certain degree, with the (Cl−-Na+)/Mg2+ ratio (Figure 13d), indicating that the ion parameters of formation water are closely related to the preservation of natural gas and can well reflect the accumulation and preservation of gas reservoirs.
The formation water of tight sandstone reservoirs is mainly controlled by two mechanisms: tectonic movement and diagenetic evolution. 41 It should be noted that when a gas reservoir is subjected to uplift and denudation or to deep, large-scale faulting, the geochemical parameters of the formation water will be affected; moreover, such a situation is fatal to the effective accumulation of the gas reservoir and may lead to gas escape, resulting in large errors in the established model. Therefore, the tight sandstone gas reservoirs referred to in this paper are only those in sedimentary basins with relatively stable subsidence and without deep faults or serious denudation. In addition, because of variability among the actual samples, the machine learning model has some error; this error is expected and reasonable. If left unconstrained, a decision tree is prone to overfitting, splitting indefinitely into ever more leaf nodes that may be invalid because they contain too few samples; therefore, when a class of leaf nodes contained less than 5% of the total samples, further splitting of the node was stopped to prevent overfitting. It is to be expected that this work will provide guidance for the exploration of tight sandstone gas in the Ordos Basin and elsewhere, and the accuracy of the predictions will increase as machine learning algorithms are improved in the future.
CONCLUSIONS
We investigated the geochemical properties of formation water in the northern Tianhuan Depression in the Ordos Basin, China. The results indicated that the geochemical characteristics vary significantly within the He8 Member. The formation water is original sedimentary water in tight sandstone reservoirs; its genesis is related to evaporation-concentration and water−rock interaction, and it has experienced intensive concentration and metamorphism. The main conclusions of this study are summarized as follows: (1) The geochemical characteristics showed that CaCl2 water is the main water type in the tight sandstone gas reservoir, which is isolated sedimentary water with good sealing properties. The Na+ concentration loss is due to the albitization of potash feldspar and Na+ adsorption by clay minerals; Mg2+ consumption is related to chlorite precipitation and dolomitization; and the Ca2+ increase is mostly related to dissolution of calcite, laumontite, and other calcium-bearing minerals.
(2) The geochemical characteristics and distribution of the formation water were analyzed, including TDS, pH, ion concentrations, and ratio parameters such as the Na+/Cl− ratio, Mg2+/Ca2+ ratio, SO42− × 100/Cl− ratio, and (Cl−-Na+)/Mg2+ ratio, which can indicate sealing conditions and gas preservation ability. Based on machine learning algorithms, the Gini coefficient was selected as the classification criterion of the decision tree model, and a correlation between formation water geochemistry and natural gas production was confirmed.
Figure 9. Comparison of SO42− × 100/Cl− ratios and gas/water production distribution in the study area.
(3) The model shows that the metamorphism coefficient has the best indicative effect on the preservation of gas reservoirs. The hydrological environment with (Cl−-Na+)/Mg2+ > 50.066, Na+/Cl− ≤ 0.476, and Mg2+/Ca2+ ≤ 0.102 is a good hydrocarbon accumulation area. The model was also successfully validated in new development wells. Most importantly, formation water geochemistry clearly correlates with hydrocarbon preservation, and this
Figure 10. Comparison of (Cl−-Na+)/Mg2+ ratios and gas/water production distribution in the study area.
Figure 11. DT model of formation water geochemical characteristics and gas reservoir preservation. Seventy percent of the training data used for this model are samples 1−42 in Table 1, and 30% of the test data are samples 43−60.
Figure 12. Gas and water well profiles (wells T-10 to T-13) for validation (the location of the connecting well line is shown in Figure 7).
Figure 13. The relationship between ion ratio parameters and gas saturation: (a) Na+/Cl−−gas saturation, (b) Mg2+/Ca2+−gas saturation, (c) SO42− × 100/Cl−−gas saturation, and (d) (Cl−-Na+)/Mg2+−gas saturation. | 2022-10-11T17:15:42.916Z | 2022-10-06T00:00:00.000 | {
"year": 2022,
"sha1": "0d3dcdacbe7ca97e42e7a918398da2df21641fbb",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "a00ef26cd8808019c28f757fc377f2cee360715a",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3603538 | pes2o/s2orc | v3-fos-license | Nutritional intra-amniotic therapy increases survival in a rabbit model of fetal growth restriction
Objective To evaluate the perinatal effects of a prenatal therapy based on intra-amniotic nutritional supplementation in a rabbit model of intrauterine growth restriction (IUGR). Methods IUGR was surgically induced in pregnant rabbits at gestational day 25 by ligating 40–50% of the uteroplacental vessels of each gestational sac. At the same time, a modified-parenteral nutrition solution (containing glucose, amino acids and electrolytes) was injected into the amniotic sac of nearly half of the IUGR fetuses (IUGR-T group, n = 106), whereas sham injections were performed in the rest of the fetuses (IUGR group, n = 118). A control group without IUGR induction but with sham injection was also included (n = 115). Five days after the ligation procedure, a cesarean section was performed to evaluate fetal cardiac function, survival and birth weight. Results Survival was significantly improved in the IUGR fetuses treated with intra-amniotic nutritional supplementation as compared to non-treated IUGR animals (survival rate: controls 71% vs. IUGR 44%, p = 0.003; IUGR-T 63% vs. IUGR 44%, p = 0.02), whereas birth weight (controls mean 43 g ± SD 9 vs. IUGR 36 g ± SD 9 vs. IUGR-T 35 g ± SD 8, p = 0.001) and fetal cardiac function were similar among the IUGR groups. Conclusion Intra-amniotic injection of a modified-parenteral nutrient solution appears to be a promising therapy for reducing mortality in IUGR. These results provide an opportunity to develop new intra-amniotic nutritional strategies to reach the fetus by bypassing the placental insufficiency.
Introduction
Intrauterine growth restriction (IUGR) is generally defined as a significant reduction in fetal growth rate resulting in a birth weight in the lowest 10th percentile. It affects 7-10% of all pregnancies [1] and is considered a major contributor to perinatal morbidity and mortality, responsible for about 20-50% of perinatal deaths. It is also associated with worse short- and long-term outcomes, such as increased prevalence of intrapartum distress, neonatal complications [2], suboptimal neurodevelopment [3,4] and cardiovascular disease [5,6]. Currently, there is no effective nutritional therapy to improve fetal growth or to ameliorate the adverse outcomes associated with IUGR [7][8][9][10]. Thus, the assessment of fetal well-being and timely delivery remain the main management strategy, weighing the risk of fetal injury/stillbirth against the risks of iatrogenic preterm delivery. Placental insufficiency is the most common cause of IUGR, in which nutrient transport to the fetus is compromised [11]. To date, several studies in humans [10,[12][13][14][15] and animals [16][17][18] testing diverse therapies administered to the mother have failed to demonstrate a substantial improvement in fetal outcomes related to placental insufficiency. The lack of effectiveness of these maternally targeted therapies can most probably be explained by the failure of nutrient transport between the mother and the fetus in the presence of placental disease [19][20][21][22]: the administered therapies cannot cross the placenta and reach the fetus. Direct nutrient supply to the fetus could theoretically overcome this problem by bypassing the placenta. However, previous studies that attempted to supply carbohydrates, growth factors or amino acid mixtures through trans-amniotic catheter insertion or direct fetal injections led to inconclusive results [23][24][25][26][27][28][29][30]. Moreover, most studies used a single-nutrient approach with invasive trans-amniotic placement of a catheter for several days. We now hypothesized that the administration of a complete nutrient composition (combining essential nutrients such as glucose, amino acids and electrolytes) in a single intra-amniotic injection could improve the outcomes of IUGR. We planned to administer this complete nutrient composition by intra-amniotic injection, relying on the capacity of the fetus to swallow amniotic fluid, whereby essential nutrients delivered intra-amniotically reach the gastrointestinal tract and are absorbed [31,32], potentially compensating for the nutrient deficiency caused by placental insufficiency.
In this study, we used a rabbit model of placental insufficiency to test the hypothesis that intra-amniotic nutrient delivery would improve the perinatal outcome of IUGR fetuses, by analyzing survival, birth weight and fetal cardiac remodeling.
Animals and experimental procedure
The study is reported according to the ARRIVE guidelines [33] for reporting in vivo experiments. Animal handling and all procedures were performed in accordance with applicable regulations and guidelines and with the approval of the Animal Experimental Ethics Committee of the University of Barcelona (Permit no. 250/15). All efforts were made to reduce both animal suffering and the number of animals used.
Thirty-eight time-mated, 24-month-old New Zealand White pregnant rabbits were provided by a certified breeder on the 18th day of gestation (full term is approximately 31 days). Dams were housed in separate cages on a reversed 12/12 h light cycle, with free access to water and standard chow. On the 25th day of gestation, IUGR was induced surgically by uteroplacental vessel ligation and intra-amniotic injections were performed. On the 30th day of gestation, an abdominal incision was made and the uterine horns were exteriorized to perform fetal echocardiography in a subgroup of fetuses. Subsequently, fetuses were delivered by cesarean section. The experimental design and timeline are shown in Fig 1, and all the procedures are detailed in the following sections.
Rabbit model of IUGR and therapy administration
On gestational day 25, an abdominal midline laparotomy was performed and both uterine horns were exteriorized under endovenous anesthesia with Ketamine (Ketolar® 50 mg/ml, Pfizer, 10 mg/kg) and Xylazine (Rompun® 2%, Bayer, 3 mg/kg). Gestational sacs of both horns were counted and numbered, and each fetus was identified according to its position within the bicornuate uterus. Prior to surgery, each uterine horn was randomly allocated to a group (control, IUGR or IUGR-T) based on a computer-generated randomization sequence. As each dam has two uterine horns, in order to obtain three experimental groups (control, IUGR and IUGR-T), the horns of each dam were assigned to a paired combination of these groups, resulting in three combinations (control and IUGR, control and IUGR-T, or IUGR and IUGR-T). Based on the ligation and nutrient injection, the experimental groups were: control (no IUGR induction and sham injection, n = 115), IUGR (IUGR induction and sham injection, n = 118) and IUGR-T (IUGR induction and therapy administration, n = 106). IUGR was surgically induced by ligation of 40-50% of the uteroplacental vessels of the assigned gestational sacs [34]. In addition, 300 μl of modified-parenteral nutrition solution (see Table 1 for composition details: a complete mixed composition containing glucose, amino acids and electrolytes, but excluding lipids, based on previous evidence of respiratory insufficiency in fetuses who received trans-amniotic lipid emulsion [26]) was injected into the amniotic sac of the IUGR-T fetuses via a 25G needle (B.Braun Sterican®). The IUGR and control groups received a needle puncture without administration of any substance to the amniotic sac (sham injection) (Fig 1). Buprenorphine (Buprex injectable, 0.3 mg/ml; Schering-Plough, Madrid, Spain) was used as post-operative medication: the dams received a single dose of Buprenorphine (0.01-0.05 mg/kg) subcutaneously after the induction of IUGR, and it was administered orally, diluted in the drinking water, during the first 48 hours after the operation (0.03 ml/5 kg/8 h).
Fig 1. In pregnant rabbits at the 25th day of gestation, IUGR was surgically induced by uteroplacental vessel ligation and intra-amniotic injections were performed. Control fetuses did not undergo vessel ligature and received a sham injection; IUGR fetuses underwent uteroplacental vessel ligature and sham injection; and IUGR-T fetuses underwent uteroplacental vessel ligature and therapy administration, i.e., intra-amniotic injection of 300 μl of modified-parenteral nutrient solution. At the 30th day of gestation, cesarean section was performed and the uterine horns were exteriorized to perform fetal echocardiography in a subgroup of fetuses from each experimental group. The fetuses were then taken out for survival assessment and biometric measurements and then sacrificed for tissue sampling. Black arrows and yellow circles indicate ligated uteroplacental vessels of fetal sacs in the uterine horn, the yellow arrow indicates intra-amniotic injection into the fetal sac, and the red arrow indicates the ultrasound transducer.
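The paired randomization of uterine horns described above can be sketched as follows. This is only an illustration of the allocation scheme; the actual randomization sequence used in the study is not reported.

```python
import random

random.seed(42)  # illustrative seed; the study's actual sequence is not reported
PAIRS = [("control", "IUGR"), ("control", "IUGR-T"), ("IUGR", "IUGR-T")]

def allocate_horns(n_dams):
    """Assign each dam's two uterine horns to one random paired combination."""
    allocation = []
    for dam in range(1, n_dams + 1):
        left, right = random.choice(PAIRS)
        if random.random() < 0.5:  # also randomize which horn gets which group
            left, right = right, left
        allocation.append((dam, left, right))
    return allocation

for dam, left, right in allocate_horns(38)[:3]:
    print(f"dam {dam}: left horn -> {left}, right horn -> {right}")
```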
Fetal echocardiography and cesarean section
At 30 days of pregnancy, an abdominal midline laparotomy was performed and the uterine horns were exteriorized under endovenous anesthesia with Ketamine (Ketolar® 50 mg/ml, Pfizer, 10 mg/kg) and Xylazine (Rompun® 2%, Bayer, 3 mg/kg). Fetal echocardiography was then performed in a subgroup of fetuses using a Vivid q ultrasound system (GE Healthcare, Little Chalfont, Buckinghamshire, UK) with an i12L-RS linear transducer placed directly on each exteriorized gestational sac. Cardiac area, thoracic area, ventricular base-to-apex length and transverse diameters, and septal myocardial wall thickness were measured at end-diastole from a 2D image. The cardio-thoracic ratio was then calculated by dividing cardiac area by thoracic area. Ventricular sphericity indexes were calculated as base-to-apex length divided by transverse diameter. Heart rate was also measured using Doppler applied to the left outflow tract. Immediately after echocardiography, all live and stillborn fetuses were obtained by uterine horn incision and weighed. Dams were sacrificed by endovenous overdose of sodium pentobarbital (200 mg/kg) immediately after fetal extraction. All living newborns were sacrificed by immediate decapitation. Survival rate was determined as the ratio of live fetuses at the time of the cesarean section to all viable fetuses at the time of the ligature procedure. Intestine samples were collected after delivery for subsequent analysis.
Sampling and analysis of fetal intestine
After sacrificing the fetuses, a one-centimeter tissue sample was collected from the proximal small intestine and fixed with 4% paraformaldehyde in PBS for 24 h at 4°C. Fixed intestinal samples were embedded in paraffin to obtain 5 μm sections, which were stained with hematoxylin and eosin. Histology images were acquired using a microscope (Leica, Bannockburn, IL) and software (Leica Application Suite, version 3.4). Quantification of intestine diameter, villus height, and muscular and sub-mucosal layer thickness was performed using ImageJ software (http://rsbweb.nih.gov/ij) in order to evaluate intestinal structure.
Statistical analysis
The STATA 14.0 package was used for statistical analyses. Qualitative variables were compared by Pearson's chi-square test. Normal distribution of quantitative variables was assessed by the Shapiro-Wilk test. Normally distributed variables were expressed as mean and standard deviation and analyzed by one-way ANOVA followed by Bonferroni's multiple comparison post hoc test. Non-normally distributed parameters were reported as median and interquartile range and compared by the non-parametric Kruskal-Wallis test. Statistical significance was declared at p<0.05.
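A Python analogue of this pipeline (the study itself used STATA 14.0) is sketched below, assuming SciPy and statsmodels; the group arrays are simulated from the means, SDs and surviving group sizes reported in the Results, so the printed p values are illustrative only.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# Simulated birth weights from the reported means/SDs and surviving group sizes.
groups = {"control": rng.normal(43, 9, 82),
          "IUGR":    rng.normal(36, 9, 52),
          "IUGR-T":  rng.normal(35, 8, 67)}

normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups.values())
if normal:
    _, p = stats.f_oneway(*groups.values())           # one-way ANOVA
    names = list(groups)
    raw = [stats.ttest_ind(groups[a], groups[b]).pvalue
           for i, a in enumerate(names) for b in names[i + 1:]]
    adj = multipletests(raw, method="bonferroni")[1]  # Bonferroni post hoc
    print("ANOVA p =", p, "; adjusted pairwise p =", adj)
else:
    _, p = stats.kruskal(*groups.values())            # non-parametric fallback
    print("Kruskal-Wallis p =", p)
```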
Intra-amniotic nutrient supplementation increases IUGR survival with no improvement in birth weight
A total of 339 fetuses were obtained (115 control, 118 IUGR and 106 IUGR-T fetuses, respectively) from 38 dams. Of these 339 fetuses, 201 were alive at the day-30 cesarean section (82 controls, 52 IUGR and 67 IUGR-T). The mean litter size was 11.4 ± 2.3.
Non-treated IUGR fetuses presented a significantly lower survival rate (IUGR 44% vs. control 71%, p = 0.003) and lower birth weight as compared to controls (Figs 2A and 3). Under therapy, however, IUGR-T fetuses showed a significantly higher rate of survival (IUGR-T 63% vs. IUGR 44%, p = 0.02), although birth weight was similar to that of non-treated IUGR fetuses (Figs 2A and 3). A further analysis by uterine position revealed that the birth weight of control fetuses was significantly higher than in both the IUGR and IUGR-T groups, independently of uterine position (Fig 4). As expected, fetuses in extreme positions (ovarian and cervical ends) had a higher survival rate than fetuses in intermediate positions (control 78% versus 68%; IUGR 50% versus 41%; IUGR-T 66% versus 62%; Fig 5). This observation was made in all the experimental groups. For both positions, control fetuses had significantly higher birth weight than IUGR and IUGR-T fetuses, while the birth weights of the latter two groups did not differ from each other (Fig 4).
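The group-wise survival comparison can be checked with Pearson's chi-square test on the reported counts (82/115 controls, 52/118 IUGR and 67/106 IUGR-T alive). A sketch assuming SciPy is given below; note that the p values obtained this way need not match the published ones exactly, since the original analysis may have handled litter effects or correction choices differently.

```python
from scipy.stats import chi2_contingency

counts = {"control": (82, 115 - 82),   # (alive, dead)
          "IUGR":    (52, 118 - 52),
          "IUGR-T":  (67, 106 - 67)}

for a, b in [("control", "IUGR"), ("IUGR-T", "IUGR")]:
    chi2, p, dof, _ = chi2_contingency([counts[a], counts[b]])
    print(f"{a} vs {b}: chi2 = {chi2:.2f}, p = {p:.4g}")
```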
An analysis performed for the subgroup of fetuses that weighed less than 30 grams (which corresponds to the 10th centile of the normally distributed weight at birth [3,6,[34][35][36]) revealed that a significantly higher proportion of IUGR-T animals of that weight were alive compared to IUGR animals (Fig 2B).
Intra-amniotic nutrient supplementation does not compensate for fetal cardiac adaptation
Fetal echocardiography revealed similar cardiac alterations in both IUGR and IUGR-T fetuses, with larger hearts, thicker myocardial walls and a more spherical left ventricle as compared to controls (Table 2).
Intra-amniotic nutrient supplementation ameliorates IUGR intestine structural changes
Despite the absence of any positive change in fetal cardiac function and birth weight among the treated IUGR-T fetuses, the notable improvement in survival rate in this group versus IUGR fetuses suggests that the administered nutritional supplementation was able to reach the circulation of IUGR fetuses, very likely through swallowing and intestinal absorption. Indeed, the histological analyses of the small intestine (Fig 6) provide some evidence for this hypothesis, revealing shorter villi and a less organized structure of the absorptive surface in IUGR fetuses, alterations that appear to be partially ameliorated in IUGR-T intestines (Fig 6).
Discussion
Our results support intra-amniotic injection of nutrients as a promising therapy for reducing mortality in IUGR, despite no apparent effect on birth weight. These results open opportunities for intra-amniotic nutritional strategies to reach the fetus bypassing the placenta.
The striking finding of our study is the improvement of the survival rate in IUGR fetuses receiving intra-amniotic nutritional supplementation. Our results demonstrate that intra-amniotic injection of a modified-parenteral nutrition notably reduces mortality in an animal model of placental insufficiency. In contrast, intra-amniotic nutrition was not able to ameliorate birth weight or fetal cardiac adaptation. Several studies in the early 90s also attempted to supply nutrients to the amniotic cavity, with dissimilar results. Mulvihill et al. demonstrated a positive effect on fetal growth with similar mortality by 5-day intra-amniotic continuous infusion of bovine amniotic fluid or dextrose plus amino acids in rabbits [27,28]. In contrast, Flake et al. could not demonstrate any improvement in birth weight by 6-day continuous amniotic infusion of dextrose, a dextrose-amino acid mixture or lipids in a 'natural runting' IUGR rabbit model [26]. In fact, the infusion of lipid emulsion resulted in chronic lipid aspiration and further growth retardation. Phillips et al. used 4-day continuous intra-amniotic infusion of radioactive glucose and proline to demonstrate fetal nutrient absorption but failed to show changes in survival and birth weight [30]. Buchmiller et al. showed unchanged body weight and mortality after 4-day intra-amniotic infusion of galactose [37]. Finally, in the present study, a combination of carbohydrates, amino acids and electrolytes was administered by a single amniotic injection in a rabbit model of uteroplacental vessel ligation, showing an improvement in survival despite no improvement in birth weight. Overall, the contradictory results from different studies could be explained by differences in therapy duration (single administration vs. 4-6 days of continuous infusion) and timing, type of nutrients administered (including or not including electrolytes), IUGR models (natural vs. uteroplacental ligation) and sample size. Nutrient administration by a single injection might be associated with less mortality than a more invasive procedure such as the catheter insertion required for continuous infusion over several days. While intra-amniotic lipids seem deleterious, carbohydrates and amino acids appear essential for fetal development and growth. In addition, electrolytes such as potassium, calcium and magnesium could also be essential for fetal survival by regulating nutrient uptake [38]. The use of a large sample size in a severe IUGR model with high perinatal mortality enabled us to demonstrate an improvement in survival rate among IUGR-T fetuses. We speculate that the specific mixture of glucose, amino acids and electrolytes (without lipids) administered by a single amniotic injection in the present study is enough to improve fetal nutritional status and subsequently increase survival. Our data also suggest that nutritional status is more critical than hypoxia for survival. A possible explanation for the lack of birth weight improvement in IUGR-T fetuses could be that the survival of mainly the more severely restricted animals (fetuses with birth weight between 20 and 30 grams that would otherwise have died) pulled down the mean birth weight. Another potential explanation is that a single administration of nutrients could only partially counteract the effect of placental insufficiency.
Placental insufficiency is usually associated with a complex pathophysiologic adaptation leading to nutrient and oxygen restriction of the fetus, but also to increased placental resistance inducing pressure overload on the fetal heart (which has to pump against a more resistant placenta). Most likely, intra-amniotic injection of modified-parenteral nutrition ameliorates the critical fetal nutritional deficiency, but not the fetal hypoxia or pressure overload (which would explain the persistently low birth weight and fetal cardiac remodeling). Uterine horn position seems to be a relevant factor for birth weight in the rabbit model. Bautista and colleagues [39] demonstrated that animals closer to the extremities of the uterine horn had higher weight and survival compared to animals in intermediate positions. Consistent with the results reported by Bautista et al., we also found that fetuses in the extreme positions had significantly higher birth weight than fetuses in the intermediate positions in all groups (Fig 4). Moreover, the birth weight differences between subgroups were significant and consistent with our results for the whole population (control fetuses had significantly higher birth weight than IUGR and IUGR-T fetuses, while birth weight was similar in IUGR and IUGR-T), independently of position. We also observed a non-significant trend toward a higher survival rate in extreme positions (Fig 5), which is also consistent with previously reported data [39]. Taken together, the position analysis indicates that the therapy is effective in counteracting IUGR, independent of uterine position.
The present study also showed that IUGR induction by uteroplacental vessel ligation had a negative impact on gut structure, which seems to be ameliorated by intra-amniotic injection of nutrients. Similarly, previous studies showed improved small intestine growth in IUGR animals given esophageal infusions of nutrients in fetal rabbits and fetal sheep [28,29], demonstrating the nutritive value of fetal swallowing for the fetal intestine. In addition, previous data suggest that intra-amniotically infused nutrients swallowed by the fetus are transported through the gastrointestinal tract, absorbed and incorporated into fetal tissues [30], suggesting active transport of nutrients in the fetal small intestine. Taken together, our findings correlate with these studies and provide additional support for the hypothesis that intra-amniotic infusion of nutrients has a trophic effect on the fetal small intestine, which might provide an additional explanation for the increased survival of IUGR-T fetuses in our study.
We acknowledge sex as one potential limitation of our study, as the sex of the animals could not be determined at the time of birth; therefore, it was not possible to analyze differences in survival rate and birth weight by sex. As stated in the study of Tarrade et al. [40], sexual dimorphism can often be observed in rabbits in the HFD model. Further studies are needed to assess differences between male and female birth weight and survival rate in the nutritional intra-amniotic therapy model. We also acknowledge limited information on the placental weights of the newborn rabbits, so we could not calculate the fetal-placental weight ratio (F:P). Future studies are warranted to examine the impact of intra-amniotic therapies on placental development.
In conclusion, our study demonstrates that intra-amniotic nutrient supplementation remarkably increases the survival rate of IUGR fetuses, particularly among the more severely restricted animals. The use of a mixed nutritional solution containing essential carbohydrates, amino acids and electrolytes seems an appropriate approach for reducing mortality in IUGR. The findings of our study could be considered a potential advance toward fetal intervention in IUGR, raising the possibility of therapeutic strategies to improve survival, particularly in the more severely restricted fetuses. Future studies are warranted to evaluate different fetal nutritional supplementations and their effects on IUGR outcomes, in order to find the optimal mode, dose, timing of administration and nutritional composition. | 2018-04-03T05:27:31.154Z | 2018-02-21T00:00:00.000 | {
"year": 2018,
"sha1": "3d844ca067a967e60d267031dd9da1e6e992527f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0193240&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d844ca067a967e60d267031dd9da1e6e992527f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23099375 | pes2o/s2orc | v3-fos-license | In Vitro Assembly of Human Immunodeficiency Virus Type 1 Gag Protein*
Retroviral Gag protein is sufficient to produce Gag virus-like particles when expressed in higher eukaryotic cells. Here we describe the in vitro assembly reaction of human immunodeficiency virus Gag protein, which consists of two sequential steps, and define the optimal conditions for each step. Following expression and purification, Gag protein lacking only the C-terminal p6 domain was present as a monomer (50 kDa) by velocity sedimentation analysis. Initial assembly of the Gag protein into 60 S intermediates occurred by dialysis at 4 °C in low salt at neutral to alkaline pH. However, a higher order of assembly required incubation at 37 °C and was facilitated by the addition of Mg2+. Prolonged incubation under these conditions produced complete assembly (600 S), equivalent to Gag virus-like particles obtained from Gag-expressing cells. Neither form was disassembled by treatment with nonionic detergent, suggesting that correct assembly occurs in vitro. Electron microscopic observation confirmed that the 600 S assembly products were spherical particles similar to authentic immature human immunodeficiency virus particles. The latter assembly stage, but not the former, was accelerated by the addition of RNA, although it was not inhibited by RNaseA treatment. These results suggest that Gag protein alone assembles in vitro, but that additional RNA facilitates the assembly reaction.
The main structural component of the human immunodeficiency virus (HIV)1 particle, Gag, is encoded by the gag gene and is the sole protein required for the formation of Gag virus-like particles (VLPs), analogous to the immature form of authentic HIV. Accordingly, expression of Gag protein alone by recombinant viruses or by transfection of expression plasmids leads to simultaneous assembly and budding of Gag VLPs from the cell surface (1)(2)(3)(4). This process is thought to consist of several steps: N-terminal myristoylation of Gag protein, followed by targeting to the plasma membrane, self-assembly of Gag protein underneath the plasma membrane to form Gag VLPs, and budding (1,5,6). Although N-terminal myristoylation is essential for Gag targeting to the plasma membrane (1,7,8), assembly of the Gag protein itself appears not to require myristoylation, since nonmyristoylated Gag protein co-assembles with myristoylated Gag protein and is found in budded Gag VLPs (4,9,10). Furthermore, expression of Gag protein in Escherichia coli, which lacks N-myristoyltransferase activity (11), yields Gag VLP-like structures inside the cells (12) despite the lack of Gag myristoylation.
HIV Gag protein consists of four distinct domains, the N-terminal matrix (MA, p17), the central capsid (CA, p24), the nucleocapsid (NC, p7), and the C-terminal p6 domain (13), each of which is produced by processing of Gag protein during or soon after virus particle budding (14,15). HIV particles just after budding are spherical but, concomitant with Gag processing, are transformed into particles containing conical cores. The Gag regions responsible for virus particle assembly have been extensively studied by amino acid deletion and substitution experiments, and evidence has accumulated to suggest that the C-terminal third of the CA domain, including the p2 peptide located at the CA/NC junction, is essential (16-19). In contrast, most of NC and the entire p6 domain are dispensable for Gag VLP formation (1)(2)(3)20), although the NC domain contains a crucial determinant for packaging of viral genomic RNA (21)(22)(23)(24)(25). Data on the requirement of the MA domain for assembly have been conflicting. Recent studies have shown that deletion of the entire globular domain of MA does not abolish virus particle formation (5,26,27), yet the globular domain plays a key role in trimerization of MA as well as of MA-CA, suggesting a contribution of MA to authentic Gag assembly (28,29).
In contrast to these in vivo analyses, in vitro assembly of retroviral Gag protein was originally reported for Mason-Pfizer monkey virus, showing a spherical structure following renaturation of partially purified Gag protein (30), but the optimal conditions for the assembly reaction were not studied. Recently, in vitro assembly giving rise to long tubular rather than spherical structures has been reported for individual Gag domains such as CA and CA-NC (31,32). Morphological conversion of the assembly products from tubes to spheres was observed when several amino acids of MA were fused onto the N terminus of the CA domain (12,33), but unfortunately this construct was devoid of the globular domain of MA and the p2 region. More recently, in vitro assembly of Gag protein including these domains has been carried out, yielding spherical particles (34). However, the conditions for the in vitro assembly reaction appeared not to be optimized, as the in vitro assembled particles were much smaller than authentic HIV particles. To understand the authentic Gag assembly reaction, it is necessary to establish an efficient in vitro assembly reaction with a Gag protein including all the necessary domains and to determine the requirements for the assembly reaction to produce spherical particles that more closely mimic authentic HIV. Here, using Gag protein lacking only the C-terminal p6 domain, we describe an in vitro assembly reaction composed of two sequential steps: formation of a 60 S assembly intermediate and complete assembly to 600 S, equivalent to authentic Gag VLPs.
EXPERIMENTAL PROCEDURES
Materials-The E. coli expression vector pTrcHisA was purchased from Invitrogen and the metal chelate resin (HisBind Resin) from Novagen. A high molecular weight calibration kit was purchased from Amersham Pharmacia Biotech, and prestained protein molecular weight markers (low range) were from Bio-Rad. Calf liver RNA and anti-polyhistidine mouse monoclonal antibody were obtained from Sigma. Anti-HIV-1 CA mouse monoclonal antibody was provided by Dr. H. Holmes (Medical Research Council AIDS Reagent Repository, National Institute for Biological Standards and Control, Herts, UK), and 80 S ribosome was kindly supplied by Dr. K. Mizumoto (Kitasato University, Japan). Other reagents, unless otherwise specified, were commercially available and of analytical grade.
DNA Construction-The HIV-1 gag gene encoding the Gag region essential for virus particle formation (MA-CA-p2-NC), with a sequence encoding six additional histidine residues at the C terminus, was amplified by polymerase chain reaction with 5′-CGCGCCATGGGTGCGAGAGCGTCAGT-3′ and 5′-CGCGGAATTCTCAATGATGATGATGATGATGATTAGCCTGTCTCTCAGT-3′ (the ATG repeats in the latter primer encode the six histidine residues on the complementary strand). The polymerase chain reaction fragment was digested with NcoI and EcoRI and cloned into the E. coli expression vector pTrcHisA (Invitrogen).
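As a quick check of how the C-terminal His6 tag and stop codon are built into the reverse primer, the sketch below reverse-complements the primer and inspects the resulting coding strand. This is an illustration only, not part of the original cloning workflow.

```python
# Reverse-complement the antisense PCR primer to display the coding strand,
# showing six CAT (His) codons followed by a TGA stop. Illustration only.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

reverse_primer = "CGCGGAATTCTCAATGATGATGATGATGATGATTAGCCTGTCTCTCAGT"
coding = revcomp(reverse_primer)
print(coding)
# Six histidine codons followed by the stop codon appear on the coding strand:
assert "CAT" * 6 + "TGA" in coding
```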
Protein Expression and Purification-An overnight culture of transformed E. coli cells was inoculated at 1:20 and grown for 2 h at 37°C. After 1 h of induction with 1 mM isopropyl-β-D-thiogalactopyranoside, the E. coli cells were immediately chilled and harvested by centrifugation at 4°C at 8,000 × g for 15 min. The cells were resuspended in binding buffer (20 mM Tris (pH 7.9), 150 mM NaCl, and 10 mM imidazole), sonicated at 4°C for 5 min, and then lysed by addition of Nonidet P-40 to a final concentration of 0.2%. After centrifugation at 4°C at 15,000 × g for 30 min, the supernatant was subjected to metal chelate chromatography (Novagen). After washes with 25 volumes of binding buffer and with 20 volumes of wash buffer (20 mM Tris (pH 7.9), 150 mM NaCl, and 60 mM imidazole), bound protein was eluted with 5 volumes of elution buffer (20 mM Tris (pH 7.9), 150 mM NaCl, and 1 M imidazole).
In Vitro Assembly Reaction-Following chromatography, the eluted protein solution was first adjusted with EDTA to a final concentration of 2 mM (to chelate Ni2+) and then dialyzed overnight at 4°C against 20 mM Tris (pH 8.6, adjusted at room temperature), 100 mM NaCl, 0.2 mM EDTA, and 1 mM dithiothreitol (DTT), unless otherwise indicated. In some experiments, calf liver RNA or RNaseA (Sigma) was added before dialysis. For higher order or complete assembly, the dialyzed protein was incubated in the presence of 5 mM MgCl2 at 37°C for 1 or 3 h unless otherwise indicated (see text and legends for Figs. 3, 4, and 5).
Purification of HIV Gag VLP-HIV Gag VLP was purified from the culture medium of Spodoptera frugiperda (Sf9) cells infected with a recombinant baculovirus containing the HIV-1 gag gene, as described previously (10). Briefly, the Gag VLP was pelleted through a 30% (w/v) sucrose cushion and then purified by centrifugation in a 20-60% (w/v) sucrose gradient spun at 4°C at 147,000 × g overnight. The purified Gag VLP was treated with 0.5% Triton X-100 at 4°C for 30 min.
Velocity Sedimentation Analysis-Protein was applied onto a 15-30% (v/v) glycerol gradient containing 20 mM Tris (pH 8.0), 100 mM NaCl, 1 mM DTT, and 0.5 mM EDTA and sedimented at 4°C at 220,000 × g for 20 h. For higher order assembly, protein samples (multimerized Gag protein) and Gag VLP were applied onto 20-70% (w/v) sucrose gradients in phosphate-buffered saline and sedimented at 4°C at 120,000 × g for 2 h. After centrifugation, the gradients were fractionated in 200 μl aliquots from the bottom to the top. A high molecular weight calibration kit (Amersham Pharmacia Biotech) and 80 S ribosome were used as molecular weight markers for sedimentation analysis.
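An approximate S value can be assigned to a band by interpolating its migration distance against markers run in parallel. The sketch below assumes migration distance is roughly linear in S over the bracketed range (true only to first order, e.g., in near-isokinetic gradients); all marker positions and values are hypothetical placeholders.

```python
import numpy as np

# Hypothetical calibration: migration distance into the gradient vs S value
# for markers sedimented in a parallel tube. All numbers are placeholders.
marker_dist_mm = np.array([5.0, 18.0, 40.0])
marker_S = np.array([4.0, 20.0, 80.0])

def estimate_S(distance_mm: float) -> float:
    """Estimate S from migration distance via a linear fit to the markers."""
    slope, intercept = np.polyfit(marker_dist_mm, marker_S, 1)
    return slope * distance_mm + intercept

print(f"band at 30 mm: ~{estimate_S(30.0):.0f} S")
```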
Protein Detection-Protein samples were analyzed by SDS-polyacrylamide gel electrophoresis (PAGE) on 14% acrylamide gels. After electrophoresis, protein in the gel was either detected directly by Coomassie Brilliant Blue or silver staining or subjected to Western blotting (35) using anti-HIV-1 CA or anti-polyhistidine monoclonal antibody (Sigma) and anti-mouse IgG alkaline phosphatase conjugate (Cappel). The immunocomplexes were visualized using nitro blue tetrazolium and 5-bromo-4-chloro-3-indolylphosphate (Bio-Rad).
Electron Microscopic Examination-The procedure for microscopic examination was described previously (36). In vitro assembly products were collected through a 30% (w/v) sucrose cushion and fixed with 2% glutaraldehyde in 50 mM cacodylate buffer (pH 7.2) at 4°C for 2 h. After post-fixation with 1% osmium tetroxide in 50 mM cacodylate buffer (pH 7.2) at 4°C for 1 h, the pellets were embedded in epoxy resin. Ultrathin sections were stained with uranyl acetate and lead citrate and examined with an electron microscope (Hitachi H-800).
RESULTS
Purification of HIV Gag Protein-To obtain purified HIV Gag protein, the HIV-1 gag gene with the additional sequence encoding six histidine residues at the C terminus was cloned into the pTrcHisA vector and expressed in E. coli cells. After 1 h of isopropyl-β-D-thiogalactopyranoside induction at 37°C, the expressed Gag protein was purified by metal chelate chromatography. When the full-length gag gene was used, the expressed Gag protein was accompanied by some degradation (data not shown). As expression of Gag protein lacking the C-terminal p6 domain showed no degradation (Fig. 1), this construct was used in the present study. Although the concentration of the Gag protein was below 1 mg/ml (Fig. 1A, a), the protein was 95% pure as detected by silver staining (Fig. 1B, lower). The identity of the Gag protein was confirmed by Western blotting with anti-HIV-1 CA monoclonal antibody (Fig. 1A, b) and also with anti-polyhistidine monoclonal antibody (Fig. 1A, c). When the purified Gag protein was sedimented on a 15-30% (v/v) glycerol gradient directly after purification and compared with molecular weight markers sedimented in parallel, it was detected in a monomeric form (50 kDa) (Fig. 1B).
Initial Assembly-Purified monomeric Gag protein (as above) was used as the starting material for in vitro assembly experiments. The Gag protein solution was initially adjusted to 2 mM EDTA and dialyzed overnight at 4°C against 20 mM Tris (pH 8.6 adjusted at room temperature), 100 mM NaCl, 0.2 mM EDTA, and 1 mM DTT to remove excess imidazole. When the dialyzed Gag protein was subjected to velocity sedimentation analysis on a 20-70% (w/v) sucrose gradient, most of the Gag protein sedimented faster than the Gag protein before dialysis (compare Fig. 2, A and B) and had a calculated S value of 60 S when compared with 80 S ribosomes. This indicates that monomeric Gag protein assembles to 60 S by dialysis under these conditions. In general, protein-protein interaction is stimulated in the presence of Mg2+, yet further assembly of Gag to greater than 60 S did not occur by dialysis in the presence of 5 mM MgCl2 at 4°C (Fig. 2C). Assembly to 60 S occurred in dialysis buffer of neutral to alkaline pH at low salt concentration but failed at acidic pH or at high salt concentration (Table I). The optimized conditions for the reaction were similar to those under which HIV CA-p2-NC protein was originally reported to be assembled in vitro, depending on additional RNA to form tubular structures (31,32). To examine the involvement of RNA in our assembly reactions, the Gag protein was dialyzed in the presence of RNaseA, or of additional RNA, and analyzed similarly. No significant differences were observed in the sedimentation profiles, indicating that RNA was not involved in the Gag assembly reaction to 60 S (Fig. 2, D and E). These observations suggest that Gag protein alone multimerized spontaneously to 60 S at 4°C at a high level of efficiency at neutral to alkaline pH and low salt concentration.
Higher Order of Assembly to Complete Assembly-To explore the possibility that further assembly of Gag protein is facilitated at higher temperatures, the dialyzed Gag protein was incubated in the presence of 5 mM MgCl2 at 37°C for 1 h and analyzed by velocity sedimentation on a sucrose gradient. More than 75% of the total Gag protein (quantitated with NIH Image software) was detected over a broad range with calculated S values of 150-350 S, suggesting that a higher order of assembly with various degrees of multimerization occurred under these conditions, although a small fraction of the Gag protein remained at 60 S (Fig. 3A). However, when the dialyzed Gag protein was incubated in the absence of MgCl2 but similarly at 37°C for 1 h, the shift from the 60 S position was much less efficient than observed after incubation with MgCl2, suggesting that the presence of 5 mM MgCl2 facilitated the assembly reaction at 37°C (Fig. 3B). In contrast, when the Gag protein was incubated at 30°C with or without MgCl2, the sedimentation profile was essentially similar to that before incubation, showing that no further assembly occurred at 30°C (Fig. 3C). Taken together, these results suggest that a higher order of Gag assembly proceeds at 37°C but not at 30°C and is facilitated by the presence of MgCl2.
Since it has been reported that one Gag VLP contains 1000-2000 molecules of Gag protein and sediments at 600 S (20), the assembly state observed under the conditions reported here still appeared insufficient for complete assembly. However, when the incubation time for the reaction was prolonged to 3 h, the sedimentation profile shifted to that of Gag VLP, suggesting that Gag assembly might proceed to completion within 3 h under these conditions, although a small fraction of the Gag protein was still observed at 60 S (Fig. 4A). The proportion of Gag remaining at 60 S in the complete reaction was similar to that observed in the partial assembly reaction, suggesting that the Gag molecules that fail to assemble to 150-350 S in the 1-h reaction never participate in the higher order of assembly, presumably due to denaturation during dialysis. Gag VLP and immature authentic HIV particles are not dissociated by treatment with nonionic detergents such as Triton X-100 and Nonidet P-40 (37,38). Accordingly, detergent treatment was applied to the in vitro assembly product of 600 S as a general measure to examine whether the correct assembly of Gag protein occurred in vitro. The in vitro assembly product of 600 S was not dissociated by treatment with 0.5% Triton X-100 (Fig. 4B), similar to the stability of Gag VLP in the presence of the detergent (Fig. 4C), suggesting a parallel nature between the Gag proteins assembled in vitro and in vivo.
Acceleration of Complete Assembly by Addition of RNA-Retroviral genomic RNA is incorporated during Gag assembly by binding to the NC domain, and recent evidence suggests that Gag constructs containing the NC domain assemble upon the addition of RNA, which presumably acts as a scaffold (31,32,34). Although the initial assembly we observed following dialysis was independent of RNA, it is possible that the higher order or complete assembly occurred by virtue of trace amounts of RNA that might have contaminated the Gag protein preparations. To investigate this possibility, RNaseA was added to the Gag protein before or after dialysis, and the mixture was incubated in the presence of 5 mM MgCl2 at 37°C for 3 h, conditions under which assembly to 600 S was normally observed using the Gag protein solution. Sedimentation analysis showed that the assembly to 600 S also occurred under these conditions (Fig. 5A), suggesting that RNA is not required for the complete assembly of Gag protein. However, when RNA was added to the Gag protein and the mixture was incubated in the presence of 5 mM MgCl2 at 37°C but only for 1 h, conditions under which Gag protein alone does not assemble up to 600 S, the sedimentation profile was shifted to that of the complete assembly product of 600 S (Fig. 5B). This finding indicates that the addition of RNA accelerates the assembly reaction from the 60 S to 600 S forms of Gag. Together, these results suggest that Gag protein alone assembles to 600 S in vitro but that the addition of RNA accelerates the higher order of assembly.
TABLE I. Effect of pH and salt concentration on assembly reactions. For initial assembly, Gag protein (1 mg/ml) was dialyzed against 20 mM Tris (pH indicated), 0.1 or 1 M NaCl, 0.2 mM EDTA, and 1 mM DTT overnight at 4°C. For a higher order of assembly, the dialyzed Gag protein was incubated at 37°C for 1 h in the presence of 5 mM MgCl2. The reaction mixtures were sedimented on 20-70% sucrose gradients in phosphate-buffered saline. +, occurred; −, failed; ND, not done. [Table columns: Initial assembly (at 4°C); Higher order of assembly (at 37°C); table body not recoverable from the extracted text.]
Microscopic Examination of Complete Assembly Product-Electron microscopic examination was carried out to confirm the defined structure of the in vitro assembly product of 600 S. Almost spherical (Fig. 6A) but often faceted particles (Fig. 6B) were observed by ultrathin section transmission electron microscopy. The particles were hollow, surrounded by double-ring structures, with an average diameter of 80 nm. When compared with immature HIV Gag VLPs (Fig. 6C), these features suggest that the structure of the in vitro assembly products is similar to that of authentic immature HIV.
DISCUSSION
In vitro assembly of HIV Gag protein was initially observed when a CA-p2-NC protein fragment was dialyzed at 4°C under low salt conditions at pH 8, although the assembly efficiency was very low (31). Recent studies on in vitro assembly have been carried out using CA-p2-NC, CA, and CA fused with several amino acids of MA or with the entire MA (12, 32-34). In these studies, the conditions used for assembly varied, since protein-protein interaction depends on salt concentration, pH, and temperature, which themselves influence protein conformation. We described the in vitro assembly of nearly full-length HIV Gag protein (MA-CA-p2-NC), devoid of only the C-terminal p6 domain, showing the optimal condition for formation of a spherical particle with a double-ring structure, similar to authentic immature HIV particles. In parallel, the assembly efficiency of the Gag protein was semiquantitated by velocity sedimentation analysis, and it was estimated that approximately 77% of the total Gag protein finally assembled to 600 S under the optimized condition. The assembly reaction appeared to be composed of two steps, both of which proceeded at low ionic strength at neutral to alkaline pH but failed at high salt or at acidic pH (Table I). The optimal salt concentration differed from that in the recent studies in which CA-driven assembly was observed (at 0.5-1 M salt), although the optimal pH range (neutral to alkaline) was consistent with those studies (32). It is plausible that the presence of the entire MA domain in the Gag protein used in our experiment resulted in the preference for the low salt condition, since a previous report has shown that MA-driven trimerization was sensitive to salt concentration (29).
Electron microscopic analysis of previous CA-driven assembly reactions has revealed that both CA and CA-p2-NC formed tubular or conical structures in vitro, which represent the conical cores of mature HIV particles (32,33). The formation of spherical structures was observed when the CA domain was extended at the N terminus by a small portion of MA (12,33), although the in vitro assembly of this construct is also presumably driven by the CA domain, as it occurred at high ionic strength. In contrast, our Gag assembly reactions appeared not to be CA-driven, as they occurred under low salt conditions, but produced a spherical particle. A similar finding has been reported by Campbell and Rein (34). From these data, we speculate that whichever domain of Gag triggers Gag assembly, the final shape of the assembly products depends on whether the MA domain (or even a small portion of MA) is present within the Gag constructs used for the assembly reactions. This interpretation is supported by a recent proposal that creation of the intermolecular salt bridge at the C terminus of the CA domain occurs after cleavage of the MA/CA junction and redirects Gag assembly from spheres to cones (33).
It is well known that protein-protein interaction is stimulated by factors such as temperature and Mg2+ ion concentration. In our experiment, the initial assembly to 60 S intermediates occurred only by dialysis at 4°C, but the higher order of assembly to 600 S required incubation at 37°C in the presence of Mg2+. This indicates that the higher Gag assembly state requires more stimulating factors. However, it is possible that a higher concentration of Gag protein in the assembly reaction could compensate for these requirements, as it has been reported that those factors had little effect on the yield of assembly product when a high concentration of CA protein was used (32).
Retroviral genomic or even non-cognate RNA is incorporated into assembling Gag VLPs via the zinc finger motifs located in the NC domain. Recent studies on in vitro assembly reactions with CA-p2-NC suggest that RNA serves as a scaffold that effectively concentrates the protein in the microenvironment (31), although the protein has also been reported to assemble in the absence of RNA, but only at a high concentration of salt or protein (32). A recent in vitro assembly reaction with the nearly full-length Gag protein, which was carried out at room temperature, was completely dependent on additional RNA (34); in contrast, RNA was not absolutely required for our in vitro assembly reaction at 37°C (Fig. 5). Since the Gag assembly from 60 S to 600 S was not inhibited by RNaseA treatment but was accelerated by addition of RNA, we suggest that the RNA requirement for the Gag assembly reaction is reduced by incubation at higher temperature. We believe that the in vitro assembly of the nearly full-length Gag protein described here with RNA to form a spherical particle requires an understanding of the physiological situation of authentic Gag assembly in vivo, because Gag VLPs are produced from various eukaryotic cells including insect cells cultured at 27-28°C; in the latter case, RNA may be used as an essential scaffold.
FIG. 6. Electron microscopy of completely assembled Gag proteins. Dialyzed Gag protein was incubated in the presence of 5 mM MgCl2 and calf liver RNA at 37°C for 1 h and pelleted through a 30% sucrose cushion. The pellets were observed by ultrathin section electron microscopy. Panels: A and B, in vitro assembly products of 600 S; C, Gag VLPs prepared as described in the legend for Fig. 4C. Scale bars represent 100 nm.
| 2018-04-03T04:51:51.731Z | 1999-09-24T00:00:00.000 | {
"year": 1999,
"sha1": "e808e2f5285dce51605e1821ef74ecd786ce2ac6",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/274/39/27997.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "432a34ad9c26bfff6acc0a88e9206dd628e6ac26",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
1439499 | pes2o/s2orc | v3-fos-license | Medical management of heavy menstrual bleeding
Women with benign heavy menstrual bleeding have the choice of a number of medical treatment options to reduce their blood loss and improve quality of life. The role of the clinician is to provide information to facilitate women in making an appropriate choice. Unfortunately, many options can be associated with hormonal side effects, prevention of fertility and lack of efficacy, leading to discontinuation and progression to surgical interventions. Herein, we discuss the various options currently available to women, including antifibrinolytics, nonsteroidal anti-inflammatory preparations, oral contraceptive pills and oral, injectable and intrauterine progestogens. In addition, we describe the more novel option of selective progesterone receptor modulators and their current benefits and limitations.
Effective medical management of heavy menstrual bleeding (HMB) relies on excellent communication between a woman and her doctor. Information provision on mode of action, benefits, potential risks and alternatives of each option will allow a woman to choose the most appropriate treatment for her personal circumstances. Various medical treatment options are available, but many women proceed to surgery due to treatment failure or hormonal side effects. Surgery introduces risk of bowel, bladder and ureteric damage, as well as haemorrhage, infection and even death [1]. There is a clear unmet clinical need for better medical treatments for this benign but incapacitating condition.
Abnormal uterine bleeding may be a result of aberrations in:
• Duration of bleeding;
• Frequency of bleeding;
• Regularity of menses; or
• Volume of menstrual loss.
The clinician must carefully assess each aspect of the symptomatology during the consultation to enable accurate diagnosis and management of HMB. The need for current or future fertility must also be elicited in routine history-taking to facilitate informed decision making by the woman seeking treatment. After exclusion of anatomical disorders (PALM [polyps, adenomyosis, leiomyoma, malignancy]) and nonanatomical disorders (COEIN [coagulopathies, ovulatory dysfunction, endometrial, iatrogenic, not otherwise classified]) [2], managing the symptom of HMB is a priority. The clinician's role is to provide accurate information about treatment options, allowing the woman to choose the treatment most appropriate for her. Written information is often helpful and evidence-based patient information leaflets are available online. Treatment success should be determined by improvement in the woman's quality of life.
This review aims to provide a practical guide to well-established medical treatments for HMB. These treatments are divided into nonhormonal options and hormonal preparations and, where possible, their appropriate use is discussed.
Nonpharmacological management
A careful explanation of the cause of HMB is essential in the management of women with HMB. Exclusion of pathology will often allay fears and occasionally prevent the need for pharmacological treatments. Regular exercise and maintenance of a healthy BMI should be recommended to every woman with HMB. Although the evidence for cause and effect is limited, a high BMI will increase the risk of ovulatory dysfunction and subsequent heavy or irregular menstrual loss [3][4][5]. Exercise and a healthy diet will also help limit iron deficiency anaemia, raise energy levels and improve quality of life.
Antifibrinolytics
Women suffering from HMB have been shown to have overactivation of the fibrinolytic system during the menstrual phase of their cycle [6]. This leads to accelerated degradation of the fibrin clot that forms to induce hemostasis (Figure 1). Therefore, an increase in fibrinolysis results in increased blood loss during endometrial shedding.
Tranexamic acid is an antifibrinolytic medication commonly used to counteract this aberration in women with heavy menstrual bleeding. It has a short half-life, necessitating regular administration of 1 g orally three to four times per day during menses. As it is only required during days of heavy bleeding (∼4 per month), side effects are minimal but may include gastrointestinal symptoms. Tranexamic acid is also acceptable to women who are trying to conceive, or those who experience significant side effects with hormonal preparations. There are few contraindications to tranexamic acid, but it should be used with caution in women with a personal history of thromboembolism. Tranexamic acid is reported to result in approximately a 50% reduction in menstrual blood loss [7,8].
NSAID preparations
Studies examining women with objectively measured heavy and normal menstrual bleeding have repeatedly demonstrated that increased local inflammation is associated with increased menstrual blood loss. The proinflammatory cytokine TNF-α was significantly elevated in menstrual effluent of women with HMB versus those with normal loss [9]. The enzyme involved in prostaglandin synthesis, COX-2, was also raised in endometrial samples from those with HMB, leading to increased prostaglandin signaling [10]. The resulting exaggerated inflammation within the endometrium may lead to increased and prolonged tissue damage at the time of menstruation. Therefore, limitation of the production of inflammatory mediators is helpful in the treatment of women with HMB.
NSAIDs exert their anti-inflammatory effect through inhibition of cyclooxygenase, the enzyme that catalyses the transformation of arachidonic acid to prostaglandins and thromboxanes (Figure 2). Mefenamic acid is the most commonly used NSAID for treatment of HMB and results in a reported blood loss reduction of 25-50% [11]. However, other NSAIDs show similar efficacy to the more commonly prescribed mefenamic acid [7]. Like antifibrinolytic medications, NSAIDs offer a nonhormonal treatment for women wishing to conceive or to avoid hormonal side effects, but have the additional benefit of analgesic properties. Side effects include gastrointestinal effects, and these preparations are not suitable for those women who have previously had peptic ulcer disease or who are thought to have HMB due to a coagulation disorder.
Despite significant reductions in blood loss, 52% of women treated with mefenamic acid for 2 months maintained a blood loss of greater than 80 ml per cycle [11]. NSAIDs and antifibrinolytic medications can be used together but should be stopped after 3 months if there is no symptomatic improvement. If they are beneficial, they may be continued indefinitely and can also be used as adjuvant therapy with hormonal preparations.
Hormonal treatments for HMB
Human endometrial function is governed by the ovarian steroid hormones. Most research to date has focused on the role of estrogen and progesterone on the endometrium, but the role of other steroids that have an impact on endometrial function (i.e., androgens and glucocorticoids) should also be considered. During the secretory phase of the menstrual cycle, progesterone is the dominant hormone and is a potent anti-inflammatory agent. In the absence of pregnancy, the corpus luteum regresses and progesterone levels sharply decline. It is this marked reduction in ovarian hormones that triggers an influx of inflammatory mediators into the endometrial environment, leading to shedding and menstruation. Maintenance of progesterone exposure limits endometrial inflammation and prevents menstruation. It is therefore unsurprising that the most effective medical treatments available for HMB are hormonal preparations. It is worth remembering that these preparations will also limit or remove fertility for the duration of their use.
Levonorgestrel-releasing intrauterine system (LNG-IUS; Mirena®)
This popular intrauterine system (IUS) contains an androgenic progestogen, levonorgestrel (LNG). LNG is slowly released from the IUS to act on the local endometrial environment, preventing proliferation. It may also impact on the frequency of ovulation. The LNG-IUS can decrease menstrual loss by up to 96% after 1 year of use (Figure 3) [12] and is licensed in the UK for treatment of HMB for 5 years. After 5 years, the device should be removed and a new LNG-IUS device may be fitted immediately if desired. It is an excellent contraceptive when in situ and has the advantage of being a 'fit and forget' method, rather than relying on patient compliance. The LNG-IUS is also associated with reduction of dysmenorrhea [13]. As its actions are local, progestogenic side effects (for example, bloating, breast tenderness and mood changes) are limited.
The LNG-IUS is contraindicated in pregnancy, unexplained vaginal bleeding and uterine sepsis [1].
Risks usually outweigh benefits in women with systemic lupus erythematosus (SLE) and those with severe liver disease. Usually hormonal treatments are avoided in women with current breast cancer, but concerns about progression of the disease may be less with LNG-IUS than with oral preparations. The LNG-IUS may be considered individually, and in consultation with the woman's breast surgeon [14]. Extra care must be taken during insertion in women with distortion of their endometrial cavity due to leiomyoma/fibroids or congenital abnormalities. In these cases, it may be safer to use an alternative hormonal treatment or to insert the IUS under hysteroscopic guidance.
Women should be counseled about potential complications of LNG-IUS use including:
• Unscheduled bleeding: this occurs in the majority of women during the first 3-6 months of use. Women should be advised that they may experience daily spotting but that this usually settles after 6 months. Perseverance for a minimum of 6 months is required for benefits to be appreciated and for unscheduled, usually light, bleeding to subside (Figure 3). Approximately one in five women will experience ongoing problems with persistent bleeding [15]. This is thought to be due to endometrial vascular fragility, secondary to sustained progestogen exposure, a decrease in steroid hormone receptors and a lack of local estrogen effects [16]. A proportion of women will benefit from adjuvant tranexamic/mefenamic acid treatment or, if there are no contraindications, a 3-month course of a combined oral contraceptive pill. Unfortunately, those with persistent problems or intolerable side effects will require alternative management of their HMB [17];
• Infection: women have an increased risk of infection for the first 3 weeks after insertion. Some clinicians recommend that women should not use tampons in this time to minimize this risk. If they notice an offensive discharge they must seek medical advice and may require antibiotic treatment. After 3 weeks post-insertion, their risk of infection returns to the same as women without an IUS;
• Expulsion of IUS: up to one in five LNG-IUS devices can be expelled from the uterine cavity after insertion, with the greatest risk of this during the first 6 weeks post-insertion. The rate of expulsion is higher in nulliparous women [14]. Women should be advised to check the threads by digital self-examination on a regular basis, particularly if relying on the IUS for contraception. If they are not happy to do so, they should have a speculum examination 6 weeks after insertion to ensure the IUS is in place before relying on it for contraception;
• Perforation: a rare but serious complication of LNG-IUS insertion is uterine perforation, occurring in 1:1000 cases [14]. Distortion of the endometrial cavity, uterine infection or being less than 4 weeks postpartum will increase the risk of perforation substantially. Suspicion of perforation at the time of insertion warrants ultrasonic assessment. A woman should be advised to seek medical help if post-insertion cramps are not eased with routine analgesics. Should the IUS threads not be visible, ultrasound assessment ± abdominal x-ray is indicated to exclude perforation (Figure 4).
Combined oral contraceptive pill
The combined oral contraceptive (COCP) contains estrogen and progestogen and is usually given for 3 weeks followed by a 'pill-free' week in which the woman experiences a hormone withdrawal bleed. The COCP produces an estimated reduction in blood loss of 50% and has the additional benefit of regulation of bleeding [18]. Therefore, it is a particularly attractive option for women experiencing frequent or irregular heavy bleeding, once pathology has been excluded. The COCP can be 'tri-cycled', in other words, three packets taken consecutively without 'pill-free' weeks. This will reduce the number of menses experienced as well as the volume of blood loss and is an attractive option for many women [19].
The risks of the COCP are mainly due to its estrogen content and include increased risk of thromboembolism, stroke, cardiovascular disease or breast cancer. Therefore, it is contraindicated in women with a BMI >35, smokers over 35 years, women with hypertension, vascular disease, migraine with aura, current/recent breast cancer, those with a personal or strong family history of venous thromboembolism or with a known thrombogenic mutation [14]. The COCP also has a detrimental effect on breast milk production and is contraindicated in breastfeeding women [14]. In the absence of risk factors, women can use the COCP until menopause if desired.
Progesterone only pill
In contrast to the combined pill, the progesterone only pill (POP) is associated with irregular and unpredictable blood loss. Therefore, it is not usually recommended as a treatment for HMB. However, if no other options are acceptable or safe for a woman to use, a trial of a POP may be appropriate. As these pills do not contain estrogen, they are a safer alternative to the COCP. Some POPs, for example desogestrel-containing POPs, induce amenorrhea in up to 20% of users and are effective treatments for a small proportion of women [20].
Injectable progestogens
Intramuscular or subcutaneous injection of high-dose progestogens (e.g., depot medroxyprogesterone acetate [DMPA]) can induce amenorrhea in up to 50% of users [21]. This method of administration offers women an alternative to tablets or intrauterine devices. Injections are usually given every 12 weeks to maintain progestogen exposure and ensure contraceptive efficacy. The principal mechanism of action of injectable progestogens is inhibition of follicle-stimulating hormone (FSH) release from the anterior pituitary. Therefore, follicle development in the ovary is inhibited and ovulation is prevented. This ovulatory suppression has the additional benefit of reducing dysmenorrhea. However, as a consequence of follicular suppression, there is also a reduction in estradiol production. Therefore, women may have a transient reduction in their bone mineral density with long-term use. The clinical impact of this loss of bone mineral density remains uncertain; a retrospective cohort study of general practice records showed that users of DMPA had a fracture rate of 9.1 per 1000 person-years compared with a rate of 7.3 for nonusers [22]. However, DMPA users had an increased incidence of fractures before they had even commenced DMPA use, and there was no increase in fracture incidence with increased duration of use. Considering these inconclusive data and the complete restoration of bone mineral density on cessation of use, it is a suitable preparation for most women. Side effects can limit compliance and include weight gain, greasy skin and hair, acne and bloating [14].
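As a point of orientation, the crude incidence rate ratio implied by the two quoted fracture rates can be computed directly. The snippet below is illustrative arithmetic only; it ignores the pre-existing risk difference and confounding that the cited study itself highlights.

```python
def rate_ratio(rate_exposed, rate_unexposed):
    """Crude incidence rate ratio from two rates given per 1000 person-years."""
    return rate_exposed / rate_unexposed

# ~1.25, i.e. a roughly 25% higher crude fracture rate in DMPA users
print(round(rate_ratio(9.1, 7.3), 2))
```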
Oral progestogens
Norethisterone is the most commonly used oral progestogen in the treatment of HMB. It should be prescribed as a 5 mg tablet, to be taken three times per day from day 5 to 26 of the menstrual cycle. This regimen has been shown to reduce blood loss by >80% [23]. In contrast, norethisterone administration in the luteal phase only was of no benefit to women with HMB and is not recommended [24]. Despite a significant reduction in HMB with norethisterone treatment from day 5-26, patient satisfaction may limit long-term use due to a high incidence of progestogenic side effects. Therefore, it is more commonly prescribed as a short-term measure, for example, to terminate a heavy bleed or to regulate menstruation for a holiday or an important life event.
Gonadotropin-releasing hormone agonists
These are synthetic peptides administered by an intramuscular, subcutaneous or intranasal route, and they are really only suitable for short-term use. These continuous delivery preparations have a much longer half-life than the natural gonadotropin-releasing hormone (GnRH) released in a pulsatile manner from the hypothalamus. This sustained presence of GnRH results in low FSH and luteinizing hormone (LH) production, and GnRH agonists induce a profound hypogonadal state, in other words, a medical menopause. As there is no stimulation of the endometrium from the resulting low ovarian hormone levels, menstruation does not take place. GnRH agonists are particularly useful in the treatment of uterine fibroids (leiomyoma), which can reduce considerably in size when ovarian hormone levels are suppressed. GnRH agonists may be used prior to surgical intervention in women with fibroids, or for those in whom surgery is not suitable or desirable [25].
Studies have demonstrated excellent efficacy, with an amenorrhea rate of up to 90% with GnRH agonist use [26,27]. However, these compounds are associated with very significant side effects secondary to estrogen deficiency that limit use; namely flushing, vaginal dryness, headaches and decreased libido. Most of the side effects can be attributed to low estrogen levels and limitation of these menopausal symptoms can be achieved with 'add-back' hormone-replacement therapy (HRT). This is necessary after 6 months of use to protect bone mineral density. Usually the GnRH agonist is commenced alone, to achieve maximal effects on menstrual blood loss and fibroid shrinkage and discontinued after 6 months. 'Add-back' HRT is introduced if the woman continues treatment for greater than 6 months, or sooner if warranted by symptomatology.
Selective progesterone receptor modulators
An exciting new group of pharmacological agents is in development and has the future potential to provide effective oral treatment for HMB. These selective progesterone receptor modulators (SPRMs) impart a tissue-specific partial progesterone antagonist effect and act upon progesterone receptors in the endometrium and the underlying myometrial tissue. They have the additional benefit of maintenance of estradiol levels, meaning hypoestrogenic side effects are not an issue.
The mechanism by which these SPRMs reduce menstrual blood loss is still to be fully defined, but distinct histological morphology has been identified with their use (progesterone receptor modulator associated endometrial changes [PAEC]). Ulipristal acetate (UPA) is the only SPRM to have been licensed for use in clinical practice, albeit restricted to 3 months of pretreatment of fibroids prior to surgical removal. Study of the endometrium of women taking this treatment regimen showed altered architectural features including extensive cystic dilatation of the epithelial glands, inactivity or features of abortive subnuclear vacuolization, occasional mitoses and apoptosis. Histology returned to normal after discontinuation of treatment [28,29]. Study of a different SPRM, asoprisnil, revealed decreased uterine artery blood flow after 3 months of treatment, which may contribute to their efficacy [30].
The recent introduction of UPA followed evaluation in two concurrent randomized controlled trials [31,32]. 'PEARL I' assessed the efficacy of UPA 5 mg and 10 mg daily on uterine bleeding and fibroid volume when compared with placebo. 'PEARL II' assessed UPA versus the gonadotropin-releasing hormone analogue leuprolide acetate in the treatment of symptomatic uterine fibroids prior to surgery. Both trials demonstrated control of HMB in over 90% of women and amenorrhea in over 70% of women. Control of HMB was achieved significantly more quickly in the UPA group than in the GnRH agonist group. There was a statistically significant reduction in the size of fibroids (12-21% decrease). Compliance with treatment over 3 months was high in both studies (96 and 98%) and reported side effects were limited to minor complaints. Headache (4%) and breast complaints (4%) were the most common side effects reported, but there was no difference between active drug and placebo groups. There are no publications to date on the clinical utility of SPRMs in the management of women with HMB who do not have fibroids or who have other conditions associated with HMB, such as adenomyosis.
These studies have concluded that short term use of UPA is effective in treating HMB associated with uterine fibroids (3-10 cm in size). However, UPA also has the potential to provide a safe, fertility preserving, rapidly effective and convenient oral medical treatment for women with HMB whether associated with fibroids or not. Clinical trials are currently in progress to assess SPRMs in this group of women. This further research is required to fully understand their mechanism of action, longer-term safety and effectiveness prior to recommending their use as a long-term medical treatment option for women with HMB with and without fibroids.
Conclusion & future perspective
A number of medical options and routes of administration exist for the hundreds of thousands of women in the UK who suffer from HMB. Unfortunately, most are associated with hormonal side effects and limited efficacy. SPRMs offer hope of a new, fertility-sparing class of medical therapies for these women that may provide a long-term treatment option. Continued research into the causes of HMB will yield new medical therapies for this common, debilitating disorder to improve the quality of life of many women.
Executive summary
• Heavy menstrual bleeding is a common and debilitating condition that has a significant impact on a woman's quality of life and her family, and a more widespread effect on society as a whole.
• Various medical treatment options are available, but side effects often limit compliance and efficacy.
• Nonhormonal options are limited to tranexamic or mefenamic acid.
• Hormonal options include the levonorgestrel-releasing intrauterine system, the combined oral contraceptive pill or progestogen preparations.
• Gonadotropin-releasing hormone analogues can be a useful short-term option, particularly for women with fibroids.
• There is a clear unmet need for effective, acceptable medical treatments for HMB. Selective progesterone receptor modulators may provide a novel therapeutic option for these women in the future.
"year": 2016,
"sha1": "da40ebbaa4cfc0c19a01e30d3f9050e503e2bd46",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.2217/whe.15.100",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "255b2aee21f35ea869fa408bdadb5efc7a5deb23",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8010955 | pes2o/s2orc | v3-fos-license | Binding Pocket Optimization by Computational Protein Design
Engineering specific interactions between proteins and small molecules is extremely useful for biological studies, as these interactions are essential for molecular recognition. Furthermore, many biotechnological applications are made possible by such an engineering approach, ranging from biosensors to the design of custom enzyme catalysts. Here, we present a novel method for the computational design of protein-small ligand binding named PocketOptimizer. The program can be used to modify protein binding pocket residues to improve or establish binding of a small molecule. It is a modular pipeline based on a number of customizable molecular modeling tools to predict mutations that alter the affinity of a target protein to its ligand. At its heart it uses a receptor-ligand scoring function to estimate the binding free energy between protein and ligand. We compiled a benchmark set that we used to systematically assess the performance of our method. It consists of proteins for which mutational variants with different binding affinities for their ligands and experimentally determined structures exist. Within this test set PocketOptimizer correctly predicts the mutant with the higher affinity in about 69% of the cases. A detailed analysis of the results reveals that the strengths of PocketOptimizer lie in the correct introduction of stabilizing hydrogen bonds to the ligand, as well as in the improved geometric complementarity between ligand and binding pocket. Apart from the novel method for binding pocket design, we also introduce a much-needed benchmark data set for the comparison of affinities of mutant binding pockets, which we use to assess programs for the in silico design of ligand binding.
Introduction
Computational protein design has advanced rapidly in recent years. A particularly exciting and dynamic area is the design of interactions between proteins and small molecule ligands. This includes the design of receptors that bind ligands of choice, which for example can be used as biosensors [1], as well as the design of enzymes that do not only bind a substrate, but also contain the catalytic machinery to process it [2][3]. In all these designs, an existing protein is used as a scaffold, and its binding pocket is altered or a new one is introduced that should interact with the target ligand.
With this approach, enzymes have been designed that catalyze chemical reactions for which no natural catalysts exist, such as a Kemp eliminase [4][5], a Diels-Alderase [6], and a retro-aldolase [7]. It has also been used to design a metalloenzyme by repurposing parts of the already existing catalytic machinery in the scaffold protein, namely the reactivity of a zinc metal center to hydrolyze organophosphates [8]. Furthermore, similar methods have been applied to change substrate specificities as well as affinities. For example, human guanine deaminase was changed to bind ammelide through the remodeling of a loop that now provides a key interaction to the new target substrate [9], the substrate specificity of gramicidin S synthetase was changed from phenylalanine to leucine [10], and mutations in dihydrofolate reductase from Staphylococcus aureus were predicted that decrease binding to an inhibitor molecule while stabilizing native protein function [11].
While these are impressive results, there is still much room for improvement in the computational methods. Specifically, it seems to be difficult to accurately design a protein for high-affinity binding to a ligand or transition state [12]. The majority of the enzyme designs mentioned have low affinities for their substrates when compared to naturally occurring enzymes [13][14]. In a rare report of a failed attempt, the unsuccessful design of a high-affinity ligand binding site for a D-Ala-D-Ala dipeptide into an endo-1,4-β-xylanase scaffold was discussed. Designs by the employed design software ROSETTA did not show the predicted high affinity in the experimental tests, underscoring the challenge of protein-ligand interface design [15]. In this respect, long-range electrostatics and dynamics, accurate modeling of solvation and electrostatics at the interface, as well as the inclusion of explicit water molecules have been named as the most problematic areas [13][14][15][16]. In order to improve protein-ligand interface design and to overcome current limitations it will be necessary to test design protocols more systematically.
In this respect, we noticed that in computational design studies there is a lack of more general benchmark sets. Related molecular modeling techniques are regularly assessed using test sets. For example, protein-ligand docking algorithms have been compared in detail [17][18][19][20]. Also, the CASP and CAPRI experiments allow unbiased testing of protein structure prediction and protein-protein docking methods [21]. In contrast, only a few computational design studies tested their employed methodology. One example is the redesign of the binding pocket of ribose binding protein for its native ligand using molecular mechanics methods. Among the resulting binding pocket sequences, the wild type sequence was ranked second best, while the first and third ranks had only a single mutation and bound ribose with tenfold decreased affinity [22]. Also, the aforementioned algorithm to introduce one key interaction to a ligand using loop modeling techniques was tested on eight proteins. For six of them the method produced a loop of the same length and similar configuration as in the crystal structures [9]. Both benchmark tests are very specific; they cannot be used to generally and systematically assess a method's proficiency in designing binding to a small molecule. Also, the broader benchmark set that was used to assess the ability of the enzyme design methods ROSETTAMATCH and SCAFFOLDSELECTION to identify suitable scaffold proteins that can host a desired catalytic machinery [23][24] is not suited for this purpose. Such a test set, however, would be very helpful for assessing the potential and the shortcomings of available methods.
In this study, we present POCKETOPTIMIZER, a computational pipeline that can be used to predict mutations in the binding pocket of proteins, which increase the affinity of the protein to a given small molecule ligand. It can be used for the analysis of few mutations as well as for the design of an entire binding pocket. It uses several molecular modeling modules. Side chain flexibility is sampled by a conformer library, which we compiled following Boas and Harbury [22]. The use of conformer libraries has been reported to be advantageous, especially in the context of bindingsite geometries [25] [26][27]. A receptor-ligand scoring function is used to calculate protein ligand binding strength. The modular architecture of POCKETOPTIMIZER allows easy and systematic comparison of methods that perform the same task. As the first test we utilize this to examine two scoring functions in this study, the scoring function provided by CADDSuite [28] and Autodock Vina [29]. In order to assess the performance of POCKETOPTIMIZER and other methods that address the same task, we compiled a benchmark set. It consists of mutational variants of proteins and their small ligands with available experimental structural and binding affinity data. We also used this benchmark to test the enzyme design application included in the ROSETTA molecular modeling software. ROSETTA was used for the majority of the design studies mentioned earlier, and it is the most successful freely available protein design software to date [30]. We find that both methods perform similarly. In our benchmark POCKETOPTIMIZER succeeds slightly better in predicting the correct affinity-enhancing mutations. We discuss the strengths and weaknesses of our method and describe to which protein design problems it can be applied with good chances of success. The findings emphasize the merit of a systematic approach to evaluate computational protein design methodologies, to identify their strengths, and to pinpoint possibilities for improvement. And our modular program POCK-ETOPTIMIZER provides a suitable framework to test and implement these approaches.
Computational Receptor Design Pipeline PocketOptimizer
We developed POCKETOPTIMIZER for the design of protein-ligand interactions. In combination with a program such as SCAFFOLDSELECTION [24] it can also be used for enzyme design. POCKETOPTIMIZER is a combination of customizable molecular modeling components. Amino acid flexibility is modeled by a side chain conformer library; ligand flexibility is addressed by systematically sampling poses of the ligand in the binding pocket. The score that is optimized is a combination of protein packing energy calculated with the AMBER force field [31], and protein-ligand binding energy calculated using a scoring function. To identify the most promising design, the global minimum energy conformation of a protein pocket with the ligand based on the combined energy score is calculated [32][33]. Intermediate results like conformers or score tables are stored in standard file formats, making it easy to compare different approaches for a given subtask. Notably, we used two receptor-ligand scoring functions in this study, the scoring function included in CADDSuite [28] and Autodock Vina [29]. Figure 1 depicts the workflow of the POCKETOPTIMIZER pipeline.
The program POCKETOPTIMIZER is designed as a modular pipeline that allows exchange of program parts, e.g. the use of different available docking functions or force fields. In contrast to other existing design programs this pipeline aims to provide a platform for the incorporation and testing of available modules so that the contribution of individual parts can be distinguished. In the current implementation of POCKETOPTIMIZER we chose to use a conformer library over rotamers. The program is geared towards the design of protein-ligand interaction; however, it can also be used for prediction of protein packing only. Currently not incorporated are backbone flexibility and negative design capabilities.
Figure 1. Workflow of PocketOptimizer. The input specific for a design is depicted in circles, parts of the pipeline are shown in pointed rectangles, and output components in rounded rectangles. The output is stored in standard file formats (SDF and PDB for structural data, csv for energy tables). This allows the easy replacement of a component with another that solves the same task (e.g. replacing the binding score function). doi:10.1371/journal.pone.0052505.g001
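To illustrate the optimization task described above, the sketch below performs a brute-force search for the global minimum of a combined score over discrete side-chain conformers and ligand poses. This is only a toy re-implementation under assumed data structures (per-position self energies, pairwise packing energies, and per-pose ligand scores); POCKETOPTIMIZER itself uses the dedicated optimization methods cited above [32][33], since exhaustive enumeration scales exponentially with the number of designable positions.

```python
from itertools import product

def gmec_bruteforce(self_e, pair_e, lig_e, poses):
    """Exhaustively enumerate conformer combinations and ligand poses to
    find the global minimum of a combined score.
    self_e[i][c]          : self energy of conformer c at position i
    pair_e[(i, j)][ci][cj]: pairwise packing energy between positions i < j
    lig_e[p][i][c]        : binding score of conformer c at position i for pose p
    """
    n = len(self_e)
    best = (float("inf"), None, None)
    for pose in range(poses):
        for combo in product(*(range(len(s)) for s in self_e)):
            e = sum(self_e[i][combo[i]] for i in range(n))
            e += sum(pair_e[(i, j)][combo[i]][combo[j]]
                     for i in range(n) for j in range(i + 1, n))
            e += sum(lig_e[pose][i][combo[i]] for i in range(n))
            if e < best[0]:
                best = (e, combo, pose)
    return best

# Toy example: two designable positions with two conformers each, one ligand pose
self_e = [[0.0, 1.0], [0.5, 0.0]]
pair_e = {(0, 1): [[0.0, 2.0], [2.0, 0.0]]}
lig_e = [[[-3.0, -1.0], [-0.5, -2.0]]]
print(gmec_bruteforce(self_e, pair_e, lig_e, poses=1))
```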
POCKETOPTIMIZER source code and documentation can be obtained from the authors or from www.eb.mpg.de/researchgroups/birte-hoecker/algorithms-and-software.html.
Benchmark Set
We compiled a set of twelve proteins with structural and experimental affinity data for the assessment of computational design methods for protein-ligand binding. For this, we systematically searched the PDBbind database [34], which lists high quality crystal structures of protein-ligand complexes together with experimentally determined binding data. Each protein in our set has at least two mutational variants (usually the wild type and one or more mutants) accompanied by an affinity measure (the inhibitory constant Ki or dissociation constant Kd) for the same ligand. The positions of amino acids that differ between the variants are always located in the binding pocket or active site. For each protein, there is at least one crystal structure of a variant with the ligand; for ten of the twelve there are two or more crystal structures that allow us to compare a design model of a variant with the respective crystal structure. The proteins and ligands in our benchmark set are very diverse. All ligands are shown in Figure 2. Each protein in the set belongs to a different fold as defined by SCOP [35], underscoring their structural diversity. This diversity allows design methods to be tested on a wide range of problems and avoids bias. Table 1 lists the benchmark proteins and their associated data.
Benchmark Results
The optimization scheme of POCKETOPTIMIZER simultaneously chooses sequence and conformation and can search over many alternatives. For the benchmark, however, it was necessary to restrict the sequence to the mutations for which experimental data was available. We tested the performance of POCKETOPTIMIZER on the benchmark set using Autodock Vina and CADDSuite receptor-ligand scores as well as ROSETTA's enzyme design application. Each method was used for the same set of design calculations. Each available crystal structure was used as a scaffold for the design of each mutational variant. We obtained a design for each mutation in each scaffold structure by forcing the methods to select a particular mutation in a separate run. This allowed us to compare the predicted binding and total energy scores as well as the designed conformations with the experimental data. Figure 3 shows the RMSD values between the designs and the respective crystal structures. This is a measure of how well the respective method models the conformation of the binding pocket residues and the ligand pose in the pocket. ROSETTA performs better in modeling side chains in the binding pocket. The difference between the pocket RMSDs of ROSETTA and each of the two POCKETOPTIMIZER variants is statistically significant with a p-value <0.01 according to a Mann-Whitney test. This might not come as a surprise considering that the ROSETTA molecular modeling software is extensively used and optimized for protein packing tasks, especially protein structure prediction. POCKETOPTIMIZER on the other hand focuses on the identification of residues interacting favorably with the ligand. The observed differences in ligand pose RMSD are not statistically significant (Figure 3). To assess whether the methods can differentiate correctly between protein variants that have a large affinity difference, we looked at pairs that have an affinity difference of at least 50-fold. This cutoff translates to roughly 2.3 kcal/mol and was chosen to make sure that only pairs with clear, trustworthy affinity differences well outside experimental error are investigated. Table 2 lists the number of pairs in which the order of the mutants according to energy score is the same as the order according to affinity, meaning the design method would produce the correct ranking. Here, POCKETOPTIMIZER performs in the same range as ROSETTA, with 69% correctly predicted pairs as opposed to 64%. When comparing the two receptor-ligand score functions we used in our approach it seems that Autodock Vina has some advantage over the CADDSuite score. The total scores of the different methods are also listed. Based on these scores POCKETOPTIMIZER performs even better with 71% and 76% correctly predicted pairs. However, since we are looking at affinity prediction, the binding score appears to be more appropriate for the comparison.
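The quoted conversion of the 50-fold cutoff into roughly 2.3 kcal/mol follows from the relation ΔΔG = RT ln(affinity ratio). A minimal check of this arithmetic, assuming T = 298.15 K (the temperature is an assumption, not stated in the text):

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.15     # assumed temperature in K

def ddg_from_fold_change(fold):
    """Binding free-energy difference corresponding to an affinity ratio."""
    return R * T * math.log(fold)

print(round(ddg_from_fold_change(50), 2))  # -> 2.32 kcal/mol, the ~2.3 quoted above
```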
We further examined how well the energy scores correlate with the affinities. For this we plotted the predicted energy of each design against the logarithmic affinities for all seven test cases with more than two mutations (Figure 4). The scores should correspond to the binding free energy, which in turn is proportional to the logarithm of the affinity of binding. Here, all mutants with experimental affinity values of a test case are included, regardless of the extent of the affinity difference. Overall we find that the energy values follow the affinity logarithm only in some cases.
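For plots of this kind, scores and log affinities are typically min-max scaled to a common range so the curves can be overlaid; since both quantities are proportional to the binding free energy, their correlation is unaffected by such scaling. A small sketch with hypothetical numbers (the Ki values and scores below are made up for illustration):

```python
import numpy as np

def scale01(x):
    """Min-max scale values to [0, 1], as used for overlay plots."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical mutant series: affinities (Ki, molar) and predicted binding scores
ki = np.array([2e-9, 5e-8, 3e-7, 1e-6])
scores = np.array([-9.1, -8.0, -8.4, -6.9])

# Correlation between scaled log-affinity and scaled score
r = np.corrcoef(scale01(np.log10(ki)), scale01(scores))[0, 1]
print(round(r, 2))
```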
Discussion of Benchmark Results
When looking at a pair of protein variants, POCKETOPTIMIZER is able to correctly predict which variant has a better binding affinity if that difference is based on the introduction or abolition of a direct interaction of the mutable residue's side chain with the ligand. This is especially noteworthy for pairs where one residue forms a hydrogen bond with the ligand, while the other does not. This was predicted correctly in seven of eight cases where the better binding variant forms an additional hydrogen bond. It also works well if the variable side chain of one mutation variant is bulkier than its counterpart in another variant, and therefore packs better against the ligand, i.e. forms more van der Waals (vdW) interactions with the ligand and shields it better from solvent, improving the solvation energy contribution. A potential downside of this effect of vdW contact improvement is that POCKETOPTIMIZER sometimes seems to prefer larger side chains even if they are detrimental to binding for other reasons. This tendency could lead to an overpacking of the designed pocket. When differences in binding have more complex causes, such as rearrangements in the pocket's side chains that affect the ligand interaction indirectly by influencing other pocket side chains, the program generally fails to capture these differences.
Both scoring functions used within POCKETOPTIMIZER, from Autodock Vina and CADDSuite, produce results that are quite similar. The overpacking effect discussed before is less pronounced with Vina, which explains its slightly better performance in predicting which variant of a pair binds better (see Table 2). Generally, the order of the designs by energy scores calculated by our method does not depend on which variant's crystal structure was used as the scaffold. Only in a few cases can a significant difference be observed, notably for carbonic anhydrase II and trypsin.
In some cases, the POCKETOPTIMIZER designs did not contain a conformational configuration that avoids vdW clashes in the binding pocket. In one test case, namely for neuraminidase, the program was unable to identify any acceptable pocket conformation. One limitation of POCKETOPTIMIZER and a probable cause for such problems is the assumption of a fixed backbone in our designs. An adjustment of the backbone conformation might have helped to accommodate the tyrosine mutation in this case. It is also conceivable that our way of systematically sampling possible ligand poses could have failed to generate a pose that is sterically compatible in the neuraminidase case.
Rosetta's enzyme design application does not suffer from unresolvable vdW clashes. It includes minimization steps in its algorithm that can resolve potential clashes introduced by discrete conformational sampling. However, Rosetta apparently cannot convey its superiority in modeling the binding pocket side chains to the prediction of the correct binding score order. It is unable to predict the rearrangements of side chain conformations that lead to binding affinity changes in the more complicated test cases. The energy term for hydrogen bonds in ROSETTA seems to have less influence on the output than in our program. This causes ROSETTA to miss existing hydrogen bonds between ligand and side chains. The binding scores and their differences predicted for different mutants are more dependent on the scaffold structure used in Rosetta designs than they are in POCKETOPTIMIZER. This can be seen in Figure 4: the lines for designs of both POCKETOPTIMIZER variants, Vina and CADDSuite, are more similar to each other than the ones for ROSETTA designs. This is rather surprising, as we anticipated that the limited backbone flexibility included in the ROSETTA enzyme design protocol would lead to less dependency on these small input structure differences. A more detailed description of each test case, including what is known from experimental and structural studies about the factors that influence binding differences in the test cases, as well as the success of the methods in reproducing these factors, is provided in the Information S1.
Conclusion
We developed a pipeline of molecular modeling tools named POCKETOPTIMIZER. The program can be used to predict affinity-altering mutations in existing protein binding pockets. For enzyme design applications it can be combined with a program such as SCAFFOLDSELECTION [24]. In POCKETOPTIMIZER, receptor-ligand scoring functions are used to assess binding. For its evaluation, we compiled a benchmark set of proteins for which crystal structures and experimental affinity data are available and that can be used to test our and other methodologies. We subjected POCKETOPTIMIZER as well as the state-of-the-art method ROSETTA to our benchmark test. The overall performance of both approaches was similar, but in detail both had different benefits. ROSETTA handles the conformational modeling of the binding pocket better, while POCKETOPTIMIZER has the advantage in predicting which of a pair of mutants of the same protein binds the ligand better. This prediction was correct in 66 or 69% of the tested cases using POCKETOPTIMIZER (CADDSuite or Vina score, respectively) and in 64% of the cases using ROSETTA.
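The percentages quoted here are pairwise ranking accuracies over variant pairs with a sufficiently large affinity difference. The following sketch shows how such a number can be computed from (affinity, predicted score) pairs; it is a hypothetical re-implementation of the evaluation, not the scripts used in the study, and assumes that a lower Ki/Kd means tighter binding and a lower (more negative) score means predicted tighter binding.

```python
def pairwise_ranking_accuracy(entries, min_fold=50.0):
    """entries: list of (Ki_or_Kd, predicted_binding_score) per mutant.
    Counts pairs with >= min_fold affinity difference where the
    predicted score order matches the experimental affinity order."""
    correct = total = 0
    for i in range(len(entries)):
        for j in range(i + 1, len(entries)):
            ki, si = entries[i]
            kj, sj = entries[j]
            fold = max(ki, kj) / min(ki, kj)
            if fold < min_fold:
                continue  # affinity difference too small to count
            total += 1
            # the tighter binder (smaller K) should receive the lower score
            if (ki < kj) == (si < sj):
                correct += 1
    return correct, total

# Hypothetical (Ki, score) data for one protein's mutants
data = [(2e-9, -9.1), (5e-8, -8.0), (3e-7, -8.4), (1e-6, -6.9)]
print(pairwise_ranking_accuracy(data))  # (correct pairs, evaluated pairs)
```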
The results show that POCKETOPTIMIZER is a well performing tool for the design of protein-ligand interactions. It is especially suited for the introduction of a hydrogen bond if there is an unsatisfied hydrogen donor or acceptor group in the ligand, and for filling voids between the protein and the ligand to improve vdW interactions. For affinity design problems that require a more complex rearrangement of the binding pocket, e.g. a mutation making room for another side chain to interact with the ligand, none of the tested methods appear to perform well.
There are also some other obvious effects that can influence binding but that are not addressable with the current methods, e.g. protein dynamics or rearrangements of the backbone. Such problems are probably harder to address than the more complicated test cases dealt with in this study, so we do not expect that current methods can tackle them with much success. Some apparent problems of POCKETOPTIMIZER, however, such as the occurrence of unresolvable steric clashes between ligand and side chains, should be remediable by better sampling of the conformational space and the introduction of backbone flexibility [36-38]. It is conceivable that a continuous minimization step at the end of the design calculation could also be beneficial.
In conclusion, it seems that although POCKETOPTIMIZER performs well, and in some respects even better than the state-of-the-art method ROSETTA, there is still room for improvement in the computational design of protein-ligand binding. Our study highlights the usefulness of benchmark data sets and systematic testing in order to arrive at an informed assessment of computational design methods. In fact, it would be interesting to test other available protein design schemes using our benchmark. A comparison of their performance should be very informative. Further, the benchmark will be useful in future tests of parts of our modular design pipeline; for example, by exchanging the force field in POCKETOPTIMIZER, its contribution can be tested rather than the overall design approach.
When we started to compile our benchmark set, we were hoping for considerably more test cases. The fact that out of the 6,005 protein structures currently contained in the PDBbind database, only ten suitable test cases could be extracted (twelve if the double cases of neuraminidase and streptavidin are counted) was rather surprising to us. This emphasizes the need for more benchmark data. Thus, an explicit effort to systematically create experimental and structural data is required. For protein-ligand interaction design it would be desirable to have data that cover many mutations of several pocket positions, ideally also for a set of different proteins.
Benchmark Set
The basis for the benchmark set is the PDBbind database. It contains a set of crystal structures of proteins complexed with small ligands, together with the corresponding experimentally determined binding affinities [34]. Our analysis is based on release 2010. First, we aligned the sequences of all proteins in the database to each other, using the Needleman-Wunsch algorithm [39] as implemented in the EMBOSS suite [40]. The proteins were then clustered with single linkage clustering; a link was assumed if the sequence identity was ≥95%. One cluster was assumed to contain structures of variants of the same protein with some mutations. Several descriptors were calculated for the protein-ligand complexes. If the crystal structure contains water molecules in the binding pocket, waters that have a high probability of playing a role in binding were identified and counted. This was done with the tool WATERFINDER included in CADDSuite [28,41], which estimates the strength of binding of a water molecule observed in a crystal structure to the protein. The number of rotatable bonds in the ligand is used as a measure of ligand size and flexibility. The ligands of all proteins in a cluster were pairwise compared using ligand fingerprints as implemented in OpenBabel [42] to measure their similarity and identity. For protein pairs of the same cluster with identical ligands, the pockets as defined by PDBbind were investigated for any mismatches corresponding to mutations.
Figure 4. For each test case with more than two mutations, we plotted the top binding scores of CADDSuite, Vina, and Rosetta designs for each mutation on each scaffold structure together with the logarithm of the affinity. Here we show plots for carbonic anhydrase II, HIV-1 protease, and streptavidin test 1. All other plots are shown in Information S1. Values are scaled to fit in the same range. Shown on the x-axis of a plot are the mutants in order of affinity to the ligand (the leftmost has the lowest affinity; compare Table 1 for the actual values). The y-axis measures predicted binding scores for the designs and the log affinities, scaled between 0 and 1. Both are proportional to the binding free energy and can therefore be compared when scaled to the same range. The lowest predicted binding score or log affinity is set to 0, the highest respective value to 1. Each plot contains a line for the affinity logarithm (solid, black, no marker). This line represents the goal: if a method predicts binding well, the binding score lines should closely follow the log affinity line. The other markers and lines show the scaled predicted binding scores. One line represents the designs calculated for all available mutants, calculated by using one crystal structure as the scaffold. (Crystal structure 1: dashed, blue, circle markers; structure 2: red, dotted, square markers; structure 3: green, dash-dot pattern, diamond markers; structure 4: cyan, two-dashes-one-dot pattern, star markers.) We chose to use lines for representation because this makes it easy to visually compare the shape of the black log affinity line to the lines representing the design binding scores. Each row has plots for one test case; in parentheses the order of scaffold structures is listed: CA: carbonic anhydrase II (1ydb, 1yda, 1ydd), HP: HIV-1 protease (1met, 1meu, 1mes), S1: streptavidin test 1 (1swe, 1n43). doi:10.1371/journal.pone.0052505.g004
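As an illustration of the clustering step described above, the following is a minimal Python sketch of single-linkage clustering at a ≥95% identity cutoff using a union-find structure; the pairwise-identity data layout is an assumption made for the example and is not part of the published pipeline.

```python
# Minimal sketch: single-linkage clustering of proteins by pairwise
# sequence identity. Any pair at or above the cutoff links its two
# clusters; union-find merges links transitively.
# `identities` maps (id_a, id_b) tuples to percent identity from a
# Needleman-Wunsch alignment (this layout is hypothetical).

def single_linkage_clusters(protein_ids, identities, cutoff=95.0):
    parent = {pid: pid for pid in protein_ids}

    def find(x):
        while parent[x] != x:  # path halving keeps trees shallow
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (a, b), identity in identities.items():
        if identity >= cutoff:
            union(a, b)

    clusters = {}
    for pid in protein_ids:
        clusters.setdefault(find(pid), []).append(pid)
    return list(clusters.values())
```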
To identify suitable protein pairs, we searched our dataset for protein variants within a cluster that (1) have the same ligand bound, (2) contain at least one mutation in the binding pocket, (3) have no mutations elsewhere, (4) contain fewer than four water molecules potentially involved in binding, and (5) have a ligand with fewer than 15 rotatable bonds. As the results contained mostly single mutants, an additional search was performed looking for mutants with (1) at least two mutations in the pocket, (2) no mutations elsewhere, (3) fewer than 15 rotatable ligand bonds and (4) fewer than seven potential binding water molecules. The proteins identified by these searches were investigated further by visually inspecting their structures and consulting the corresponding literature. Suitable proteins were included in our set. Reasons for rejecting a protein were large conformational differences of the backbone in the binding pocket, affinity differences between variants that are not caused by protein-ligand interactions but, for example, by changes in protein dynamics, and missing atoms of binding pocket residues in a crystal structure.
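The search criteria above translate directly into a filter. The sketch below is illustrative only; the record fields are hypothetical annotations assumed to have been computed in the descriptor step.

```python
from dataclasses import dataclass

@dataclass
class CandidatePair:
    same_ligand: bool       # identical ligand bound in both structures
    pocket_mutations: int   # mismatches within the PDBbind pocket
    other_mutations: int    # mismatches outside the pocket
    binding_waters: int     # waters likely involved in binding
    rotatable_bonds: int    # rotatable bonds in the ligand

def eligible_single_mutant(p: CandidatePair) -> bool:
    # First search: criteria (1) to (5) above.
    return (p.same_ligand
            and p.pocket_mutations >= 1
            and p.other_mutations == 0
            and p.binding_waters < 4
            and p.rotatable_bonds < 15)

def eligible_multi_mutant(p: CandidatePair) -> bool:
    # Relaxed second search for multi-mutant pairs; an identical
    # ligand is assumed to still be required, as in the first search.
    return (p.same_ligand
            and p.pocket_mutations >= 2
            and p.other_mutations == 0
            and p.binding_waters < 7
            and p.rotatable_bonds < 15)
```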
Design Pipeline PocketOptimizer
A diagram of the POCKETOPTIMIZER workflow is shown in Figure 1. The backbone conformation of the protein stays fixed in the calculations, as do the side chain conformations of residues that do not contact the ligand or a residue that is mutated between variants. Amino acid side chain flexibility is sampled by a conformer library we compiled for this purpose [25-27]. For this, a set of high-quality protein structures from the PDB was selected by requiring a maximal resolution of 1.2 Å, at least 40 residues, and no CAVEAT record. Hydrogen atoms were added using reduce [43]. Side chain conformers of these structures were further filtered by requiring a temperature factor below 30, no alternative conformations and no overlaps with other atoms in the structure according to probe [44]. The conformers were superimposed at the backbone atoms and clustered as described in reference [22], resulting in 2211 conformers. The generation of ligand conformers and binding pocket poses also closely follows reference [22]. Ligand conformers are created with OMEGA2 by OpenEye Software [45]. These are superimposed onto the ligand in the crystal structure, rotated around six approximately equally distributed axes through the ligand center of mass, and translated in the x, y, and z directions. The resulting ligand poses are filtered to exclude poses with obvious clashes with the protein backbone.
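The structure- and conformer-level filtering criteria can be expressed as simple predicates. A minimal sketch, with hypothetical attribute names standing in for values parsed from the PDB headers and the probe output:

```python
def structure_passes(structure) -> bool:
    # Structure-level criteria for the conformer library
    # (attribute names are illustrative, not an actual API).
    return (structure.resolution <= 1.2        # Angstrom
            and structure.n_residues >= 40
            and not structure.has_caveat_record)

def conformer_passes(side_chain) -> bool:
    # Side-chain-level criteria, applied after hydrogens were added.
    return (side_chain.max_b_factor < 30.0        # temperature factor
            and not side_chain.has_alt_locs       # alternative conformations
            and not side_chain.clashes_by_probe)  # overlaps flagged by probe
```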
Binding energy scores between protein and ligand are calculated by a receptor-ligand scoring function. The first one is contained in CADDSuite [28]. It is composed of terms for electrostatic, vdW, solvation and hydrogen bond energy scores. The second score used by POCKETOPTIMIZER is Autodock Vina [29]. Protein packing energies are calculated using the AMBER force field [31] with electrostatics scaled by a factor of 0.01. In order to be compatible with the energy score optimization algorithm, the energy values have to be pairwise decomposable, i.e. of the form $E_{total} = \sum_i E_i + \sum_{i,j} E_{i,j}$. The $E_i$ are the self energies of the variables (side chain conformers or ligand poses), i.e. their inherent energies and their energies with the fixed protein parts, and the $E_{i,j}$ are the pairwise energies between the variables. As we are interested in improving binding affinity, we chose to upscale the binding energies by a factor of ten for CADDSuite scores and a factor of 100 for Autodock Vina scores to arrive at absolute values that are in the same range as the AMBER packing energies. The $E_i$ and $E_{i,j}$ energy tables are computed for all side chain conformers at the pocket positions and all ligand poses. The problem of finding the minimum energy conformation is formulated in graph-theoretic terms [32] and solved using the MPLP algorithm by Sontag et al. [33]. The energy minimum identifies the best design with corresponding score values and conformation. POCKETOPTIMIZER is realized as a collection of binaries and scripts that perform the different subtasks. It was developed and tested on the Ubuntu Linux 10.04 operating system. AMBER packing energy calculations are implemented in C++ using BALL [41], as is the ligand pose generation tool. Protein-ligand energies for CADDSuite are calculated with a scorer binary implemented in C++ as well; Vina energies are calculated using the vina binary provided with the Autodock Vina software distribution. The side chain conformer library contains the structures of the amino acid side chains in PDB and SDF formats. Several Python scripts are provided that interface between the different parts and allow convenient conduct of a protein design task with the POCKETOPTIMIZER pipeline. Intermediate results are stored in standard file formats: SDF and PDB for structural data, and CSV files for energy tables. This allows the user to easily inspect the data with standard tools. It also makes it possible to use a different approach for one of the modules, e.g. a different docking function, while the rest of the pipeline remains unaltered.
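To make the pairwise-decomposable form concrete, the sketch below evaluates $E_{total}$ for a candidate design from precomputed self and pairwise tables, and includes a brute-force minimiser purely as a stand-in for illustration; POCKETOPTIMIZER itself solves this optimisation with the MPLP algorithm, and the table layout here is an assumption made for the example.

```python
import itertools

def total_energy(choice, E_self, E_pair):
    """choice maps each design position (a pocket residue ID or
    'ligand') to the index of the selected conformer/pose.
    Computes E_total = sum_i E_i + sum_{i<j} E_ij, with E_pair keyed
    by position pairs in sorted order (hypothetical layout)."""
    e = sum(E_self[pos][c] for pos, c in choice.items())
    for (pi, ci), (pj, cj) in itertools.combinations(sorted(choice.items()), 2):
        e += E_pair[(pi, pj)][(ci, cj)]
    return e

def brute_force_minimum(candidates, E_self, E_pair):
    """Exhaustive enumeration over all conformer/pose combinations;
    only feasible for tiny search spaces, shown here instead of MPLP.
    candidates maps each position to its allowed indices."""
    positions = sorted(candidates)
    best = None
    for combo in itertools.product(*(candidates[p] for p in positions)):
        choice = dict(zip(positions, combo))
        e = total_energy(choice, E_self, E_pair)
        if best is None or e < best[0]:
            best = (e, choice)
    return best  # (minimum energy, minimising assignment)
```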
Setup for PocketOptimizer Benchmark
The protein structures were briefly minimized using CHIMERA's [46] AMBER implementation. Amino acids at the binding pocket positions that were allowed to change conformation in the calculations had to have at least one side chain atom within 4 Å of the ligand or of one of the residues that are mutable. Ligand conformers were rotated by ±20° around each axis and translated by 0.5 Å in each direction to create the ligand poses. If this resulted in more than 3000 poses, the poses were filtered by similarity to the crystal structure conformation until the maximum of 3000 poses was met. For proteins that contain metals in their binding pocket that are coordinated by the ligand, the ligand poses were filtered for poses that are geometrically compatible with coordination.
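The pose grid (±20° rotations about six axes through the center of mass, ±0.5 Å translations) can be sketched as below. This is a minimal illustration rather than the actual BALL-based pose generation tool, and the six axis directions shown are assumed placeholders for the "approximately equally distributed" axes described above.

```python
import itertools
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative stand-ins for the six approximately equally
# distributed axes through the ligand's center of mass.
AXES = [np.array(a, dtype=float) for a in
        [(1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 0), (1, 0, 1), (0, 1, 1)]]
AXES = [a / np.linalg.norm(a) for a in AXES]

def generate_poses(coords, step_deg=20.0, step_ang=0.5):
    """coords: (N, 3) array of ligand atom coordinates, already
    superimposed onto the crystal ligand. Yields transformed copies.
    Duplicate identity poses (zero rotation about each axis) are not
    removed in this sketch."""
    center = coords.mean(axis=0)
    angles = np.deg2rad([-step_deg, 0.0, step_deg])
    shifts = [-step_ang, 0.0, step_ang]
    for axis, ang in itertools.product(AXES, angles):
        rot = Rotation.from_rotvec(ang * axis)
        rotated = rot.apply(coords - center) + center
        for dx, dy, dz in itertools.product(shifts, repeat=3):
            yield rotated + np.array([dx, dy, dz])
```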
Rosetta Design Setup
The ROSETTA enzyme design application as implemented in ROSETTA 3.3 [30] was used with parameters closely following the relevant documentation. Protein structures were briefly minimized using the ROSETTA receptor preparation application provided for this task, generating ten resulting structures, of which the one with the best energy was used for the design runs. Ligand conformers were generated using OMEGA2, ligand charges were added with the QUACPAC program of OpenEye Software [45], and ROSETTA ligand params files were generated with the provided molfile_to_params python script as included in the 3.3 distribution. No catalytic constraints were used for the enzyme design application runs, effectively making it a receptor design application. 1000 designs were created for every protein and every mutation on that protein with experimental affinity data in the test set. The best design was determined by the ranking scheme suggested in the documentation: it is the design with the best predicted binding energy among the designs with the top 10% total scores.
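As we understand it from the documentation, this ranking rule can be expressed in a few lines; the field names are illustrative, and lower values are assumed to be better for both scores, following ROSETTA's convention.

```python
def pick_best_design(designs):
    """designs: list of dicts with 'total_score' and 'binding_energy'
    (lower is better for both). Returns the design with the best
    predicted binding energy among the top 10% by total score."""
    by_total = sorted(designs, key=lambda d: d['total_score'])
    top_decile = by_total[:max(1, len(by_total) // 10)]
    return min(top_decile, key=lambda d: d['binding_energy'])
```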
Supporting Information
Information S1 (PDF) Author Contributions | 2018-04-03T04:42:48.433Z | 2012-12-27T00:00:00.000 | {
"year": 2012,
"sha1": "f152bc6fc375492ea7dbad2e1e48b8e44fb2a82b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0052505&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f152bc6fc375492ea7dbad2e1e48b8e44fb2a82b",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Computer Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
254331574 | pes2o/s2orc | v3-fos-license | Analysis of the Design and Development Path of the Cultural and Creative Derivatives of Marquis of Haihun Site
This paper first analyzes the culture of the Marquis of Haihun site and explores its cultural and creative derivatives, then discusses the value of these derivatives and the design and development path for them, for the reference of relevant scholars.
Introduction
In today's society, it is of great value to vigorously develop cultural and creative industries. The rapid development of the social economy has driven the development of these industries. Integrating cultural and creative concepts into the construction of the Marquis of Haihun site, building cultural and creative derivatives, and realizing the effective dissemination of the site all drive the economic development of the Nanchang area [1]. At the same time, drawing on domestic and foreign experience in the development of cultural and creative derivatives, it is of vital significance and value to develop derivatives that combine the specific characteristics of the Marquis of Haihun Kingdom [2]. How to transform intangible culture into tangible cultural products and integrate them into people's lives in a discreet manner is an issue worth studying. The road to disseminating culture is open to all, and there is still a long way to go in studying the evolution of cultural and creative derivatives, as products of the combination of culture and design, and in applying the resulting principles to design practice.
Cultural overview of Marquis of Haihun site
Today, the Marquis of Haihun site is in the stage of protection and development, with a cemetery of about 46,000 square meters and a length of about 800 meters. The Jiangxi Provincial Government has built a world-class national archaeological park to integrate conservation and development. On the premise of safeguarding the authenticity and integrity of the site, cultural concepts such as Haihun elements and Yuzhang characteristics have been proposed.
The study of the Marquis of Haihun site is of immeasurable cultural value, from both an archaeological and a historical point of view. The most valuable artifacts in the tomb of the Marquis of Haihun were not the gold items and coins but rather the bamboo slips. Before paper was invented in the Han Dynasty, China recorded documents using wood, bamboo, bronze, and tortoise shell as carriers for writing. More than 5,000 bamboo slips and wooden documents have been unearthed from the tomb of the Marquis of Haihun, which became an important discovery in the history of documentary archaeology in China. The Han Dynasty was a prosperous stage in the development of Chinese culture, but relatively few documents from it are recorded in history, and people remain ignorant about the rank, funeral culture, and funeral system of the Han Dynasty lords. The excavation of the tomb of the Marquis of Haihun was enough to fill such historical gaps, allowing us to visualize the living scenes of the Han Dynasty lords and bringing out the living face of Han Dynasty history [3].
The owner of the tomb, Liu He, is a historical figure of great research value. He rose from being the King of Changyi to emperor, but was eventually deposed from the throne. Liu He remained depressed until the day he passed away; his life was full of drama. Liu He was the grandson of Emperor Wu of the Han Dynasty, Liu Che. Although Liu Che was not the founding emperor, he initiated many fine customs in the Han Dynasty and contributed to the development of its spiritual civilization. Unlike his grandfather, Liu He grew up in an unrestrained environment, and his uninhibited nature made it difficult for him to endure the shackles of etiquette and rules, so he grew into a person of unrestrained character. Soon after Liu He ascended to the throne, Huo Guang decided that he was not a puppet emperor who could be controlled, so he joined hands with his supporting ministers to plan Liu He's dethronement. On the day of Kuiyi, Emperor Liu He was dethroned on the 27th day of his reign, making him known as the "Dethroned Emperor of Han." He was the shortest-serving emperor in the history of the Western Han Dynasty. In the third year of Yuankang, Emperor Xuan of Han made Liu He the Marquis of Haihun. In April, Liu He went to Haihun County, Yuzhang Commandery (now Xinjian District, Nanchang City, Jiangxi Province) to assume his title. In the third year of Shenjue, Liu He died [4].
Overview of cultural and creative derivatives
Cultural and creative derivatives belong to the category of cultural and creative products, which may present valuable information related to culture. Cultural and creative derivatives form specific cultural products after being designed and produced. Unlike ordinary cultural products, the unique identity of cultural and creative derivatives lies in the fact that they contain the connotation, concept, and innovative value of a certain culture. The application value of cultural and creative derivatives is extremely rich, with vast knowledge and high added value. Developing cultural and creative derivatives means transforming culture into products, injecting the connotation of the culture, the design concept, and the characteristics of the products into the derivatives, as well as analyzing and exploring their unique cultural nature [5].
Under the trend of diversified culture, creativity and personalization have gained widespread attention. As a result, cultural and creative industries are born in this environment. Different countries have different cultural characteristics, and even the cultural characteristics of different regions within a country vary. It is necessary to analyze the value connotation of cultural and creative derivatives according to the unique cultural and social value. One of the major elements of cultural and creative derivatives is economy. Cultural and creative derivatives are ultimately intended to enter the market and be sold as commodities. Hence, one of the important values of creative derivatives lies in its economic value. A key factor in determining whether a creative derivative has economic value is whether it is marketed and welcomed by the public.
There are various types of cultural derivatives, all of which have their own unique advantages in terms of design, production, and sales. The characteristics of cultural derivatives in different regions also vary. Cultural and creative derivatives can be divided into the following categories: (1) content-based cultural creative derivatives, including movies, television series, animation, and others; such cultural derivatives have deep connotations and are loved by the general public; (2) creative class of cultural and creative derivatives, which focuses on creative design, combining culture and innovation; they include traditional toys, cultural shirts, rechargeable batteries, and other daily necessities; by using these derivatives, the public would be able to get a good user experience and recognize the cultural and creative value contained in the products; (3) cultural and creative derivatives of extensible category, which includes exhibitions, cultural activities, etc.; cultural and creative derivative designers usually consider the spiritual and cultural needs contained in such products first rather than meeting the use needs of the public; such derivatives mainly provide cultural promotion services to the general public, and their functions and roles are fully utilized to expand the scope, which also brings richer spiritual enjoyment to the general public [6] .
Value and development of cultural and creative derivatives of Marquis of Haihun site
The first is the value of cultural relic dissemination. The vast majority of the cultural and creative derivatives of the Marquis of Haihun site are derived from the cultural relics excavated there. These cultural relics are the source of inspiration for cultural and creative derivative designers. In addition to satisfying certain functional values, highlighting the cultural connotations of the relics is also of prime importance when designing cultural and creative products. The majority of historical relics are kept in museums, and the public needs to visit them to learn about such history and culture. If the public does not have opportunities to visit due to time constraints, it will be difficult for them to gain insight into this history and culture. The cultural and creative derivatives of the site thus carry an important duty of cultural inheritance and transmission. Visitors can experience the cultural attributes of cultural relics during their visits to the museum and purchase certain cultural and creative derivatives to further delve into the value and charm of the culture from another perspective. The dissemination and transmission of cultural heritage depends not only on the display of cultural relics, but also on the design, production, and sales of cultural and creative derivatives; the two complement each other. Hence, cultural derivatives have a significant functional value from the perspective of cultural inheritance.
The second is brand communication value. Brand culture communication value refers to the value of endowing a brand with more culture, establishing the brand, and generating a promotion effect for it. If an enterprise's brand is trusted by consumers, the enterprise may consider expanding the scope of its market and effectively implementing its brand strategy [7]. Brand culture is a display of people's good values and national spirit, which brings together the cultural connotation of the times and advocates the formation of healthy and upward values. For the development and design of the cultural and creative derivatives of the Marquis of Haihun site, it is necessary not only to strengthen brand power through brand culture, but also to make full use of the brand to promote the sales of these derivatives and boost the social function of the site.
The third is the value of tourism growth. With societal development and progress, the tourism industry, as a tertiary industry in China, has an influence on the development of regional economy. At this stage, many people are willing to devote their time and energy to tourism activities. Moreover, they are curious about the historical development of traditional culture. At present, Nanchang area lacks cultural resources of great weight and educational value, so constructing cultural derivatives of Marquis of Haihun site will inevitably bring great commercial value to tourism growth in Nanchang area. The management of the process from open protection to tourism should be strengthened in order to promote the sustainable development of the tourism business. Many constructions of the cultural site of Marquis of Haihun belong to the category of cultural and creative derivatives. The construction of the virtual experience hall of the Marquis of Haihun site is an outreach type of cultural and creative derivative, creating a new experience for the public [8] .
Design and development path for cultural and creative derivatives at the heritage site of Marquis of Haihun
The planning and design of cultural and creative derivatives at the heritage site of the Marquis of Haihun can be explored from multiple angles, including visual images, music videos, live performances, animation games, food, and other aspects. Cultural and creative derivatives can be designed around the unique value of the Marquis of Haihun. In addition, the site's administrators can make use of new media for communication during publicity and promotion by creating a WeChat public account and an official microblog (Weibo) account, so as to narrow the gap with the young generation. Furthermore, current and exciting hot topics can be brought up so as to achieve an ideal communication effect. On the other hand, middle-aged and older people tend to obtain information from newspapers, television, and radio. Hence, traditional paper media can be used in the cultural and creative work on the Marquis of Haihun to raise the visibility of the products, thus serving the purpose of promoting the cultural and creative derivatives at the heritage site.
Visual identity
The concept of visual identity is applied in all areas of the market, and all industrial derivatives are required to have a unique visual identity. Visual image is a refinement to achieve the overall goal of enhancing the image of the product and a systematic image design with product design as the core. Taking the product as a carrier, the design must be able to objectively and accurately convey the spirit and concept of the product in terms of its cultural connotation, form, color, as well as the logo, graphics, and text attached to the product. The designer creates a series of designs; forms development and research concepts; uses processing techniques, production equipment, packaging, display, and marketing methods; carries out product promotion and advertising strategies for the cultural and creative products of Marquis of Haihun site to form a unified sensory image. The visual image of Marquis of Haihun site is crucial as it influences the design orientation of the entire derivative product [9] .
In the design process, designers can tap into the visual elements embedded in the heritage site. For example, the design of an animation character can be based on Liu He. Liu He's life has had its ups and downs and is highly topical as the tomb owner and the Marquis of Haihun. Designing him as an animation character will help promote the heritage site and its cultural and creative derivatives.
Music video
As an art that truly reflects the emotional life of human society, music can have an ennobling effect and help people enjoy themselves aurally. Excellent music can enhance one's aesthetic ability and purify one's heart. The chimes, stone chimes, string and wind instruments, reed pipes, and nearly two hundred wooden musician figurines excavated from the tomb of the Marquis of Haihun have provided new supporting evidence for music research. Therefore, the music derivatives from the Marquis of Haihun site can begin from the Han Dynasty ritual and music system by incorporating chimes as well as string and wind instruments into the orchestration [10]. In today's society, the younger generation prefers music that is easily understood, whereas middle-aged and elderly people prefer traditional Chinese music. Therefore, when arranging scores, one can combine pop music and traditional music, while integrating the musical characteristics of Han culture. Jiangxi singers can also be invited to sing songs in the Nanchang dialect, accompanied by traditional Han instruments. In addition, cooperation with cultural and creative products for promotion and publicity is also believed to be beneficial in achieving a good dissemination effect.
The image category is similar to the music category. It also plays an excellent role in heritage promotion. With the deepening of cultural exchanges between China and foreign countries, there are more diverse ways to communicate cultural and creative products to the people under the video category. Film and television production teams should consider producing a series of film and television works or documentaries based on the historical theme of Han culture around a certain element or topic concerning the Marquis of Haihun.
Live performance
The live performance category is mainly based on the culture and folk customs of Nanchang area, which integrates the commercial value of deductive art. It is also a derivative product of China's tourism industry. The most representative live performances now are "Dunhuang," "Jinggang Mountain," and "See Pingyao Again." The Marquis of Haihun site is developing toward cultural tourism, containing numerous cultural messages from the Han Dynasty.
The live performances at the Marquis of Haihun site should also include actual historical events, especially concerning Liu He's tumultuous life story. The Marquis of Haihun heritage park shall be used for field performances along with the corresponding stage scene and lighting equipment, and the performers shall keep an appropriate distance from the audience during the performances.
Animation games
When it comes to cultural and creative derivatives, people tend to associate them with physical products. It is unlikely that they would think of games, which are virtual in nature. However, games do serve as an integral part of cultural and creative products. In our country, there are various types of culture-related games, which are rich in cultural elements. There are many games that are based on the spirit of culture in the real world on the market. Although games are often despised as they are thought to be addictive, causing young people to squander time in school, appropriate games reflect healthy values and carry the value of cultural transmission. The cultural advantages of Marquis of Haihun are used to design game products with Han culture elements as the theme, integrating obscure history and culture into easy games, which allow even young children to experience and gain an understanding of the historical development of Marquis of Haihun through the game. Appropriate games not only energize the brain but also allow the players to learn about traditional culture.
Food and beverage
Food and beverage products have been considered hot products in recent years. The eating habits and dietary characteristics of different historical and cultural backgrounds vary. In the design and development of food and beverage products derived from the Marquis of Haihun site, designers should draw on the dietary characteristics of Han culture and develop corresponding design plans with dietary characteristics acceptable to modern people. For example, the distilled wine of the Western Han Dynasty was served in containers such as the bronze fang (a square wine vessel), and the brewing process and containers are displayed in the museum, which fully reflect the characteristics of the Han Dynasty's wine culture. Combined with the characteristics of the wine culture and wine brands in Nanchang, one of which is Nanchang beer and another is Site wine, the wine culture of the Han Dynasty can be shared with the world, and Nanchang's wine brands can be promoted as well [11].
Conclusion
In conclusion, studying the cultural and creative derivatives of the Marquis of Haihun tomb helps to inject more historical and cultural connotations into them, enhances their spiritual value, and meets the personalized needs of consumers at different levels. The design and development of the Marquis of Haihun's cultural and creative derivatives drives the development of Nanchang's cultural and creative industry, which not only improves the visibility of the site of the Marquis of Haihun, but also enables the world to intuitively experience and recognize the profoundness of Han culture. In this way, China's profound history and culture can be popularized and inherited. I firmly believe that in the near future, "Marquis of Haihun" would become a | 2022-12-07T19:56:36.127Z | 2022-11-29T00:00:00.000 | {
"year": 2022,
"sha1": "3f099ce810101919d8645449e6b4cb7cb5aacee7",
"oa_license": null,
"oa_url": "https://doi.org/10.26689/jcer.v6i11.4488",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "41c16532e96c7031f37b3239ed91d1bba6fc3a09",
"s2fieldsofstudy": [
"Education",
"Art",
"History"
],
"extfieldsofstudy": []
} |
246487434 | pes2o/s2orc | v3-fos-license | Group cognitive behavioural therapy with virtual reality exposure versus group cognitive behavioural therapy with in vivo exposure for social anxiety disorder and agoraphobia: a protocol for a randomised clinical trial
Introduction Anxiety disorders have a high lifetime prevalence, early onset and long duration or chronicity. Exposure therapy is considered one of the most effective elements in cognitive behavioural therapy (CBT) for anxiety, but in vivo exposure can be challenging to access and control, and is sometimes rejected by patients because they consider it too aversive. Virtual reality allows flexible and controlled exposure to challenging situations in an immersive and protected environment.
Aim The SoREAL trial aims to investigate the effect of group cognitive behavioural therapy (CBT-in vivo) versus group CBT with virtual reality exposure (CBT-in virtuo) for patients diagnosed with social anxiety disorder and/or agoraphobia, in mixed groups.
Methods and analysis The design is an investigator-initiated randomised, assessor-blinded, parallel-group and superiority-designed clinical trial. Three hundred and two patients diagnosed with social anxiety disorder and/or agoraphobia will be included from the regional mental health centres of Copenhagen and North Zealand and the Northern Region of Denmark. All patients will be offered a manual-based 14-week cognitive behavioural group treatment programme, including eight sessions with exposure therapy. Therapy groups will be centrally randomised with a concealed allocation sequence to either CBT-in virtuo or CBT-in vivo. Patients will be assessed at baseline, post-treatment and 1-year follow-up by treatment-blinded researchers and research assistants. The primary outcome will be diagnosis-specific symptoms measured with the Liebowitz Social Anxiety Scale for patients with social anxiety disorder and the Mobility Inventory for Agoraphobia for patients with agoraphobia. Secondary outcome measures will include depression symptoms, social functioning and patient satisfaction. Exploratory outcomes will be substance and alcohol use, working alliance and quality of life.
Ethics and dissemination The trial has been approved by the research ethics committee in the Capital Region of Denmark. All results, positive, negative as well as inconclusive, will be published as quickly as possible, in concordance with Danish law on the protection of confidentiality and personal information. Results will be presented at national and international scientific conferences. The trial has obtained approval from the Regional Ethics Committee of Zealand (H-6-2013-015) and the Danish Data Protection Agency (RHP-2014-009-02670). The trial is registered at ClinicalTrials.gov as NCT03845101. The patients will receive information on the trial both verbally and in written form. Written informed consent will be obtained from each patient before inclusion in the trial. The consent form will be scanned and stored in the database system and the physical copy will be destroyed. It is emphasised that participation in the trial is voluntary and that the patient can withdraw his or her consent at any time without consequences for further and continued treatment.
Trial registration number NCT03845101.
Strengths and limitations of this study
► The present study will be the first large randomised clinical trial to investigate virtual reality exposure therapy for social anxiety disorder and agoraphobia in group therapy.
► The present study is very closely integrated with clinical practice, making results highly transferable to similar real-life settings.
► Mixing patients with social anxiety disorder and agoraphobia in the same therapy groups has never been investigated systematically, which may confound the interpretation of results.
► Because the study is embedded in an outpatient hospital setting, the intervention was designed to be flexible. This increases the ecological validity but also the risk of systematic bias in treatment administration.
BACKGROUND
Social anxiety disorder is characterised by paying attention to oneself in an exaggerated manner and having a marked fear of being negatively evaluated by other people. 1 2 Agoraphobia is characterised by avoidance of, or enduring with dread, situations in which escape is perceived as difficult or where help might not be available in the event of a panic attack, panic-like symptoms or incapacitating symptoms such as loss of bladder and/or bowel control. 1 3 Both social anxiety disorder and agoraphobia are associated with marked functional consequences. 1 In Denmark, anxiety disorders represent the costliest disease burden in terms of lost production, due to their early onset, long duration and high prevalence. 4 The first-line treatment for social anxiety disorder and agoraphobia is cognitive behavioural therapy (CBT) with exposure therapy. 5 6 Several meta-analyses have found that patients with social anxiety disorder and agoraphobia respond well to CBT with exposure therapy, provided in individual as well as group format. 7-10 Exposure therapy aims to change the expectations and emotional responses associated with feared stimuli, by exposing the patient to the stimuli and challenging the patient's expectancies of the likelihood and consequences of a feared outcome. 11 However, in clinical practice, in-vivo exposure stimuli can be difficult to access and control, and patients or therapists sometimes reject the treatment because they consider it too aversive or too logistically demanding. 12-14
Virtual reality exposure therapy for social anxiety disorder and agoraphobia
Virtual reality (VR) technology allows the user to experience virtually mediated environments that are perceived as real or almost real, due to multisensory stimulation and blocking of real-world sensory input. Numerous possibilities for psychological intervention using VR are currently being researched owing to its immersive quality. 15 16 As a therapy tool, VR is most widely used to perform virtual reality exposure therapy (VRET), 16 17 either as a standalone treatment 18 or integrated into a CBT treatment. 19 The use of VR allows flexible and controlled exposure to challenging situations in an immersive and safe environment. Therefore, VRET can mitigate the challenges of in-vivo exposure therapy by producing greater user acceptance and access to situations that would otherwise be too difficult to control, too resource-intensive to find and/or have unacceptable confidentiality risks. 15 19 20 Based on this, VRET may improve the efficacy and cost-effectiveness of psychotherapeutic interventions for anxiety disorders.
Recent reviews and meta-analyses of VRET, either as a standalone treatment or combined with cognitive interventions, conclude that VRET is more effective than waitlist and placebo controls and equally as effective as first-line treatment controls for anxiety disorders. 21-23 However, in one meta-analysis, the authors find significantly worse treatment effects of VRET for social anxiety disorder when compared with control groups that received equal amounts of in-vivo exposure. 24 It has been suggested that it is more difficult to produce VRET environments for social anxiety disorder, as compared with other phobic disorders, because human interaction is complex and therefore difficult to recreate realistically, 25 which may explain these results. Accordingly, the same meta-analysis found no significant difference in treatment efficacy for CBT with VRET versus CBT with in-vivo exposure for agoraphobia and specific phobia. 24 In general, there is a scarcity of high-quality randomised clinical trials evaluating the use of VRET for social anxiety disorder and agoraphobia. 16 26 27 For social anxiety disorder, five trials have been published, the largest having 97 participants. 18 19 28-30 For agoraphobia, six trials have been published, the largest having 80 participants. 31-36 All in all, the evidence base for using VRET compared with in-vivo exposure for social anxiety disorder and agoraphobia remains small. Therefore, larger studies that capitalise on the unique qualities of VRET are needed.
VR exposure in group therapy
VRET has never been investigated in a group format. Group therapy for social anxiety disorder and agoraphobia is popular in outpatient settings because it has similar treatment efficacy 37-39 and is proposed to have better cost efficiency compared with individual therapy. 37 39 However, the claim of cost efficiency for social anxiety disorder is disputed, at least in a UK mental healthcare setting. 40 Beyond that, therapeutic interpersonal processes such as peer learning and modelling have been suggested to be a distinct benefit of group therapy, 41 42 though this has never been systematically evaluated for mixed anxiety groups. A suggested drawback of group CBT compared with individual CBT is that in-vivo exposure in group therapy is constrained by the logistics of managing several patients simultaneously, leading to comparatively less individualised exposure exercises. 43 44 The use of VRET in group therapy may therefore be especially beneficial, since it should allow for individualised exposure, as well as a greater amount of exposure therapy, because less time will be spent on logistical issues (transport, planning, waiting and so on), while at the same time retaining the proposed benefits of therapeutic interpersonal processes and cost efficiency.
Treatment of social anxiety disorder and agoraphobia in the Danish mental health system
In the Danish mental health services, patients with social anxiety disorder or agoraphobia as their primary diagnosis are generally offered group CBT. To reduce wait time, patients with these diagnoses are treated in the same therapy groups, generally referred to as 'mixed anxiety groups' or 'phobia groups'. These mixed anxiety groups are considered to be as effective as diagnosis-specific groups, due to the overlap in symptoms and diagnostic criteria, 45 the high degree of comorbidity, 46 as well as recent evidence of the acceptable treatment efficacy of CBT-based transdiagnostic therapies. 47 However, it is worth noting that the pragmatic mixed anxiety group format has never been systematically evaluated and that the official treatment recommendation remains diagnosis-specific CBT delivered in group or individually. 48 To maximise the study's clinical representativeness, as defined by Shadish et al, 49 the treatment structure in the present study, including the comparator, will mimic the treatment offered by the Danish mental health services.
Aim and objectives
In summary, in-vivo exposure is considered effective but can be challenging to perform. VRET may alleviate these challenges. However, the usefulness of VRET for social anxiety disorder and agoraphobia remains unclear. Larger studies that capitalise on the benefits of VRET are needed. Group therapy may be one way to capitalise on the benefits of VRET, because it could allow for more individualised exposure exercises. Mixed anxiety groups are commonly used in Danish mental healthcare to reduce wait time but have not been systematically evaluated. The treatment, inclusion and exclusion criteria described in the present study match the eligibility criteria for treatment and the treatment format of the Danish mental healthcare system, to maximise the transferability of results to clinical practice. Therefore, the SoREAL trial aims to evaluate the treatment efficacy of VRET in mixed anxiety CBT groups (CBT-in virtuo) compared with mixed anxiety CBT groups where exposure therapy is performed in vivo (CBT-in vivo).
Thus, in the SoREAL trial, the following hypotheses will be tested:
Primary hypothesis
1. Post-treatment, patients treated with CBT-in virtuo will have a lower level of anxiety symptoms compared with patients treated with CBT-in vivo, measured as total scores on the Liebowitz Social Anxiety Scale (LSAS) for patients with social anxiety disorder and the Mobility Inventory for Agoraphobia (MIA) for patients with agoraphobia, converted to percentage of maximum possible (POMP) scores and averaged within treatment arms.
Secondary hypotheses
1. One year after treatment, patients treated with CBT-in virtuo will have lower levels of anxiety symptoms compared with patients treated with CBT-in vivo.
2. Post-treatment and 1 year after treatment, patients treated with CBT-in virtuo will have lower levels of fear of negative evaluation compared with patients treated with CBT-in vivo.
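Because LSAS and MIA totals live on different scales, the primary outcome converts them to POMP scores before averaging within arms. A minimal sketch, assuming the standard POMP definition of 100 × (raw − scale minimum) / (scale maximum − scale minimum); the per-instrument scale bounds are passed in explicitly rather than assumed here.

```python
def pomp(raw, scale_min, scale_max):
    """Percentage of maximum possible (POMP) score."""
    return 100.0 * (raw - scale_min) / (scale_max - scale_min)

def arm_mean_pomp(patients):
    """patients: iterable of (raw_total, scale_min, scale_max) tuples,
    mixing LSAS totals (social anxiety disorder) and MIA totals
    (agoraphobia). Returns the mean POMP score for one treatment arm."""
    scores = [pomp(raw, lo, hi) for raw, lo, hi in patients]
    return sum(scores) / len(scores)
```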
Overall, we believe that the SoREAL trial will contribute knowledge about the efficacy and feasibility of VRET for treating social anxiety disorder and agoraphobia in a clinical outpatient setting. The results of this trial may guide future applications of VR in clinical settings across a wide breadth of use cases.
METHODS AND DESIGN
This article was written in accordance with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 explanation and elaboration: guidance for protocols of clinical trials. 50 The SPIRIT Checklist was followed and the SPIRIT flowchart was used (see online supplemental file 1 and figure 1).
Recruitment
The SoREAL trial is embedded directly into five outpatient clinics offering group CBT for social anxiety disorder and agoraphobia. These clinics are part of the Danish mental healthcare system. To be eligible for treatment in these clinics, patients must be referred by their primary care physicians to a Centre for Visitation and Diagnosis in their area, where their symptomatology will be assessed. At the Centre for Visitation and Diagnosis, they must be referred to one of the five outpatient clinics involved in the study. At the outpatient clinic, the patient will again be clinically assessed, and a diagnosis and treatment plan will be formulated. If social anxiety disorder and/or agoraphobia is considered the primary diagnosis for the patient, they will be asked if they are interested in getting more information about the trial. If they consent to it, their contact details will be given to a researcher, who will invite them to an interview concerning the study.
The Mini International Neuropsychiatric Interview (MINI), V.7.0 for DSM-5, will be used to screen for diagnosis. Psychometric analyses of the MINI have demonstrated acceptable test-retest and inter-rater reliability. 51 52 Diagnostic screening is sufficient due to the thorough assessment by both the Centre for Visitation and Diagnosis and the outpatient clinics, which must have confirmed social anxiety disorder or agoraphobia as the primary diagnosis for the patient to be eligible for the study. If eligibility is confirmed, informed consent is acquired (see online supplemental file 2 for a model consent form). Patients who cannot or will not participate in the study will be offered treatment as usual, which is identical to the control group treatment. Inclusion and exclusion criteria were based on the eligibility criteria for receiving the treatment package in Danish outpatient clinics.
Feasibility
Five psychotherapeutic outpatient clinics are involved in the study. All patients referred to these clinics with a relevant diagnosis, who also agree to be contacted, will be invited to an interview about their potential participation. Each of the clinics provides treatment for approximately 30 patients with social anxiety disorder and/or agoraphobia every year. Thus, over a 3-year recruitment period we anticipate that approximately 450 patients (five clinics × 30 patients per year × 3 years) will be eligible for the trial. We expect a high eligibility rate, due to the previously mentioned assessment procedures the patients will have completed. We also expect a high acceptance rate, due to the novel use of VR technology and the use of a control group treatment that is identical to the treatment patients would be offered if they refused participation. See figure 2 for a flow diagram of the SoREAL trial.
Treatment format
The treatment for social anxiety disorder and agoraphobia offered at the outpatient clinics must follow the national guidelines for the treatment of these disorders. The guidelines are encapsulated in specified 'treatment packages'. For social anxiety disorder and agoraphobia, this package contains:
► 1 hour of assessment.
► 1 hour of individual therapy in preparation for group therapy.
► 1 hour of psychometric testing.
► 14 sessions of 2 hours of group therapy.
► 1.5 hours of next of kin involvement.
► 1 hour of pharmacological treatment planning with a psychiatrist.
► 2.5 hours of coordination with social services, relapse prevention and follow-up meetings.
Not all of this is necessary for every patient, but every patient can receive every part of the package, should they want to. The treatment in the present study must live up to the standards of the national guidelines. Patients are not allowed to be in any other form of psychotherapeutic treatment.
The therapeutic intervention is manual-based cognitive behavioural group therapy adapted from the approach of Turk et al, 53 from Rosenberg et al 55 and with inspiration from Bouchard et al. 56 The treatment will consist of 14 weekly 2-hour group sessions following the manual to ensure equal and uniform treatment for every patient throughout the study. The manual allows flexibility to ensure clinically representative conditions. 49 For example, it is allowed to change the order of the sessions if this is considered beneficial for the group, and multiple exercises are optional. However, the time dedicated to exposure is fixed in both groups. Concurrent psychopharmacological treatment is allowed in both intervention arms.
Groups will consist of 8-9 patients with social anxiety disorder and/or agoraphobia as their primary diagnosis, and every session will be led by two trained clinicians (ie, psychologists, psychiatrists or psychotherapists) with practical experience in CBT and in vivo exposure. Throughout the course of the study, the clinicians involved will treat both CBT-in vivo and CBT-in virtuo groups. Medical consultation, acute individual sessions, supplementary social counselling and physical therapy are possible in both intervention arms. In both intervention arms, the sessions dedicated to exposure are scheduled from the fifth to the eleventh session with approximately 45 min of exposure in each session. From the fifth session and onwards, all patients in both interventions will have in-vivo exposure as homework. The cognitive therapy strategies used in the non-exposure sessions (first four and last two therapy sessions) are as follows: (1) introduction to CBT; (2) psychoeducation about anxiety and cognitive restructuring of dysfunctional assumptions and beliefs; (3) shifting self-focused attention and modifying cognitive distortions; (4) developing an understanding of safety behaviour and the rationale of exposure; (5) evaluation, discussion and feedback on the use of patientacquired techniques; and (6) relapse prevention. In both conditions, the exposure exercises aim to develop adaptive responses to anxiety-provoking situations, reinforce cognitive restructuring by framing exercises as behavioural experiments (though these were limited by the non-interactive medium), train attention exercises, train general cognitive strategies (eg, identifying negative automatic thoughts) and train social skills. See tables 1 and 2 for an overview of the content of the CBT sessions for both conditions.
In the in virtuo condition, exposure will take place during 8 out of the 14 group sessions, as in the CBT-in vivo condition. Patients will be exposed to VR situations which are relevant to them and which they are motivated to engage in. Patients in the CBT-in virtuo condition will be assigned in vivo exposure homework between sessions in the same way as the CBT-in vivo group.
Table 1 (continued), session contents:
9. Introduction to core beliefs, additional exposure exercises.
10. Repetition of core beliefs, resources and skills, additional exposure exercises.
11. Exposure therapy, out of the clinic.
12. Repetition and evaluation of methods learnt/used so far, revising problem-goal list.
13. Evaluation, discussion and feedback on the different methods used by each patient.
14. Maintenance and relapse prevention, review of skills, review of progress and future goals, plan for continued exposures, relapse prevention strategies.
Fidelity to the treatment manual
The intervention is manual-based, which improves the standardisation of the treatment. Fidelity to the treatment manual will be assessed through a self-report questionnaire answered by the clinicians at five different time points throughout each group treatment. The questionnaire (and the time points at which it is delivered) is designed to correspond to the treatment manual. This type of fidelity measurement has proved useful and adequate in trials where the effect of treatment is tested. 57 The VR headsets will also record usage statistics for the 360° films. These data show which specific scenes were watched and for how long, and can be matched to the individual patient. These data will be used to keep track of VR usage throughout the study and to see how well it matches the treatment manual.
Treatment completion and discontinuation
Criteria for treatment completion, partial treatment and no treatment were based on clinical guidelines for writing epicrises (discharge summaries), as well as discussions within the research group.
► The attendance of 10 or more group therapy sessions will be coded as 'treatment completion'.
► The attendance of between four and nine group therapy sessions will be coded as 'partial treatment'.
► The attendance of fewer than four group therapy sessions will be coded as 'no treatment'.
Treatment will be discontinued if participants do not show up to treatment 3 weeks in a row and cannot be contacted after multiple attempts by the therapists. Participants who have their treatment discontinued will still be included in the statistical analysis.
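These attendance criteria translate directly into a small coding function. The sketch below is illustrative only: the function name is hypothetical, and the 10-session completion threshold follows from the 'four to nine' partial-treatment band (out of 14 sessions) rather than from an explicit statement in the protocol.

```python
def code_treatment_status(sessions_attended: int) -> str:
    """Code attendance per the trial's criteria (hypothetical helper):
    10+ of the 14 sessions = completion, 4-9 = partial, <4 = no treatment."""
    if sessions_attended >= 10:
        return "treatment completion"
    if sessions_attended >= 4:
        return "partial treatment"
    return "no treatment"

assert code_treatment_status(12) == "treatment completion"
assert code_treatment_status(6) == "partial treatment"
assert code_treatment_status(2) == "no treatment"
```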
VR equipment
The patients receiving the in virtuo exposure will be immersed using an Oculus Go head-mounted display, enabling viewing of 360° spherically camera-recorded VR environments. The VR scenarios will thus be high-resolution 360° stereoscopic films that are played around the viewer. For audio, the patients will use high-quality sound-blocking headphones. For ease of use, the individual videos will be administered from an app that has been designed to be as intuitive to operate as possible. The patient will only have to put on the headset, adjust the focus and choose the desired environment by looking at it in the app. 360° video was chosen because it gives the most photorealistic visuals, while also being the cheapest to produce. The downside is that it does not allow direct user interaction (eg, the viewer cannot affect the environment in any way). To circumvent this, there are multiple junctions throughout the films where the actors will talk directly and unsolicited to the viewer (eg, greetings, common questions), while also allowing time for the viewer to respond. The actors respond in a generic way to the actions of the viewer. Unsolicited and direct address from a virtual environment seems to be an essential factor in triggering realistic responses to it. 58 Though the non-interactivity of the environment limits the flexibility of behavioural experiments, it does not make them impossible. For example, it is still possible to hypothesise about internal states (eg, 'I will clam up if I have to present in front of people') and to identify and challenge negative automatic thoughts.
VR scenarios
Thirteen VR exposure scenarios relevant for social anxiety disorder and agoraphobia were chosen for the CBT-in virtuo condition.
Table 2 (extract): session contents for the CBT-in virtuo condition, sessions 9-14.
9. Introduction to core beliefs, additional VRET exercises.
10. Repetition of core beliefs, resources and skills, additional VRET exercises.
11. VRET combined with in-vivo out-of-the-clinic exposure exercises.
12. Repetition and evaluation of methods learnt/used so far, revising problem-goal list.
13. Evaluation, discussion and feedback on the different methods used by each patient.
14. Maintenance and relapse prevention; review of skills; review of progress and future goals; plan for continued exposures; relapse prevention strategies.
The 13 scenarios are as follows:
1. Standing in line in a supermarket.
2. Being in a crowded shopping centre.
3. Attending a party.
4. Attending a formal meeting and giving a presentation.
5. A job interview.
6. Small talking/discussing in a university canteen with young adults.
7. Small talking/discussing in a canteen in a work setting.
8. Entering an auditorium.
9. Leaving your apartment.
10. Waiting for and taking the bus.
11. Crossing a bridge.
12. Taking an elevator.
13. Taking a commercial aeroplane.
Each scenario has four to six scenes of increasing difficulty, as well as a neutral scene to familiarise patients with the VR setting. All scenes skip to a looping version of a scene in the same environment after being played, to allow patients to achieve within-session habituation if needed. See online supplemental file 3 for screenshots and descriptions of the individual scenes, as well as links to view a selection of the scenes online. All identifiable persons depicted in the virtual environments are paid actors.
Patient and public involvement: development of VR scenarios and manual
The pilot phase was a continuous iterative process between the developers of the VR media, CBT-trained clinicians and a panel of patients with social anxiety disorder and/or agoraphobia. The process lasted approximately 16 months (12 for the social anxiety disorder environments and 4 for the agoraphobia environments) and consisted of regular meetings following each scenario's initial filming, wherein the patients saw the VR scenario in question. Their experience (eg, the anxiety level provoked by the films, the validity of the scenarios) was then used as a starting point for a discussion of further development of and alterations to the scenarios. Towards the end of the development of the scenarios and the application used to launch them, two clinicians tested the usability of VRE in a group format. The clinicians and patients then gave further feedback on the films and the delivery of the exposure in the group. This guided the initial draft of a group CBT manual with VRE for social anxiety disorder and agoraphobia.
Assessment
Diagnostics
The MINI V.7.0 for DSM-5 will be used to screen for diagnoses. At the inclusion interview, all modules except P will be used to assess diagnostic eligibility. At the baseline, post-treatment and follow-up interviews, all modules except P will be used to assess diagnosis and detect comorbidity.
Outcomes and sample size calculation
We originally designed the trial around the inclusion of only patients with social anxiety disorder, basing the sample size calculation on the following parameters for the LSAS: with alpha=0.05, 80% power and an expected SD of 21, 302 patients would be required to detect the minimal relevant difference of 6.8 on the LSAS total score between the groups.
On deciding to expand the diagnostic criteria for inclusion to also include patients with agoraphobia, it was necessary to change our primary outcome measure. For patients with agoraphobia, we primarily rate symptoms using the MIA. To include both patients with social anxiety disorder and patients with agoraphobia, we thus decided to recalculate scores on these two scales to the POMP, as described below. Since the sample size calculation for the LSAS was based on a Cohen's d=0.33, we also set the minimum clinically relevant difference on the MIA, and by extension on the POMP, to d=0.33. Consequently, the required sample size remained unaffected by this change of primary outcome measure and is thus still 302 patients. See figure 3 for power calculations on secondary outcomes.
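For illustration, the reported numbers can be approximately reproduced with the standard two-sample normal-approximation formula. This is a sketch under stated assumptions, not the trial's actual calculation, which may have used different software or small-sample corrections (the sketch yields roughly 300 rather than exactly 302).

```python
import math
from scipy.stats import norm

alpha, power = 0.05, 0.80
sd, delta = 21.0, 6.8                      # expected SD and minimal relevant LSAS difference

d = delta / sd                             # Cohen's d ~= 0.32, reported as 0.33
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_group = math.ceil(2 * (z / d) ** 2)  # normal-approximation two-sample formula

print(d, n_per_group, 2 * n_per_group)     # ~0.324, 150 per group, ~300 in total
```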
Primary outcome
Total scores on the LSAS for patients with social anxiety disorder and the MIA for patients with agoraphobia, measured pretreatment, post-treatment and at 1-year follow-up, converted to the POMP and averaged within treatment arms. POMP calculations can bring differently measured items to the same metric and do not change the multivariate distribution or covariance matrix of the transformed variables. Therefore, scales transformed with the POMP method can be used to examine mean-level differences between groups. [59][60][61] Using POMP-transformed scores on two different measures of phobic anxiety makes it possible to include patients with different primary diagnoses in the same analysis, thus avoiding the need for approximately double the number of participants to reach a sufficient sample size. The downside of this method is that differences in the sensitivity of the outcome measures, and potential differences in treatment effect between patients with social anxiety disorder and agoraphobia, which have been observed in diagnosis-specific treatment, 62 are also averaged out, thus possibly skewing results.
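As a minimal sketch of the POMP transformation: a raw score is rescaled linearly onto a 0-100 scale using the instrument's minimum and maximum possible scores. The ranges used in the example (0-144 for the LSAS total, 1-5 for a mean MIA score) are common conventions for these instruments and are assumptions here, not values stated in this protocol.

```python
def pomp(score: float, scale_min: float, scale_max: float) -> float:
    """Percent of maximum possible score: rescales a raw score to 0-100."""
    return 100.0 * (score - scale_min) / (scale_max - scale_min)

# Assumed ranges (not stated in the protocol): LSAS total 0-144, MIA mean 1-5.
lsas_pomp = pomp(62, 0, 144)   # ~43.1
mia_pomp = pomp(2.8, 1, 5)     # 45.0
```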
Social anxiety disorder symptom severity will be measured using a Danish version of the LSAS. The LSAS assesses 24 situations typically feared by individuals with social anxiety disorder, each rated on anxiety and avoidance, divided into subscales of performance anxiety and social situations. It has acceptable psychometric properties. 63 Agoraphobia symptom severity will be measured using a Danish version of the MIA. The MIA assesses avoidance of 26 situations typically feared by patients with agoraphobia. 64 The MIA has demonstrated excellent psychometric properties and has been validated in multiple languages, including Swedish. 65 66
Secondary outcomes
► Treatment response on social anxiety disorder symptoms, measured as an LSAS score below 50 or a 15-point drop.
► Treatment response on agoraphobia symptoms, measured as a MIA score below 2 or a 0.5-point drop.
► Remission of social anxiety disorder symptoms, measured post-treatment and at follow-up as an LSAS score below 25 73 and not qualifying for social anxiety disorder as measured using the MINI.
► Remission of agoraphobia symptoms, measured post-treatment and at follow-up as a MIA score below 1.5 and not qualifying for agoraphobia as measured using the MINI.
Other measures
► Unwanted negative side effects induced by immersion in VR (commonly referred to as cybersickness) will be measured with the Simulator Sickness Questionnaire (SSQ) 78 at the end of VRE sessions.
► Deterioration and adverse effects of psychotherapy on social anxiety disorder symptoms, measured post-treatment and at follow-up as a 6.8-point or greater increase in total LSAS score. Patients who have deteriorated will be interviewed about their experiences in therapy.
► Deterioration and adverse effects of psychotherapy on agoraphobia symptoms, measured post-treatment and at follow-up as a 0.3-point increase in total MIA score. Patients who have deteriorated will be interviewed about their experiences in therapy.
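To make the response, remission and deterioration cut-offs concrete, they can be expressed as simple predicates, as in the sketch below; the function names and data layout are hypothetical, and the thresholds are taken from the bullets above.

```python
def sad_response(lsas_post: float, lsas_pre: float) -> bool:
    """Response on social anxiety disorder symptoms:
    LSAS below 50, or a drop of at least 15 points from baseline."""
    return lsas_post < 50 or (lsas_pre - lsas_post) >= 15

def agoraphobia_response(mia_post: float, mia_pre: float) -> bool:
    """Response on agoraphobia symptoms:
    MIA below 2, or a drop of at least 0.5 points from baseline."""
    return mia_post < 2 or (mia_pre - mia_post) >= 0.5

def sad_remission(lsas_post: float, mini_sad_positive: bool) -> bool:
    """Remission: LSAS below 25 and not qualifying for the diagnosis on the MINI."""
    return lsas_post < 25 and not mini_sad_positive

def sad_deterioration(lsas_post: float, lsas_pre: float) -> bool:
    """Deterioration: an increase of 6.8 points or more in total LSAS score."""
    return (lsas_post - lsas_pre) >= 6.8
```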
Explorative outcomes
The experience of social presence, as described by Lee, 79 will be measured after each VR exposure session with a scale consisting of nine questions rated on a 1-7 Likert scale. This scale was developed specifically for this trial because existing scales are too specific to the VR equipment and content they were developed for. Social presence is measured instead of the more general construct of presence because it has been theorised to be a critical element in the effective use of VRE for socially related fears. 80 81
Data from medical records
The following data will be retrieved from the participants' medical records with consent, only if the participant cannot remember it:
1. Number of previous hospitalisations for mental health conditions or medical conditions.
2. Use of mental health services during the follow-up period.
3. Current and previous psychopharmacological medication.
4. Attendance rate of the CBT treatment.
Setting of assessment
Assessment will take place at the outpatient clinics where the patients also receive treatment. Self-report questionnaires (MIA, FNES, CSQ, WAI, WSAS, WHO-5) will be answered by following a link sent to the patient's email address, which the patients can access either on a personal device or on one of the clinic's computers. If preferred by the patient, the self-report questionnaires can be filled out on printed copies of the scales at the assessment interview. The MINI, LSAS, PSP, HAM-D6 and TLFB will be administered by trained researchers and research assistants. After each session with VRE, specific questionnaires (Social Presence Scale and Simulator Sickness Questionnaire) will be administered by the clinicians delivering the intervention. If necessary, due to the global COVID-19 pandemic, assessment interviews will be performed via telephone.
Randomisation
Randomisation is performed by randomising each therapy group 1 week before the first treatment session. This means that no patient is included while their treatment allocation is known. The randomisation is done with a hidden allocation sequence generated from www.sealedenvelope.com and is centralised and handled with the randomisation module in Research Electronic Data Capture (REDCap) by a project manager uninvolved in the data collection. Block sizes will be unknown to the outcome assessors and clinicians. The stratification factor is the treatment site. Allocation tables will be handled by external researchers with no affiliation with the project. An email with the group's assigned allocation will be sent to the team leaders organising the logistics of the interventions in the psychotherapeutic clinics. The assigned randomisation of the groups will be stored by the research team's data manager. The randomisation code will be stored in REDCap.
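For illustration only, stratified block randomisation of therapy groups can be generated along the following lines. This sketch does not reproduce the actual Sealed Envelope/REDCap procedure; the block sizes, seeds and labels are hypothetical.

```python
import random

ARMS = ("CBT-in vivo", "CBT-in virtuo")

def blocked_allocation(n_groups: int, block_sizes=(2, 4), seed: int = 0) -> list:
    """Randomly permuted blocks keep the two arms balanced over time;
    varying block sizes make the sequence hard to predict."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_groups:
        block_size = rng.choice(block_sizes)
        block = list(ARMS) * (block_size // len(ARMS))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_groups]

# One independent allocation list per stratum (the treatment site).
for i, site in enumerate(("Site A", "Site B")):
    print(site, blocked_allocation(n_groups=8, seed=100 + i))
```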
Blinding
The assessors are blinded when interviewing at pretreatment, post-treatment and at follow-up. Should unblinding occur, another researcher will perform the assessment. Blinded researchers will perform analysis and draft conclusions. There are no circumstances where unblinding of the assessors is permissible.
Data collection methods and management
See figure 1 for an overview of data collection. Self-reported data will be collected through surveys sent via REDCap or filled out on paper. Assessors are trained in the interview instruments and will do regular co-ratings of recorded interviews. Inter-rater reliability of clinician-rated outcome measures will be calculated throughout the trial. The interviewers will import data from the assessments directly into the electronic Case Report Form using the data entry system REDCap. 82 REDCap is an electronic data capture tool hosted at the Center for IT, Medico and Telephony (CIMT) in the Capital Region of Denmark. For non-self-report measures, data will first be captured on paper and then entered electronically. REDCap complies with Danish legislation (the Act on Processing of Personal Data) because it has both comprehensive user rights and access control management and a complete audit trail on all data transactions. The data from individual patients are tied to a unique serial number. Assigned researchers and Good Clinical Practice (GCP) monitors will be the only people who can access the database. Non-electronic data will be stored locally in secure archives. Data will be exported from REDCap without personal identifiers. Data can be exported to common statistical software packages (SPSS v.28, SAS v.15.2, Stata v.17, R v.4.1.2) and stored on a secure network drive under the control of CIMT. A data manager will ensure that all variables are correctly defined with variable and value labels. All derived variables will be correctly defined, and algorithms will be kept in individual files. All data will be scrutinised to identify errors in data entry. The sponsor and the principal investigators will ensure that data are stored for at least 10 years after the trial has ended.
Statistical methods
All analyses will follow the intention-to-treat principle, and all included patients will be included in the analyses. All statistical tests of significance will be two-tailed. Missing data will be handled by multiple imputation (m=100). As predictors in the imputation model, we will select variables that are independent predictors of the outcome or predictors of missing data (p<0.05 in a univariate model). Imputations will be done separately for each group. Analysis of covariance, with the baseline value and the stratification variables as covariates, will be used to test for differences between the two groups.
Continuous variables will be imputed with linear regression, binary variables with binary logistic regression, multinomial variables with multinomial logistic regression and ordinal variables with ordinal logistic regression; 100 imputations will be performed for every type of variable.
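As a sketch of how the imputation and ANCOVA steps could be chained in practice, the following uses statsmodels' MICE implementation; the variable names and data layout are hypothetical, and note that MICEData imputes via predictive mean matching by default rather than the type-specific regression models described above.

```python
import statsmodels.api as sm
from statsmodels.imputation import mice

# df: pandas DataFrame, one row per patient, with numeric columns
# pomp_post, pomp_pre, arm (0/1) and site (0/1); missing values are NaN.
imp = mice.MICEData(df)

# ANCOVA-style model: post-treatment POMP score adjusted for the
# baseline score and the stratification variable, with arm as predictor.
analysis = mice.MICE("pomp_post ~ pomp_pre + arm + site", sm.OLS, imp)
results = analysis.fit(n_burnin=10, n_imputations=100)  # m = 100, pooled by Rubin's rules
print(results.summary())
```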
All distributions will be assessed for normality using visual inspection of histograms and Q-Q plots. If not normally distributed, variables will be log-transformed, and if unsuccessful, a non-parametric test will be used.
For dichotomous outcomes, we will perform multiple logistic regressions with treatment as usual as reference and stratification variables as covariates after having imputed missing values using a logistic regression model.
Dissemination
A trial protocol, including a plan for statistical procedures, has been published at www.clinicaltrials.gov/ct2/show/NCT03845101. This will ensure that the SoREAL trial is conducted and analysed as planned. Possible deviations, and the reasons for them, will be described in publications. All data published will be verified for authenticity by checking for internal inconsistencies. All results, positive, negative as well as inconclusive, will be published as quickly as possible, in concordance with Danish law on the protection of confidentiality and personal information. Results will be presented at national and international scientific conferences. Lastly, results will be presented at relevant mental health centres in Denmark.
Data monitoring and auditing
As in GCP monitoring, an independent committee will check the following data for the included patients: informed consent, inclusion in and exclusion from the intervention, serious adverse events and severe adverse reactions. It will be checked whether there is a link between trial allocation and the serious adverse events and severe adverse reactions.
Safety
In the clinical setting, the clinicians will register adverse events and adverse reactions and report all serious adverse events and severe adverse reactions to the sponsor. Other events or side effects will be collected from patient files and registers. The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use Good Clinical Practice guidelines define serious adverse events and serious adverse reactions. The patients in the SoREAL trial are covered by Danish law and the patient care regulations. Every patient in the SoREAL trial will have access to the results of the trial if they wish. The clinicians will not have access to data collected from assessments done by the researchers.
Trial status
Inclusion began on 4 February 2019. Inclusion is expected to stop on 4 June 2023. Inclusion was delayed by approximately 3 months due to the COVID-19 pandemic.
Contributors Authorship is based on the Vancouver guidelines. All authors have read, revised and approved the manuscript. MN and NR had the original idea for the trial. MN wrote the application for the Novo Nordisk Foundation and is the PI of the trial. CH generated the allocation sequence, carried out the power calculations and will be responsible for supervising the statistical analyses. NR was responsible for the non-experimental content of the CBT. CWC, KSM, CISS, PB and BA directed the development of the VR films. CWC, KSM, UKG, DS, PW, BA and PB developed the manual and guidelines for using VRET in group therapy. MH was responsible for outcome measures. BA and PB developed the Social Presence Scale and fidelity measures. BA set up the randomisation, built and manages the database, and is responsible for all participant assessment, including training and managing research assistants.
Funding MN and NR initiated the project. MN applied to the Novo Nordisk Foundation, and the SoREAL trial was granted 5,000,000 DKK [NNF17OC0027780]. MN and NR have no affiliation to the Novo Nordisk Foundation. MN, PB and BA applied to TrygFonden, and the trial was granted an additional 3,517,500 DKK [ID: 146169]. MN, PB and BA have no affiliation to TrygFonden. The project is entirely independent of the Novo Nordisk Foundation and TrygFonden, and therefore the funding bodies play no role in the design of the study; the collection, analysis and interpretation of data; or the writing of the manuscript. Nor will the Novo Nordisk Foundation or TrygFonden play any role in future publications that may derive from the project.
Competing interests None declared.
Patient consent for publication Not applicable.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and | 2022-02-04T06:18:09.005Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "50cf3db80f027e4cd61e7f0cfbd931d44ac0da8a",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/2/e051147.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "255385ce467bbca84bcb404777aae09a0db82840",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264177030 | pes2o/s2orc | v3-fos-license | Cell culture‐derived extracellular vesicles: Considerations for reporting cell culturing parameters
Abstract Cell culture‐conditioned medium (CCM) is a valuable source of extracellular vesicles (EVs) for basic scientific, therapeutic and diagnostic applications. Cell culturing parameters affect the biochemical composition, release and possibly the function of CCM‐derived EVs (CCM‐EV). The CCM‐EV task force of the Rigor and Standardization Subcommittee of the International Society for Extracellular Vesicles aims to identify relevant cell culturing parameters, describe their effects based on current knowledge, recommend reporting parameters and identify outstanding questions. While some recommendations are valid for all cell types, cell‐specific recommendations may need to be established for non‐mammalian sources, such as bacteria, yeast and plant cells. Current progress towards these goals is summarized in this perspective paper, along with a checklist to facilitate transparent reporting of cell culturing parameters to improve the reproducibility of CCM‐EV research.
INTRODUCTION
Extracellular vesicles (EVs) are heterogeneous membrane-delimited particles that are released from cells in physiological and pathological states (Buzas, 2023; György et al., 2011; Witwer & Théry, 2019) and include endosome-origin exosomes and plasma membrane-shed ectosomes or microvesicles. EVs are composed of luminal cargo, a surrounding phospholipid membrane and membrane-embedded and -attached macromolecular entities. A more loosely associated bio-corona may also be of critical importance (Buzas, 2022; Wolf et al., 2022; Yerneni et al., 2022). In vivo, EVs are present in biological fluids such as blood, cerebrospinal fluid, saliva, tears and urine and in solid tissues. In vitro, EVs can be prepared from cell culture-conditioned media (CCM; Figure 1a). EVs in CCM are released by the cultured cells but may also originate from supplements used for cell growth and/or differentiation. Since cells and medium supplements also release or contain non-EV particles, it is often necessary to separate EVs from non-EV components to identify EV-specific contents and functions (Figure 1b).
Among the different sources of EVs, CCM is especially important because CCM-derived EVs (CCM-EV), whether native or engineered, can exert therapeutic functions. CCM-EV have been administered to animals and patients to assess their therapeutic potential for several pathological conditions (Elahi et al., 2020; Kordelas et al., 2014; Kwon et al., 2020; Morse et al., 2005; Nassar et al., 2016; Park et al., 2021; Sengupta et al., 2020; Shekari et al., 2021; Warnecke et al., 2021). Table S1 gives an overview of clinical trials which have used or are currently using CCM-EV from cellular sources such as dendritic cells, mesenchymal stromal cells (MSCs), tumor cells, T cells and plant cells. Producing cells, cell culture components, culture conditions and CCM collection and processing will impact the biochemical composition of EVs, association with other entities such as a co-purifying bio-corona, and biological function. However, there are currently no reporting guidelines for CCM-derived EV.
To maximize the reliability and reproducibility of CCM-EV research, transparent reporting is needed for the production and characterization of CCM-EV. There are several guidelines on good practices for general cell and tissue culture for cellular therapy (Coecke et al., 2005; Pamies, 2018; Pamies et al., 2017). Furthermore, minimal criteria for reporting EV studies were proposed in MISEV2014 (Lötvall et al., 2014) and MISEV2018 (Théry et al., 2018), and a systematic assessment of articles published from 2012 to 2020 showed a clear association of study quality with citation of MISEV (Poupardin et al., 2021). In addition, working groups including members of the International Society for Extracellular Vesicles (ISEV), the International Society for Cell and Gene Therapy (ISCT), and the Society for Clinical, Research and Translation of Extracellular Vesicles Singapore (SOCRATES) have published best practice recommendations to produce MSC-EVs and assays for therapeutic applications (Gimona et al., 2021; Pachler et al., 2017; Rohde et al., 2019; Witwer et al., 2019). EV production differs between donors and tissue sources (Almeria et al., 2022; Fafián-Labora et al., 2017; Kang et al., 2016; Komaki et al., 2017; Mendt et al., 2018; Merckx et al., 2020; Nakamura et al., 2015; Rosenberger et al., 2019; Teng et al., 2015; Tracy et al., 2019). Also, different cancer cell lines produce different numbers of EVs (Charoenviriyakul et al., 2017; Hurwitz et al., 2016; Salomon et al., 2014). For cell lines, regular short tandem repeat (STR) profiling can confirm the identity of cells and detect cellular cross-contamination (American Type Culture Collection Standards Development Organization Workgroup ASN-000, 2010; Barallon et al., 2010; Masters et al., 2001; Reid et al., 2004).
Recommendations
• Report the identity of EV-producing cells according to consensus in the research communities that regularly use such cells. For example, ISCT criteria define multipotent MSCs by positive and negative marker expression and differentiation potential (Dominici et al., 2006; Viswanathan et al., 2019; Witwer et al., 2019). Criteria for clinical-grade human induced pluripotent stem cell lines have also been established (Sullivan et al., 2018). Additional information regarding specific cell lines can be found at https://www.atcc.org/, and cell line-specific molecular information is available at https://depmap.org/portal.
• If downstream EV analysis focuses on a particular marker, confirm that the EV marker of interest is also present in/on the cultured cells and thus likely originates from those cells and not from another source.
Outstanding questions
• In some cell monocultures, multiple populations of cells are nevertheless present (Costa et al., 2021; Sato et al., 2016; Wang et al., 2021) (i.e., cells with genetic, epigenetic, morphologic or functional differences), which contributes to the heterogeneity of EVs and makes it more difficult to interpret from which cell types the detected EVs are derived. How can we reduce or control the heterogeneity of source cells?
Recommendations
• Report (if available) known donor characteristics, including but not limited to biological sex, age, chronic diseases, infection status, medication and pregnancy/complications.
• Screen cells for the presence of infectious agents, especially if clinical applications are expected. Note that some viruses can integrate into the genome.
• Report pre-culture processing of the cells.
Initial seeding density
The initial concentration, or the number of cells per volume or surface area in the culture flask or plate, affects cell growth and differentiation, as well as the release of EVs.This dependence was shown for MSCs and cancer cells (Ludwig et al., 2019;Patel et al., 2017).
Recommendations
• Report the initial seeding density of cells, ideally as cells per cm² (for adherent cells) or cells per mL (for suspension cells).
• Reduce variation by using the same seeding concentration and cell splitting intervals across experiments.
• Optimal plating density has been reported for specific cells and culturing conditions (Sotiropoulou et al., 2006); otherwise, optimize seeding density.
Recommendations
• Count cell numbers in a standardized procedure (e.g., with technical replicates) before seeding and after final harvesting at a given passage.
• Report cell density at the time of CCM collection (cells/cm² or cells/mL). For 3D cultures of spheroids or organoids, the reporting parameter might be the diameter of the spheroids or another appropriate measure.
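As a simple illustration of these reporting units, densities can be derived from raw counts as follows; the function names and the T75-flask example (75 cm² growth area) are ours, not from the paper.

```python
def density_per_cm2(cell_count: float, growth_area_cm2: float) -> float:
    """Cell density for adherent cultures, in cells/cm²."""
    return cell_count / growth_area_cm2

def density_per_ml(cell_count: float, medium_volume_ml: float) -> float:
    """Cell density for suspension cultures, in cells/mL."""
    return cell_count / medium_volume_ml

# Example: 3.0e6 cells harvested from a T75 flask (75 cm² growth area)
print(density_per_cm2(3.0e6, 75.0))   # 40,000 cells/cm²
```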
Outstanding questions
• In 2D cultures, the term 'confluency' is often used. However, this remains a largely 'qualitative' parameter, and measurements may differ between individual researchers and laboratories due to different equipment and procedures. More objective measurements may be achieved by preparing and following detailed standard operating procedures that help to improve reproducibility.
Cell sub-culturing, population doubling and passage number
Transferring cultured cells from high density to low density for propagation ('sub-culturing', 'passaging', 'splitting' or 're-plating') can influence cellular properties including the expression of cell surface markers, senescence and genetic stability (Kassem et al., 1997; Meza-Zepeda et al., 2008; Wagner et al., 2008; Yang et al., 2018). Suspension cells are either simply diluted into fresh culture medium or, in some cases, the whole medium is replaced by fresh medium following a low-speed centrifugation step. However, adherent cells regularly must be detached from vessel or carrier surfaces with the help of enzymes and/or mechanical intervention before passaging, and detachment methods may affect the cells and their membrane. For example, human embryonic stem cells experienced genetic instability depending on the sub-culturing method (Bai et al., 2015; Garitaonandia et al., 2015). The passage time is directly related to the population doubling time of a given cell culture. If primary cells are used, senescence is dictated by the total achievable population doubling number (Hayflick limit) (Shay & Wright, 2000) and depends on cell type, donors and culture conditions (Meza-Zepeda et al., 2008). EVs from different passage numbers of the same cells may differ in size, concentration and functions (Beer et al., 2015; Boulestreau et al., 2020; Dorronsoro et al., 2021; Fafián-Labora et al., 2017; Lehmann et al., 2008; Lei et al., 2017; Patel et al., 2017, 2018; Sarkar et al., 2018; Takahashi et al., 2017; Takasugi et al., 2017; Venugopal et al., 2017).
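Population doublings and mean doubling time follow directly from seeded and harvested cell counts via the standard log2 formula; the sketch below is a minimal illustration with hypothetical helper names.

```python
import math

def population_doublings(n_seeded: float, n_harvested: float) -> float:
    """Population doublings (PD) achieved during one passage: PD = log2(Nf/Ni)."""
    return math.log2(n_harvested / n_seeded)

def doubling_time_h(n_seeded: float, n_harvested: float, culture_time_h: float) -> float:
    """Mean population doubling time over the passage, in hours."""
    return culture_time_h / population_doublings(n_seeded, n_harvested)

# Example: 5e5 cells seeded, 4e6 harvested after 96 h -> 3 PD, 32 h doubling time
print(population_doublings(5e5, 4e6), doubling_time_h(5e5, 4e6, 96))
```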
Recommendations
• Report the passage number, starting from the original stock, and consider Master and Working Cell Bank passage numbers. It is recommended to consider the maximum number of passages that primary cells or cancer cells can undergo without changing their genotype and/or phenotype.
• For adherent cells, indicate the method of passaging with details necessary for exact replication, including but not limited to the type of treatment (e.g., trypsin or EDTA with any commercial name, mechanical, or any other methods), the concentration of any reagents, time of treatment (in minutes), temperature of incubation and if/how any enzymatic treatments are stopped.
• Report recovery time, if any, between sub-culturing and the start of EV collection.
• Consider avoiding the application of undefined animal-derived products during sub-culturing processes; the use of recombinant enzymes is preferable.
Outstanding questions
• To what extent does the effect of passage number and senescence differ between culture conditions, including static versus bioreactor, and adherent versus free-floating?
Cell viability
In contrast to healthy cells, dying cells may preferentially release EV subtypes such as apoptotic bodies and necroptotic bodies, although these may also have therapeutic activity (Atkin-Smith et al., 2015; Battistelli & Falcieri, 2020; Baxter et al., 2019; Brock et al., 2019; Caruso & Poon, 2018; Crescitelli et al., 2013; Dieudé et al., 2015; Galluzzi et al., 2018; Gregory & Dransfield, 2018; Kakarla et al., 2020; Lázaro-Ibáñez et al., 2014; Li et al., 2020; Liu et al., 2020; Park et al., 2018; Phan et al., 2020; Poon et al., 2019; Shlomovitz et al., 2021; Théry et al., 2001; Zheng et al., 2021). Since the proportion of live and dying/dead cells affects the proportion of EV subtypes in CCM, it is important to assess the viability of the cells as well as the contribution of dying or dead cells at the time of CCM collection. Cell death is commonly estimated using membrane permeability-based stains or metabolic assays. In bioreactors, metabolic readouts such as glucose and lactate provide insight throughout the cell culture (Mendt et al., 2018). However, only living cells contribute to metabolism, and these methods do not indicate the percentage of apoptotic or dying cells. An early ISEV position paper recommended that the cell death percentage in culture should be less than 5% (Witwer et al., 2013); however, achieving 95% viability is not always possible (for example, in drug response studies), and assessing viability is also not always immediately possible, such as with some types of bioreactors and in multiple- or continuous-harvest 2D systems.
Recommendations
• Wherever possible, report the percentage of viable cells at the time of CCM collection.
• Wherever possible, document cellular morphology by taking representative images when harvesting.
Outstanding questions
• Can we understand the contribution of dying cell-derived EVs to the overall biological and potential therapeutic activity in a given EV preparation?
• Can we develop monitoring systems, for example, of glucose consumption or lactate production, for 2D and 3D cell expansion systems?
EV PRODUCTION MEDIUM
Basal cell culture medium contains nutrients, especially glucose (as the main source of energy), and amino acids, and can be supplemented with serum or other components.The EV production medium (the medium used for EV isolation) may be different from the basal cell culture medium.
Basal medium, glucose and amino acids
Commonly used basal media for mammalian cell cultures, such as Roswell Park Memorial Institute (RPMI) medium, Dulbecco's Modified Eagle Medium (DMEM) and alpha-modified Minimal Essential Medium (MEM), contain inorganic salts (sodium, ferric, magnesium and potassium salts) and micronutrients (vitamins and minerals). Any of these components may affect cell growth and differentiation, and EVs (Arigony et al., 2013; Arodin Selenius et al., 2019; Bhat et al., 2021; Kawakami et al., 2016; Watchrarat et al., 2017; Wu et al., 2009; Zhu et al., 2021). D-glucose is the major carbon source in the cell culture growth medium. Increased release of EVs has been observed in the presence of both elevated and reduced glucose levels, and this effect seems cell type-dependent (Burger et al., 2017; Garcia et al., 2015; Rice et al., 2015; Thom et al., 2017). Several reports have shown a change in the biochemical composition or biological function of EVs when producing cells were cultured in a medium containing high concentrations of glucose (Davidson et al., 2018; De Jong et al., 2012; Huang et al., 2020; Lin et al., 2019; Thom et al., 2017; Wu et al., 2017; Zhu et al., 2019; Zhou et al., 2021).
Amino acids and proteins are required for cell growth. The effects of glutamine and leucine on cell proliferation and EV biogenesis have been documented (Dai et al., 2015; Fan et al., 2020; Kim et al., 2017; Rubin, 2019; Zhao et al., 2021). If stabilized versions of amino acids are used as supplements, the concentration of the amino acid in the medium over time will differ from that maintained with the native version. Moreover, free amino acids present in the CCM may become incorporated into EVs.
Recommendations
• Report the basal medium used, including the catalog number.
• Report the nature and concentration of additives added during cell culture.
Recommendations
• Report the use of antibiotics and antimycotics, including if they are used during pre-conditioning and/or conditioning steps.
Outstanding questions
• What are the effects of antibiotics and antimycotics on the biochemical composition and function of EVs?
• If packaged into or associated with EVs, how do antibiotics and antimycotics affect the therapeutic function of EVs?
A position statement from the working group on cellular therapies of the International Society of Blood Transfusion (ISBT) discussed human platelet lysate (hPL) production, manufacturing and quality management (Schallmoser et al., 2020), and the barriers to the translational use of hPL have been discussed in a joint publication of the Association for the Advancement of Blood and Biotherapies (AABB) and ISCT (Bieback et al., 2019). The use of hPL reduces the problem of animal components, although the use of PL for the manufacturing of therapeutic EVs requires specific precautions and pre-processing steps.
The presence of coagulation factors and fibrinogen in PL can lead to fibrin precipitates during the cell expansion process, which may be incompatible with processes such as filtration.Addition of heparin to the growth medium inhibits fibrin formation.Alternatively, addition of calcium chloride to PL may trigger coagulation and fibrin formation (Staubach et al., 2021), and the fibrin clot that is formed can be removed by centrifugation prior to use.
Recommendations
• Report the percentage, producing company (city, country), and catalog number of complex biological additives including sera.
• Report the source and percentage of PL used at the various steps of cell expansion and CCM production.
• Report the process of PL production, including fibrin depletion methods and pathogen inactivation.
• Report the concentration of heparin present in the growth medium, and consider the potential adverse effects of heparin on separation and downstream analysis of EVs (Atai et al., 2013; Beutler et al., 1990).
EV depletion
Serum and PL contain EVs, DNA fragments, non-EV particles such as protein aggregates and lipoproteins, and micronutrients (Arigony et al., 2013; Lehrich et al., 2021; Urzì et al., 2022), which may complicate the isolation, assessment and interpretation of the biochemical composition and function of CCM EVs. For example, FBS-derived EVs or particles may co-isolate with EVs produced by cultured cells (Lehrich et al., 2021). Various methods have been described to deplete EVs from the serum (Driedonks et al., 2019; Kim et al., 2021; Kornilov et al., 2018; Lehrich et al., 2018; Liao et al., 2017, 2019; Mannerstrom et al., 2019). However, these methods do not remove all or exclusively EVs (Kornilov et al., 2018; Lehrich et al., 2018; Shelke et al., 2014; Tosar et al., 2017), and most studies do not report the degree of depletion. The biochemical composition and function of EVs released by cells cultured in the presence of EV-depleted serum may not be identical to those released in the presence of EV-containing serum. This is possibly due to the fact that serum-derived EVs themselves have multiple effects on cultured cells (Beninson & Fleshner, 2015; Cavallari et al., 2017; Gu et al., 2018; Ochieng et al., 2009; Urzì et al., 2022).
Recommendations
• In the case of serum-free media, report if no replacement supplement was added (e.g., in 'starvation' experiments), or if replacement supplements were added that are nutritionally complete.
• Report the time duration of cell culture without serum or other complex additives.
• Document any observed change in cellular morphology or characteristics.
• Quantify the number of particles in non-conditioned medium using the same process as for conditioned medium at least once.
Outstanding questions
• EV depletion from serum may result in the removal of non-EV components such as lipids and proteins. Does EV-depleted growth medium support cell proliferation to the same extent as a non-EV-depleted serum-containing medium?
• The efficiency of EV depletion methods varies. Monitoring the efficiency of EV depletion is difficult because none of the currently available EV detection methods can detect all EVs. Performing a procedural control, that is, comparing the negative control of the same volume of complete, unconditioned medium processed the same way as CCM, is a functional alternative, but will not fully answer the question of depletion efficiency. How can we best report the efficiency of EV depletion?
Recommendations
• Report the concentration, company (city, country) and catalog number of small molecules, cytokines/chemokines or other RNA/protein/lipid-based cell-modulatory reagents. Also indicate if they are used during pre-conditioning and/or medium-conditioning periods.
Outstanding questions
• Do cell-modifying factors associate with EVs or non-EV particles, and if so, what are their possible effects?
CULTURE CONDITIONS
Changes in biophysical and biochemical cell culture conditions may affect cell growth and the release and biochemical composition of EVs. External culturing conditions include the concentration of gases, 2D or 3D culturing, pH, temperature, physical stimuli, the composition of culture vessels/substrates and in-experiment manipulations of the cell culture medium.
Recommendation
• Report gas concentrations used during cell culture, especially of oxygen, but where applicable also of CO2 and nitrogen, as well as any interruptions in the gas conditions during the culture process.
Recommendations
• Report all culturing conditions, including, but not limited to, surface area, volume, and preconditioning with buffers.
• Report the composition of any support matrix, including, for example, substrate beads and vessel coatings.
• Indicate the procedure used to prepare and apply any support matrix.
Recommendations
• Monitor and report the pH of medium.
• Report if cells were grown using a pH buffering agent, and, if so, its concentration.
Outstanding questions
• How does pH affect the production and composition of EVs?
Recommendation
• Report all applied physical stimuli in detail, including, but not limited to, intensity, voltage, time and temperature.
Replacement of medium
Medium replacement during cell culturing keeps the cells healthy by providing fresh nutrients and eliminating waste products.However, during medium replacement, the secretome of cells including EVs is removed, and cells start to secrete new EVs (Patel et al., 2017;Vis et al., 2020).
Recommendations
• Report the medium replacement protocol, including what percentage of the volume is replaced, at what interval (in hours) the medium is changed or replenished, intermediate washing steps (if any), changes to the medium composition, and continuous-flow feeding (with recirculation or not).
• In some cases, different media formulations are used at various stages of an experiment. If so, indicate when and how, as well as the details of each formulation and why each specific formulation was used.
Outstanding questions
• What are the effects of maintaining cells in the same medium for extended durations and of medium replacement on EV release, biochemical composition, and function?
COLLECTION (HARVEST) OF CCM
Duration and frequency of CCM collection may affect the yield and surface protein expression of EVs (Mendt et al., 2018; Patel et al., 2017). There are three main approaches to CCM harvest: single harvest, multiple harvests and continuous harvest. The duration of conditioning differs between studies and assays from hours to days. During conditioning, cells continuously release and take up EVs; therefore, the time interval during which cells are cultured affects the biochemical composition, function and release of EVs due to changes in the release and possible re-uptake of EVs, along with cell growth and differentiation (Davidson et al., 2018; Flores-Bellver et al., 2021; Kim et al., 2016; Li et al., 2015).
Recommendations
• For a single harvest: report the total time (duration) of conditioning.
• For multiple harvests: report the number of harvests, the duration of conditioning for each, and whether all or some fraction of the medium is collected (i.e., complete or partial medium replacement), specifying the fraction where applicable.
• Report how many (if any) of the multiple harvests are pooled.
• Report on any pre-pooling analyses and inclusion criteria.
• For continuous harvest, report the duration of continuous collection and harvest rate (i.e., volume per unit time).
• Report the duration of conditioning and keep it constant across experimental repeats.
Outstanding questions
• How do continuous harvest and multiple harvests affect EV release, biochemical composition and function?
• Since cells release EVs into and simultaneously uptake EVs from CCM, cultures may reach an equilibrium between EV release and uptake, and this may occur at different times for different cultures and conditions.What are the effects of conditioning time on EV release, biochemical composition and function?
MICROBIAL CONTAMINANTS OF CCM
CCM may be contaminated with viruses, bacteria, fungi and other unwanted bioactive components such as endotoxins. Active bacterial and fungal contaminations are often apparent in culture and in stored EV samples by visual examination or aided by light microscopy, in which case such materials can be discarded. However, many contaminations are not easily detected. The presence of Mycoplasma, for example, may not be readily apparent, and Mycoplasma species can release EVs into CCM (Gaurivaud et al., 2018). Mycoplasma-infected cells release EVs that differ in function from EVs released by non-infected cells (Cronemberger-Andrade et al., 2020; Quah & O'Neill, 2007; Yang et al., 2012). Bacterial- or fungal-derived EVs and other bioactive components such as endotoxins may also be introduced into CCM from raw materials or culture vessels even if actively replicating organisms are not present. Viruses and viral components may also be present in cells and/or culture components and may cause unintended effects (Barone et al., 2020; Merten, 2002). Contaminants such as Mycoplasma and viruses may not be eliminated by 0.22-micron filtration of culture medium, conventionally used to eliminate active bacterial contamination.
Recommendations
• Report if any screening of Mycoplasma or other contaminants in parental cell lines/cultures was done, and the results thereof. Report details of any antimicrobial treatment.
DISCUSSION
In this perspective paper, we aim to raise awareness of the effects of cell culturing parameters (Figure 2a) that may affect EV release (Figure 2b), biochemical composition and/or function (Figure 2c). Cells can ingest components that are present in the media and re-package them into or onto EVs (Figure 2d). Moreover, cell culturing parameters can influence cellular processes other than EV biogenesis that indirectly affect EVs (Figure 2e). We have compiled a checklist of these parameters in Table 1. The aim of the checklist is to collect as much information as possible to improve reproducibility. The information can be summarized in the Methods section, and the checklist can be added as supplementary information to the manuscript. We are aware that some recommendations may be harder to implement than others. For instance, measuring the pH of a culture medium at different stages of the culture is not a routine process; thus, it may not become readily or routinely implemented. At present, multiple task forces and working groups have developed or are developing checklists, and the goal of ISEV is to develop an online repository, either stand-alone or combined and integrated with an already existing platform such as EV-TRACK. Such an online repository will be useful to re-evaluate collected data and can be used to update future recommendations accordingly.
Biochemical composition and potency of EVs
There is hardly any discussion of the mechanisms by which changes in cell culture impact EVs, but these should no longer be overlooked. Cellular processes such as autophagy can affect EV production both directly (Xing et al., 2021; Xu et al., 2018) and indirectly (Moruno et al., 2012; Wang et al., 2021), influencing the number and type of released EVs. Comparing autophagy levels in cultures used for preparing EV batches may explain inter-experiment/batch differences in EVs. Since hypotonic dialysis can be used to load drugs into EVs (Mehryab et al., 2020), allowing therapeutic agents to cross the membrane (Xie et al., 2021), the osmolarity of the culture medium may affect EV cargo. Given the effects of amino acids on mammalian target of rapamycin (mTOR) signaling (Jewell et al., 2013) and the connection between mTOR and EV release (Zou et al., 2019), it is reasonable to assume that amino acid concentrations will impact EV production. Finally, oxygen concentration modulates cellular senescence, which may impact EVs, as mentioned in the cellular sub-culturing section (Seno et al., 2018; Welford & Giaccia, 2011; You et al., 2019).
Of all cell culture supplements, serum and PL are among the most ubiquitous and challenging. They are rich in EVs, their EVs support cell growth and function in ways that cannot easily be achieved with a 'defined' medium, and EV depletion from these supplements is time-consuming and/or inefficient (Lehrich et al., 2021). In addition, they may indirectly influence cells and provide cells with materials that can be re-packaged into newly produced EVs or incorporated as a bio-corona (Figure 2d). Hence, more investigations are needed to critically evaluate the consequences of serum/PL removal or EV depletion on each cell type.
In summary, the effects of cell culture parameters on EVs are complex, and our current understanding of these influences is far from complete. A distinction should be made between culturing cells for basic EV research versus therapeutic applications, since the latter involves additional considerations around safety and regulation. Missing and vague information about manufacturing and processing parameters can lead to misinterpretations and false conclusions and hampers study reproduction. We therefore emphasize the importance of rigorous reporting of all cell culture parameters to enable researchers to better compare experimental in vitro and in vivo data and pre-clinical and clinical outcomes. Only with full reporting can we achieve our common goal of comprehensively understanding the biological functions of EVs. We hope that these suggestions (Table 1) will be a useful starting point for further discussions and that they will promote good reporting practices in CCM-EV research.
Checklist excerpt (EV depletion):
• When serum is diluted prior to removal of EVs, indicate whether, how (fold dilution) and with what (buffer, medium) the serum was diluted.
• For ultracentrifugation-based depletion of EVs, report the centrifugation speed and time, rotor specifications (K value, angle, tube volume) and temperature.
• For tangential flow filtration-based depletion of EVs, report the details including membrane/device manufacturer, material type, pore size, filtration surface area, flow rate and temperature.
• Monitor and report the level of EV depletion by comparing pre- and post-depletion material.
FIGURE 2 Cell culture parameters and their effects on production of extracellular vesicles. (a) All parameters affect cells during conditioning and release of extracellular vesicles (EVs). (b) Parameters influencing release of EVs, and (c) biochemical composition of EVs. (d) Cells might also re-package cell culture medium supplement components into/onto released EVs. (e) Cell culturing parameters may affect processes that indirectly affect EVs.
• Especially if EVs are administered to animals or humans, measure and report the level of endotoxins in prepared EVs accord-
TABLE 1 Cell culturing parameter checklist. | 2023-10-18T15:08:45.419Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "867099e76392d6f8048cf2d84fc27fc39f931d3a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jex2.115",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f500e00e746b00ca446eb23065a9187e636605e0",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
145819913 | pes2o/s2orc | v3-fos-license | Non-microscopic Middle Ear Cholesteatoma Surgery: A Case Report of a Novel Head-Up Approach
Supplemental Digital Content is available in the text
INTRODUCTION
Traditional microscopic ear surgery is performed while watching the images through the microscope eyepieces, namely in a heads-down position. This heads-down surgery increases the risk of musculoskeletal fatigue and injuries of the neck and back because surgeons are bound to the eyepieces during the surgery (1)(2)(3)(4). Heads-up surgery, which is performed while watching a monitor, is considered to be ergonomically better (1)(2)(3)(4)(5), because surgeons can work in a physiologically comfortable position (6). In detail, Yu et al. (1) reported that a biomechanical analysis found that the differences in angles can result in loads on the neck joint that are twice as high with the microscope as with heads-up displays. This means that the angle of the neck joint during heads-up surgery is close to the natural angle of the neck joint (1). Capone et al. (4) reported that musculoskeletal symptoms were observed in 81.5% of surveyed plastic surgeons. Microscope usage of 3 h or more per week was associated with cervical and thoracic pain in those plastic surgeons (4).
Heads-up surgery also has several other advantages. Eckardt et al. (7) previously reported comparison data between 3D heads-up surgery and microscopic surgery: 3D heads-up surgeries were performed utilizing a Leica M822 surgical microscope (Leica Microsystems) and the TrueVision Visualization 3D System (TrueVision Visualization System), whereas microscopic surgeries were performed utilizing the surgical microscope alone. They compared working techniques at 3 times magnification under the microscope with those of 3D heads-up surgery in 20 volunteers who lacked experience with a microscope. They found that significantly fewer mistakes were made with the heads-up method than with the microscope method. The 2 methods were also judged to be similar regarding speed and ease of microscopic manipulations and sharpness of image (7). The other advantages of heads-up surgery include that surgeons can operate while watching digitally modified images, and that the same 3D surgical views can be shared on a monitor with all people in the operating room (7).
Transcanal endoscopic ear surgery (TEES), which is a heads-up surgery, is considered to be a less invasive surgical procedure because we can treat ear diseases with small skin incisions and less dissection of the bone and mucosal tissue. Indeed, during cholesteatoma surgery, if the cholesteatoma is confined to the tympanic cavity, we can complete the surgery utilizing only TEES. When the middle ear disease involves the mastoid cavity, there are currently 2 surgical options to treat the diseased tissue in the mastoid cavity: transcanal mastoidectomy via the external auditory canal and postauricular transcortical mastoidectomy (PTM). We can achieve transcanal mastoidectomy with TEES, but there are several drawbacks. One is the requirement to remove a large portion of the posterior bony wall to reach the mastoid cavity. Subsequent reconstruction of the larger bony defect of the posterior wall is more difficult and time-consuming. Transcanal mastoidectomy via the external auditory canal is, thus, more invasive than PTM. In contrast, if we utilize PTM, which has been performed under surgical microscopes, we can preserve the bony external canal. When considering endoscope-based ear surgery, there are several significant drawbacks with microscopic PTM. One is that we have to change the way in which surgeons interact with the surgical fields during the transition between endoscope and surgical microscope; endoscopic surgery is heads-up surgery and microscopic surgery is heads-down surgery. Additionally, the surgical microscope occupies a large space and is heavy. Transition between endoscope and surgical microscope, and setting up, are both time-consuming. Considering the above, an ideal surgical modality for treating the mastoid cavity in endoscope-based ear surgery is heads-up PTM.
Recently, it has been reported that a new surgical visualization system called the 3D exoscope can be a viable alternative to surgical microscopes in both ophthalmology (8,9) and neurosurgery (6,10). The 3D exoscope is meant to be exterior to the body surface like a microscope (vs. an endoscope, which is interior to the body cavity) and has dual image sensors for 3D visualization. The images obtained from the 3D exoscope are visualized on a monitor, and the surgeon observes 3D stereoscopic images wearing 3D glasses. A 3D exoscope enables us to perform surgery in a heads-up position. Additionally, it gives us a surgical environment similar to that of a microscope: both exoscope and microscope are exterior to the body surface, and the surgical images of both are 3D. Consequently, if we utilize a 3D exoscope for PTM in middle-ear diseases with mastoid involvement, we may be able to complete the surgical treatment of those diseases in a heads-up position.
Herein, we report on our experience of PTM utilizing the surgical 3D exoscope. To the best of our knowledge, this is the first report describing ear surgery utilizing a 3D exoscope.
MATERIALS AND METHODS
This is a case review of the first 2 sequential patients on whom R.M. performed heads-up surgery utilizing both 4 mm diameter, 18 cm length, 0° and 30° rigid endoscopes (HOPKINS® II Telescopes, KARL STORZ, Tuttlingen, Germany) and a surgical 3D exoscope system (VITOM® 3D system, KARL STORZ) (Figs. 1 and 2). The surgical 3D exoscope system consists of a 3D exoscope (VITOM® 3D, KARL STORZ), a holding arm (VERSACRANE™, KARL STORZ), a separate controller (IMAGE1 PILOT, KARL STORZ), a tower containing a light source (POWER LED 300, KARL STORZ), a camera controller (IMAGE1 S CONNECT, KARL STORZ), a link module (IMAGE1 S D3, KARL STORZ), and a 32-in full high definition (full HD) passive 3D monitor (KARL STORZ). In this system, surgical images are obtained via the two 4K (3840 × 2160 pixels) complementary metal-oxide-semiconductor image sensors of the camera head, which is located outside the body surface and over the surgical field. These images are displayed on the full HD (1920 × 1080 pixels) 3D monitor screen. 3D exoscope surgery is performed while watching the monitor image and wearing 3D polarization glasses. Both cases were pars flaccida-type middle-ear cholesteatoma involving the mastoid cavity (Table 1).

FIG. 1. Surgical 3D exoscope system. The surgical 3D exoscope system consists of a 3D exoscope, a holding arm, a separate controller, and a tower containing a light source, a camera controller, and a 32-in full HD passive 3D monitor. The image surrounded by the line frame shows the holding arm with a camera head; an inserted image shows a 3D exoscope.
In patient 1, we removed cholesteatoma tissue via TEES. After the endoaural skin incision, the tympanomeatal flap was elevated, and the retracted tympanic membrane was cut at the entrance of the retracted cholesteatoma epithelium to the epitympanum. Then, the scutum bone was dissected utilizing curettes, chisels, and an otologic drill (Visao® High-Speed Otologic Drill, Medtronic Xomed Inc., Jacksonville, USA). This was done to visualize the outer border of the cholesteatoma of the protympanum and the epitympanum, but not of the aditus ad antrum, the antrum, or the mastoid cavity. Then, the cholesteatoma epithelium which had invaded the protympanum and the epitympanum was removed via TEES. After the TEES removal, we performed a smaller PTM under the 3D exoscope, during which we intentionally removed less of the cortical mastoid bone than is usual with microscopic PTM (Figs. 2C and 3A and B). During the smaller PTM, we exposed the antrum, but not the aditus ad antrum, the epitympanum, or the major part of the mastoid air cells. The only mastoid air cells that were removed were those lateral to the antrum. The debris of the cholesteatoma in the mastoid cavity was debulked under the surgical 3D exoscope. Then, under the endoscope, via the opening of the PTM, the remaining cholesteatoma tissue in the mastoid cavity was cut around the anterior part of the antrum and removed. Subsequently, the remaining cholesteatoma tissue between the epitympanum and the antrum was pushed anteriorly to the epitympanum under the 30° endoscope. The cholesteatoma tissue was then removed and the tensor tympani fold was opened via TEES. In patient 2, the cholesteatoma had destroyed the skull base bone of the middle fossa and the bony outer wall of the lateral semicircular canal (LSC) (Fig. 3A). In this patient, the cholesteatoma epithelium which had invaded the tympanum, protympanum, and epitympanum was initially removed via TEES in the same way. Then, PTM was similarly performed via a retroauricular skin incision, and the cholesteatoma debris was debulked utilizing the surgical 3D exoscope system (Fig. 3B). The cholesteatoma tissue in the mastoid cavity, excluding the area of the LSC, was then removed under the endoscope via the mastoidectomy. The cholesteatoma tissue which adhered to the endosteum of the LSC was removed endoscopically utilizing Yamauchi et al.'s (10) underwater technique; water was delivered via a lens-cleaning system (Endoscrub® lens-cleaning system, Medtronic Xomed Inc.) (Fig. 4B). The remaining cholesteatoma tissue between the antrum and the epitympanum was pushed forward to the epitympanum under the 30° endoscope and then removed via TEES. The tensor tympani fold was also opened via TEES.
RESULTS
All cholesteatoma tissues in both patients were successfully removed either through the external auditory canal or by a smaller PTM performed utilizing the surgical 3D exoscope system. The transition between the endoscope and the surgical 3D exoscope system was very quick; it took approximately 6 s to transition from endoscopic surgery to surgical 3D exoscope surgery (see Video, Supplemental Digital Content 1, http://links.lww.com/MAO/A786, which demonstrates this quick transition). There were no harmful side effects, including no postoperative deterioration of bone conduction levels. We could not find any residual cholesteatoma utilizing TEES during the second-stage surgery, which was performed 9 months postoperatively.
DISCUSSION
We showed the feasibility of PTM via the surgical 3D exoscope system. We also showed that the combination of TEES and PTM utilizing the surgical 3D exoscope gives us an optimal surgical environment. The most significant advantage of this combination is that the transition between the surgical 3D exoscope system and the endoscope was seamless; it was quick and smooth, unlike the transition between microscope and endoscope. Two reasons for this seamless transition follow. First, the surgical 3D exoscope camera and its holding arm are lighter and more compact than surgical microscopes. Second, surgeries utilizing both the surgical 3D exoscope system and endoscopes are performed in a heads-up position, allowing us to share the same monitor during the whole surgery. In addition, heads-up surgery is considered to be ergonomically better (1)(2)(3)(4)(5); thus, the combination of TEES and PTM utilizing the surgical 3D exoscope also gives us a comfortable surgical environment.
Through our experience with the surgical 3D exoscope system, we found some drawbacks. One is that refocusing the surgical 3D exoscope system, which is performed using a separate controller (IMAGE1 PILOT, KARL STORZ), is uncomfortable; an autofocus system or refocusing using a foot controller might help. Because the surgical 3D exoscope system has a digital zooming system, higher magnification caused a deterioration of the surgical images, although the image quality of the exoscope was equal to the microscopic view at lower magnification. This needs to be ameliorated by incorporating some type of optical zooming system. Several other surgical 3D exoscope systems are currently commercially available. Some of them have an optical zooming system; however, those exoscopes are bigger and more expensive, and they do not have an autofocus system. We expect that a compact surgical 3D exoscope system for endoscope-based ear surgery, with higher image quality at higher magnification and an autofocus system, will be developed in the near future.
The significant difference between exoscopic surgical manipulation and microscopic manipulation is the relationship between visual line and surgical site: in microscopic manipulation, the visual line is directed at the surgical site, whereas in exoscopic manipulation, the visual line is directed at the monitor in front of the surgeon, not at the surgical site. The relationship between visual line and surgical site in exoscopic manipulation is the same as in normal endoscopic surgery. However, exoscopic surgical manipulations are performed using both hands, whereas endoscopic surgical manipulation is performed using a single hand. Because of this difference between single-handed and two-handed manipulation, we felt uncomfortable when we performed exoscopic surgery for the first time, but we easily became accustomed to it. Through our experience of exoscopic PTM in 2 cases, we have no doubt that we can perform surgical procedures that manipulate the lateral part of the temporal bone, such as PTM, without any difficulty. Furthermore, we think we can also utilize it for other surgical procedures such as complete mastoidectomy, canal wall down tympanoplasty, and preparation for ossicular prostheses. Through future clarification of the feasibility of performing surgical procedures at deeper sites using the exoscope, we might be able to utilize it for facial nerve decompression and cochlear implant surgery. 3D surgical exoscopes with higher quality images at higher magnification may be a realistic alternative to surgical microscopes in the near future. | 2019-05-07T13:03:08.846Z | 2019-05-03T00:00:00.000 | {
"year": 2019,
"sha1": "96c801291a3322493cb805dbe538c954b3c93e18",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/otology-neurotology/Fulltext/2019/07000/Non_microscopic_Middle_Ear_Cholesteatoma_Surgery_.17.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "96c801291a3322493cb805dbe538c954b3c93e18",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210753929 | pes2o/s2orc | v3-fos-license | Calculation Method of Cable Ampacity Based on Field-circuit Combined Mode
Electricity is usually transmitted through power cables in densely populated areas. It is important to predict the ampacity and temperature field accurately to ensure safe and economical operation of cables. In this paper, a field-circuit combination algorithm is proposed. The equivalent thermal circuit method and the finite difference method are effectively combined to calculate the temperature of the cable and the soil area, respectively. This method is more accurate than the equivalent thermal circuit method and faster than traditional numerical methods. Comparison with the results of the equivalent thermal circuit method in homogeneous soil and the coordinate combination method in inhomogeneous soil shows that the new method can meet the needs of engineering.
Introduction
Accurately calculating cable ampacity is of great significance for improving the economics of cable applications, given the wide use of cables in power grid construction. The equivalent thermal circuit method for calculating cable ampacity follows the IEC-60287 standard recommended by the International Electrotechnical Commission [1]. When the actual laying environment differs from the standard conditions, a correction factor is usually applied. Selecting the correction factor is a very complicated problem due to the lack of a reliable correction basis. Numerical calculation methods have the advantages of low cost and the ability to simulate complex working conditions, which gives them high practical value for improving the calculation accuracy of ampacity. The numerical methods commonly used to calculate the temperature field include the finite element method [2][3][4], boundary element method [5], finite difference method [6], and simulated thermal charge method [7]. The equivalent thermal circuit method provided by the IEC standard is rapid and highly accurate when dealing with cable ampacity in uniform soil [8]; however, its calculation error in non-uniform soil environments is large. The calculation accuracy of numerical methods is not affected by the complexity of the soil environment, but they are slow. Combining the advantages of the two, this paper uses the finite difference method to calculate the soil temperature field in the cable laying area and combines it with the equivalent thermal circuit method to obtain the core temperature quickly and accurately.
Equivalent Thermal Circuit Method for Calculating Cable Core Temperature
The core, the insulation layer, the metal sheath, and other layers generate losses in a running cable, and the resulting heat forms a heat flow field. Each layer can be represented by an equivalent thermal resistance as heat flows outward through the layers of the cable. Take Figure 1 as an example: θc and θsh are the temperatures (℃) of the inner and outer surfaces of the insulating layer, respectively; Dc and Dsh are the inner and outer diameters (m) of the cable insulation layer, respectively; Wc is the core loss (W); ρT is the thermal resistivity coefficient of the insulating layer (K•m/W); and R1 is the thermal resistance of the insulation layer (K•m/W), given by the cylindrical-layer formula R1 = (ρT/2π)·ln(Dsh/Dc). Assume that the cable consists of a core, an insulating layer, a water blocking layer, a metal sheath, and an outer sheath. Figure 2 shows the overall equivalent thermal path of the cable. Win and σ1Wc are the insulation layer loss and the metal sheath loss (W), respectively. R2, R3, and R4 respectively represent the thermal resistance of the water blocking layer, the outer sheath, and the surrounding medium (the thermal resistance of the metal portions is negligible). If the total number of cables is M, the burial depth of a cable is L, and the thermal resistivity coefficient of the soil adjacent to the cable group is ρT4, then R4 of the cable can be computed from the formula given in [8], where θa is the temperature of the medium surrounding the cable. Using the equivalent thermal circuit method to calculate the core temperature requires that the soil in the laying area have a single thermal conductivity. If the thermal conductivity of the soil in the laying area is not uniform, the core temperature cannot be accurately determined; in this case, a numerical method is needed.
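As a minimal numerical illustration of this series thermal circuit, the Python sketch below computes a cylindrical-layer thermal resistance and the resulting core temperature. All geometry, loss, and resistance values are hypothetical placeholders, and lumping the insulation and sheath losses into the core loss is a simplifying assumption of the sketch; the paper's circuit of Figure 2 injects those losses at their own nodes.

```python
import math

def layer_thermal_resistance(rho_t, d_inner, d_outer):
    """Thermal resistance per unit length of a cylindrical layer:
    R = (rho_t / (2*pi)) * ln(d_outer / d_inner), in K*m/W."""
    return rho_t / (2.0 * math.pi) * math.log(d_outer / d_inner)

def core_temperature(theta_a, w_c, resistances):
    """Series thermal circuit: core temperature equals the surrounding-medium
    temperature theta_a plus the heat flow w_c (W/m) times the summed
    layer resistances (K*m/W)."""
    return theta_a + w_c * sum(resistances)

# Placeholder example: insulation resistance R1 plus assumed R2, R3, R4.
R1 = layer_thermal_resistance(rho_t=3.5, d_inner=0.030, d_outer=0.060)
theta_c = core_temperature(theta_a=25.0, w_c=30.0,
                           resistances=[R1, 0.05, 0.10, 1.2])
print(f"R1 = {R1:.3f} K*m/W, core temperature = {theta_c:.1f} C")
```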
The New Method: Field-circuit Combination Algorithm
The temperature field in the cable laying area is calculated by the finite difference method [6]. The calculation area containing the cable is divided into a rectangular grid, with the soil and the cable layers assigned to different cells; the thermal conductivity of the corresponding material is assigned to each cell, and the cell where the core is located acts as the heat source. To improve the calculation accuracy while reducing the computational load, a refined grid is used in the cable area, and the grid becomes sparser farther from the cable. The core temperature obtained by this method has a large error, but the error in the soil temperature around the cable is small. At the periphery of the cable, several concentric soil layers of equal thickness are taken, with total thickness d, as shown in Figure 4; the thermal resistance of this soil layer takes the same cylindrical-layer form as R1. The heat conduction differential equation, together with the initial and boundary conditions, accurately describes the transient temperature field of a particular cable. The heat conduction differential equation of the cable transient temperature field in the Cartesian coordinate system of Figure 3 is [9]

ρC_P ∂T/∂t = ∂/∂x(K ∂T/∂x) + ∂/∂y(K ∂T/∂y) + Q,

where ρC_P is the product of the medium density and the specific heat at a given point in the field, T is the temperature, K is the thermal conductivity of the medium, and Q is the heat source. This equation is discretized on the rectangular grid of Figure 3. Based on the temperature field computed on the rectangular grid, quadratic interpolation is used to obtain the temperature of the outermost soil layer, from which the average temperature of the soil layer is obtained. Substituting this result into the equivalent thermal circuit, the core temperature can be determined quickly and accurately. This method not only solves the problem of non-uniform soil, which the equivalent thermal circuit method cannot handle, but also significantly improves the calculation efficiency relative to a pure numerical method. Since the insulation loss and the metal sheath loss are much smaller than the core loss, the former two can be lumped into the core loss. In addition, to improve the calculation speed further while maintaining accuracy, the layer structure other than the cable core can be processed by the harmonic averaging method [10]: the layers are replaced by a single equivalent medium whose thermal conductivity λT is the harmonic mean of the layer conductivities weighted by the layer thicknesses, λT = (Σᵢ dᵢ)/(Σᵢ dᵢ/λᵢ).
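A minimal sketch of the finite-difference solve described above is given below, assuming a uniform unit grid spacing (so the source term absorbs the cell area) and fixed Dirichlet boundary temperatures; the paper instead uses a non-uniform grid refined near the cable. The harmonic-mean face conductivities mirror the harmonic averaging mentioned at the end of this section.

```python
import numpy as np

def solve_temperature(k, q, t_boundary, max_iter=5000, tol=1e-5):
    """Gauss-Seidel iteration for the steady-state equation
    div(k grad T) + q = 0 on a rectangular grid with fixed boundaries.
    k: per-cell thermal conductivity (W/m*K); q: per-cell heat source."""
    T = np.full(k.shape, t_boundary, dtype=float)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, k.shape[0] - 1):
            for j in range(1, k.shape[1] - 1):
                # harmonic-mean conductivity at each of the four cell faces
                kn = 2 * k[i, j] * k[i - 1, j] / (k[i, j] + k[i - 1, j])
                ks = 2 * k[i, j] * k[i + 1, j] / (k[i, j] + k[i + 1, j])
                kw = 2 * k[i, j] * k[i, j - 1] / (k[i, j] + k[i, j - 1])
                ke = 2 * k[i, j] * k[i, j + 1] / (k[i, j] + k[i, j + 1])
                t_new = (kn * T[i - 1, j] + ks * T[i + 1, j]
                         + kw * T[i, j - 1] + ke * T[i, j + 1]
                         + q[i, j]) / (kn + ks + kw + ke)
                max_change = max(max_change, abs(t_new - T[i, j]))
                T[i, j] = t_new
        if max_change < tol:
            break
    return T

k = np.full((41, 41), 0.83)   # uniform soil conductivity, W/(m*K)
q = np.zeros_like(k)
q[20, 20] = 50.0              # placeholder cable loss injected at one cell
T = solve_temperature(k, q, t_boundary=25.0)
print(f"peak soil temperature: {T.max():.1f} C")
```

In the field-circuit combination, the coarse-grid core temperature is then discarded in favor of the thermal-circuit value, which is what avoids the large core-temperature error noted above.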
Ampacity Calculation and Method Verification
The load current of each cable is the same. The heat dissipation of each cable is affected by the embedding mode and by the other cables, so the ampacity should be based on the cable with the worst heat dissipation conditions. To effectively reduce the number of iterations, the following steps can be used to calculate the ampacity (a code sketch of this iteration appears at the end of this section): (1) set tm and Δt to the maximum allowable steady-state temperature of the cable core and the allowable error, respectively, then obtain the initial value of the load current I0 by the image method [7]; (2) input I0 into the program to find the steady-state core temperature t; (3) iterate until tm − Δt < t < tm, at which point I0 is the ampacity of the cable group. To verify the correctness of the method, the ampacity of 66 kV direct-buried cables in single-circuit and double-circuit configurations was calculated, and the results were compared with the IEC standard and the coordinate combination method [6]. The calculation object is an XLPE-insulated cable. The nominal cross section of the copper core is 630 mm². The metal sheath is grounded by three-section cross-connection. The soil temperature in the laying area is 25℃, the soil thermal conductivity is 0.83 W/m•K, and the thermal conductivity of the back-filled sand is 0.3 W/m•K. The laying method is shown in Figure 5. Table 1 shows the calculation results of the three methods, in which I1 and I2 are the ampacities of the single and double circuits, respectively, and t1 and t2 are the corresponding calculation times. As shown in Table 1, when the soil of the laying area is uniform, the results of the field-circuit combination method and the coordinate combination method agree well with the IEC standard; when sand is backfilled, the field-circuit combination method and the coordinate combination method agree well with each other. It is worth noting that the calculation time of the field-circuit combination method is significantly lower than that of the coordinate combination method.
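The following sketch illustrates the three-step ampacity iteration. The steady-state temperature model inside it is a deliberately simplified stand-in (a single lumped thermal resistance with I²R conductor losses, placeholder values throughout); in the actual method, the temperature would come from the field-circuit solver above.

```python
def core_temp_for_current(current_a, r_ac=4.0e-5, r_thermal=1.5,
                          theta_ambient=25.0):
    """Stand-in steady-state model: core temperature (C) for a load
    current (A), with r_ac the AC resistance per metre (ohm/m) and
    r_thermal a lumped total thermal resistance (K*m/W)."""
    loss_per_metre = current_a ** 2 * r_ac          # W/m
    return theta_ambient + loss_per_metre * r_thermal

def find_ampacity(t_max=90.0, dt=0.5, i_init=800.0, theta_ambient=25.0):
    """Steps (1)-(3): start from an initial current guess, evaluate the
    core temperature, and rescale until t_max - dt < t < t_max."""
    current = i_init
    for _ in range(100):
        t = core_temp_for_current(current, theta_ambient=theta_ambient)
        if t_max - dt < t < t_max:
            return current
        # temperature rise scales ~ I^2, so rescale by the square root
        current *= ((t_max - theta_ambient) / (t - theta_ambient)) ** 0.5
    return current

print(f"ampacity ~= {find_ampacity():.0f} A")
```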
Conclusion
Compared with the IEC standard method and the coordinate combination method, the field-circuit combination method proposed in this paper has the advantages of high numerical calculation accuracy and high analytical calculation efficiency. It can solve the ampacity of a cable group under complex soil conditions, and its calculation efficiency is obviously higher than that of a pure numerical calculation. The field-circuit combination method for calculating cable ampacity in complex soil has important practical significance and high engineering application value. | 2019-10-31T09:09:57.536Z | 2019-10-24T00:00:00.000 | {
"year": 2019,
"sha1": "0a3a857c5ffcaadb0702a008ebfdf685f3435d87",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/611/1/012079",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "915ba9c8ddcc57803811623d5f730d8ea1635c3e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
252600202 | pes2o/s2orc | v3-fos-license | YM155 and chrysin cooperatively suppress survivin expression in SMARCB1/INI1-deficient tumor cells
SMARCB1/INI1 deficiency is seen in several malignant tumors including malignant rhabdoid tumor (MRT), a highly aggressive pediatric malignancy. Loss of SMARCB1/INI1 function alters diverse oncogenic cellular signals, making it difficult to discover effective targeting therapy. By utilizing an in vitro drug screening system, effective therapeutic agents against SMARCB1/INI1-deficient tumors were explored in this study. In the in vitro drug sensitivity test, 80 agents with various actions were screened for their cytotoxicity in a panel of five SMARCB1/INI1-deficient tumor cell lines. The combination effect was screened based on the Bliss independent model. The growth-inhibitory effect was determined in both the conventional two-dimensional culture and the collagen-embedded three-dimensional culture system. Survivin expression after agent exposure was determined by Western blot analysis. All five cell lines were found to be sensitive to YM155, a selective survivin inhibitor. In the drug combination screening, YM155 showed additive to synergistic effects with various agents including chrysin. Chrysin enhanced YM155-induced apoptosis, but not mitochondrial depolarization upon exposure of SMARCB1/INI1-deficient tumor cells to the two agents for 6 h. YM155 and chrysin synergistically suppressed survivin expression, especially in TTN45 cells in which such suppression was observed as early as 6 h after exposure to the two agents. Survivin is suggested to be a therapeutic target in MRT and other SMARCB1/INI1-deficient tumors. Chrysin, a flavone that is widely distributed in plants, cooperatively suppressed survivin expression and enhanced the cytotoxicity of YM155.
Introduction
Malignant rhabdoid tumor (MRT) is a rare pediatric tumor affecting various anatomic sites such as the kidney (rhabdoid tumor of the kidney), brain (atypical teratoid/rhabdoid tumor), or soft tissues. MRT is a highly aggressive tumor. Long-term survival can be expected after complete surgical resection; however, the prognosis of patients with unresectable tumors or metastatic diseases is extremely poor with an expected long-term survival rate of less than 10% among patients with soft tissue MRT who were treated nonoperatively [1,2]. MRT may initially respond to chemotherapy to some degree; however, it eventually acquires resistance [3].
Loss of expression of SMARCB1/INI1 protein is a characteristic feature of MRT, and pathological determination of INI1 expression in the tumor is useful for its diagnosis [4]. Loss of SMARCB1/INI1 protein expression is not specific for MRT; it is seen in other tumors as well, including some cases of epithelioid sarcoma [5]. SMARCB1/INI1 has been shown to act as a tumor suppressor gene, and loss of function of both alleles gives rise to SMARCB1/INI1-deficient tumors [6,7]. Loss of SMARCB1/INI1 function leads to dysregulation of several cellular processes associated with oncogenesis, such as the CDK4/CDK6/cyclin D1 axis, the Sonic Hedgehog pathway, and the WNT/β-catenin pathway. Alterations in multiple cell signaling pathways resulting from the loss of SMARCB1/INI1 function hamper the development of a specific signal inhibition therapy for MRT and other SMARCB1/INI1-deficient tumors.
Survivin is a member of the inhibitor of apoptosis (IAP) family of proteins and is highly expressed in a broad range of solid tumors, including childhood cancers such as ependymoma, malignant peripheral nerve sheath tumor, and hepatoblastoma [8][9][10]. Survivin has been identified in different cellular subfractions and contributes to various cellular processes including proliferation and maturation. Mitochondrial survivin plays a role in protecting cells from apoptosis [11]. Because survivin is not expressed in differentiated normal tissue, it is a candidate therapeutic target in cancer. Survivin expression has been reported in rhabdoid tumor of the kidney [12]; however, it has not been evaluated in other SMARCB1/INI1-deficient tumors.
In this study, we examined the in vitro growth-inhibitory effects of 80 agents, including YM155, a survivin inhibitor, in five cell lines derived from MRT or other SMARCB1/INI1-deficient tumors. We also evaluated the combined effects of YM155 and other agents to discover a new therapeutic approach against SMARCB1/INI1-deficient tumors.
Materials and methods
Cell lines and cell culture TTN45, RTK (GIF), and RTK (J)-4N are cell lines derived from tumors that were clinically diagnosed as MRT [13]. YCUS-5, derived from epithelioid sarcoma, has been reported previously [14]. KCS1 is a cell line that we newly established from the recurrent tumor in the mediastinum of a 4-year-old girl. The tumor in this patient was initially considered to be pleuropulmonary blastoma. However, the typical pathological features of pleuropulmonary blastoma were not seen in the recurrent tumor, and it was pathologically diagnosed as an INI1-deficient tumor without rhabdoid features. We confirmed loss of SMARCB1 expression by RNA-seq in these five cell lines (data not shown). Cells were maintained in RPMI1640 medium (FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan) supplemented with 10% fetal bovine serum (FBS) in an atmosphere with 5% CO₂ at 37 °C.
Reagents
YM155 was purchased from Selleck Chemicals (Houston, TX, USA) and dissolved in sterile water (to a final stock solution concentration of 200 μM). Chrysin dimethylether (chrysin), which was purchased from EXTRASYNTHESE S.A. (Lyon, France), was dissolved in DMSO (to a final stock solution concentration of 10 mg/ml). Stock solutions were stored at − 80 °C.
Drug sensitivity screening
Drug sensitivity screening, which we previously performed in leukemic cells, was performed to evaluate the drug sensitivity of SMARCB1/INI1-deficient tumor cell lines [15]. Briefly, 80 agents with several different classes of action were dissolved in DMSO or deionized water according to the manufacturer's instructions and diluted in FBS-free RPMI1640. Ten μl of agent-containing medium and of its 5⁻¹, 5⁻², and 5⁻³ serial dilutions were loaded in a 384-well plate (drug-store plate). In the control wells, RPMI1640 without agent was added. The list of agents and their highest concentrations used in the assay is shown in Supplemental Table 1.
Cells were suspended in RPMI1640 medium with 20% FBS at a concentration of 1 × 10⁵ live cells/ml, and 10 μl of the cell suspension was injected into each well of a 384-well plate (cell-culture plate). After 1-day incubation in a humidified environment at 37 °C under 5% CO₂, the agent-containing medium was transferred from the agent-store plate to the cell-culture plate at 10 μl/well. After incubation in a humidified environment at 37 °C under 5% CO₂ for 3 days, the cell viability in each well was measured using the CellTiter-Glo luminescent assay (Promega, Madison, WI, USA).
The effect of the drug or agent was expressed as the drug effect score (DES) proposed by Szulkin et al., as previously reported [15,16]. The DES was calculated from the different degrees of sensitivity at the different concentrations, by weighted counting of the survival percentage and the drug concentration as follows: DES = {(100 − % survival at 5⁻³ dilution)·ln(125) + (100 − % survival at 5⁻² dilution)·ln(25) + (100 − % survival at 5⁻¹ dilution)·ln(5) + (100 − % survival at no dilution)} / {ln(125) + ln(25) + ln(5) + 1}. The reference DES is the mean value obtained when the assay was performed using peripheral blood mononuclear cells from healthy volunteers. A cell line was considered to be sensitive to a drug or agent when its DES was larger than the corresponding reference DES.
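The DES formula above translates directly into code. The sketch below is a plain transcription of that formula; the survival percentages in the example are invented for illustration.

```python
import math

def drug_effect_score(survival_pct):
    """survival_pct: % survival at [5^-3, 5^-2, 5^-1, no] dilution.
    Weights ln(125), ln(25), ln(5), and 1 follow the DES definition."""
    weights = [math.log(125), math.log(25), math.log(5), 1.0]
    weighted = sum(w * (100.0 - s) for w, s in zip(weights, survival_pct))
    return weighted / sum(weights)

# Example with made-up survival percentages at the four dilutions:
print(f"DES = {drug_effect_score([90, 70, 35, 10]):.1f}")
```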
Drug combination assay
To screen the effects of the combination of YM155 with other agents, cell lines were plated in 2 wells of 384-well plates as in the drug sensitivity screening. In one of the wells, YM155 at 10 nM (final concentration) and the partner agent were added; in the other well, only the partner agent was added. Cell survival in each well was measured as described above. For the validation study, cell survival after exposure to YM155 at eight serially diluted concentrations, with or without chrysin at 4 μg/ml, was measured in the same way. The effect of the combination of the two agents was expressed as the combination index (CI) based on the Bliss independence model, one of the most popular models for assessing the combined effects of drugs [17]. The Bliss independence model has the limitation that it does not take into account heterogeneity of drug actions; however, its methodological simplicity makes it suitable for screening combined effects. The CI was calculated as CI = (E_A + E_b − E_A·E_b)/E_(A+b), where E_A or E_b is the effect (1 − survival rate) of agent A or B at concentration a or b, respectively, and E_(A+b) is the effect of the combination of agent A at concentration a and agent B at concentration b. When the CI was equal to, less than, or greater than 1.0, the combination was judged to be additive (CI = 1.0), synergistic (CI < 1.0), or antagonistic (CI > 1.0), respectively.
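The Bliss-independence CI defined above can be computed directly, as in the short sketch below; the effect values in the example are invented for illustration.

```python
def bliss_ci(e_a, e_b, e_ab):
    """e_a, e_b: single-agent effects (1 - survival rate);
    e_ab: observed effect of the combination."""
    e_expected = e_a + e_b - e_a * e_b   # Bliss-expected combined effect
    return e_expected / e_ab             # <1 synergy, 1 additive, >1 antagonism

# Example: agent A kills 40%, agent B kills 20%, the combination kills 60%.
print(f"CI = {bliss_ci(0.40, 0.20, 0.60):.2f}")   # ~0.87 -> synergistic
```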
Collagen-gel-embedded three-dimensional culture of cell lines
Recent studies suggest that the three-dimensional (3D) culture system may provide a better tool to mimic physiological drug function [18,19]. The growth-inhibitory effects of YM155 and chrysin were evaluated in the collagen-gel-embedded 3D culture, using a collagen gel culture kit (Nitta Gelatin, Osaka, Japan). The collagen gel was constituted from a mixture of Cellmatrix I-A, ten-times-concentrated Ham's F-12 medium, and reconstruction buffer at a ratio of 8:1:1. Cells were suspended in the collagen gel at a density of 2 × 10⁵ cells/ml, and 50 μl of the gel was injected into each well of a Falcon 96-well plate (Corning, Corning, NY, USA). After semi-solidification of the gel by incubation at 37 °C for 20 min, 40 μl of RPMI1640 medium with 20% FBS was added to each well. After 1-day incubation, 10 μl of medium containing serially diluted YM155 and/or chrysin at 4 μg/ml was added to each well. After incubation for 72 h, cell survival was evaluated by the CellTiter-Glo luminescent assay.
Apoptotic change of nuclei after agent exposure
Cells were seeded onto a 24-well glass-bottom plate (IWAKI, Shizuoka, Japan) at a density of 5 × 10⁴ cells/500 μl in control medium (0.1% DMSO), medium containing YM155 alone at 10 nM, medium containing chrysin alone at 4 μg/ml, or medium containing a combination of YM155 at 10 nM and chrysin at 4 μg/ml. These concentrations were chosen as the concentrations that caused 50% growth inhibition in the drug sensitivity screening test. After incubation for 48 h, cellular nuclei were stained with NucBlue Live ReadyProbes Reagent (ThermoFisher Scientific, Waltham, MA) according to the manufacturer's instructions. Cells were observed for apoptotic changes of the nuclei under fluorescence microscopy (KEYENCE, Osaka, Japan).
JC-1 assay
To evaluate mitochondrial damage after agent exposure, the MitoPT JC-1 assay (ThermoFisher Scientific, Waltham, MA) was performed. Cells were seeded onto a 24-well glass-bottom plate at a density of 5 × 10⁴ cells/500 μl. After incubation for 6 h in control medium (0.1% DMSO), medium containing 2-[2-(3-chlorophenyl)hydrazinylidene]propanedinitrile (CCCP; an inducer of mitochondrial depolarization) at 50 μM, medium containing YM155 alone at 10 nM, medium containing chrysin alone at 4 μg/ml, or medium containing the combination of YM155 and chrysin at the indicated concentrations, cells were stained with MitoPT JC-1 at 37 °C for 30 min. Because mitochondrial depolarization is indicated by a decrease in red fluorescence and an increase in green fluorescence in the JC-1 assay, the numbers of intact (JC-1 red) and damaged (JC-1 green) cells were counted under fluorescence microscopy. The JC-1 analysis was performed in triplicate.
Western blotting
Cells were seeded onto a 6-well plate at a density of 1 × 10⁵ cells/ml. Then, YM155 at 10 nM, chrysin at 4 μg/ml, or their combination was added to each well. DMSO at 0.1% without agents was added to the control wells. Cells were incubated for 6 or 24 h. Cellular proteins were extracted in RIPA buffer (ThermoFisher Scientific). Twenty-five μg of protein was loaded onto 4–12% SDS-PAGE gels and blotted onto a nitrocellulose membrane using the iBlot 2 Dry Blotting System (ThermoFisher Scientific). Blots were blocked and probed with antibodies against survivin (1:500 dilution; Abcam, Cambridge, UK) or against β-actin (1:10,000 dilution; Sigma-Aldrich, St. Louis, MO, USA). The blots were incubated with horseradish peroxidase-conjugated secondary antibodies and detected using the iBind Western System (ThermoFisher Scientific). Finally, the protein bands were scanned with a gel imaging instrument (KURABO, Osaka, Japan).
Proteins extracted from Jurkat cells were used as a positive control for survivin (data not shown).
Statistical analysis
The statistical significance of differences between the control and treated samples was evaluated by Student's t test. Statistical analyses were performed using the analysis function of SigmaPlot 14 software (Systat Software Inc., San Jose, CA).
Drug sensitivity screening
In the high-throughput drug sensitivity screening, a cell line was considered to be sensitive to a tested agent when its DES was larger than the corresponding reference DES (the value in peripheral blood mononuclear cells obtained from healthy volunteers). All cell lines were sensitive to several agents, as shown in Supplemental Fig. 1. Table 1 presents the mean DES among the five cell lines for each drug after subtracting the corresponding reference DES. Among the tested agents, YM155, a survivin inhibitor, and topotecan, a topoisomerase inhibitor, were potentially effective agents with relatively high DES values and the broadest spectrum. As the therapeutic benefit of topotecan has been well established in refractory or recurrent solid tumors of children [20,21], we proceeded with further examination of YM155 to explore its therapeutic potential for SMARCB1/INI1-deficient tumors in this study.
Drug combination screening
Combination therapy with cytotoxic chemotherapy and targeting drugs is a promising approach for cancer therapy. Thus, as the next step, we screened the combination effects of YM155 with other agents. The CIs between an agent at four serially diluted concentrations and YM155 at 10 nM were calculated based on the Bliss independence model. As a result (Supplemental Fig. 2, Table 2), the CI values were around 1.0, including some values that were less than 1.0, for all 79 tested agents, suggesting that combined use of YM155 may provide additive to synergistic effects with a wide range of anticancer agents. Among those agents that had additive to synergistic combination effects with YM155 in the screening assay, we next focused on chrysin as a partner of YM155, because chrysin was previously shown to antagonize the cytotoxic effects of various anticancer drugs [22].

Fig. 1 Dose-response curves of YM155 with or without chrysin in the 2D or 3D culture system. (a) To validate the combination effect between YM155 and chrysin, cell lines were applied onto 2 of the 384-well plates, in one of which chrysin was added to each tested well at 4 μg/ml (final concentration). YM155 at eight serially diluted concentrations was loaded onto the wells, and cell survival in each well was measured by the CellTiter-Glo luminescent assay. An open or closed circle in the figure indicates cell survival with or without chrysin, respectively. (b) The growth-inhibitory effects of YM155 and chrysin were evaluated in the collagen-gel-embedded 3D culture, using a collagen gel culture kit (Nitta Gelatin, Osaka, Japan). Cells suspended in the collagen gel were injected into each well of a 96-well plate. After 1-day incubation, medium containing serially diluted YM155 and/or chrysin at 4 μg/ml was added onto each well. After 72 h incubation, cell survival was evaluated by the CellTiter-Glo luminescent assay.

To validate the results of the drug combination screening, the combination effect of YM155 and chrysin was evaluated using the CellTiter-Glo luminescent assay (Fig. 1a). MRT and epithelioid sarcoma cells were seeded onto a 384-well plate and then exposed to 1.5-times serially diluted YM155 at concentrations of up to 40 nM, with or without chrysin at 4 μg/ml. The CIs of YM155 and chrysin were around 1.0 across the tested YM155 concentrations in all cell lines, but several combinations were judged as synergistic (CI < 1.0). The CIs at each concentration of YM155 with chrysin at 4 μg/ml are shown in Table 3. The combination effect of YM155 and chrysin was also evaluated by the collagen-gel-embedded 3D culture assay (Fig. 1b). Compared to the two-dimensional (2D) culture, the combination effects of YM155 and chrysin in the 3D culture system were not consistent. In RTK (J)-4N and KCS1 cells, the combination effects were considered to be rather antagonistic (CI > 1.0) at lower concentrations of YM155. Otherwise, however, the CIs were around 1.0 or below 1.0, especially in TTN45 and RTK (GIF) cells, in which the CIs were less than 1.0 at all tested concentrations.
Induction of apoptosis by YM155 and chrysin
It has been shown that YM155 induces apoptosis in cancer cells by activating the mitochondrial apoptotic pathway [23]. To evaluate if chrysin enhances YM155-induced apoptosis, we evaluated the nuclear morphological change in TTN45 and RTK (GIF) cells after exposure to YM155 and/or chrysin by nuclear staining with NucBlue (Supplemental Fig. 3). We found that YM155 exposure resulted in cells with apoptotic features, which were further enhanced in concert with chrysin, although chrysin alone did not induce morphological apoptotic features.
Mitochondrial depolarization after incubation of SMARCB1/INI1-deficient tumor cells with YM155 and/or chrysin was measured by the MitoPT JC-1 assay (Fig. 2, Supplemental Fig. 4). YM155 at 10 nM induced mitochondrial depolarization, and in TTN45 cells, chrysin showed a significant protective effect against YM155-induced mitochondrial damage. These results suggest that the mitochondrial pathway is involved in YM155-induced apoptosis in SMARCB1/INI1-deficient tumor cells; however, it is not likely that chrysin directly enhances YM155-induced mitochondrial apoptosis.

Fig. 2 Loss of mitochondrial transmembrane potential after agent exposure. Cells were seeded onto a 24-well glass-bottom plate. After 6 h incubation in control medium, medium containing 2-[2-(3-chlorophenyl)hydrazinylidene]propanedinitrile (CCCP; as a positive control) at 50 μM, medium containing YM155 alone at 10 nM, medium containing chrysin alone at 4 μg/ml, or medium containing the combination of YM155 at 10 nM and chrysin at 4 μg/ml, cells were stained with MitoPT JC-1 (ThermoFisher Scientific, Waltham, MA). The numbers of cells with intact or damaged mitochondrial transmembrane potential (JC-1 red or green, respectively) were counted under fluorescence microscopy. The bar with the error bar indicates the mean ratio of JC-1 red/green cells with standard deviation from triplicate counting. *P < 0.05, **P < 0.01
Combination of YM155 and chrysin reduced survivin expression
Survivin expression was evaluated by Western blotting in SMARCB1/INI1-deficient cell lines after 6 or 24 h exposure to YM155 at 10 nM, chrysin at 4 μg/ml, or their combination (Fig. 3). YM155 has been characterized as an inhibitor of survivin expression; however, upon 24 h exposure, YM155 noticeably reduced survivin expression in only YCUS-5 cells. In contrast, when YM155 was combined with chrysin, survivin expression was repressed in all cell lines. Such change was observed as early as 6 h after incubation with the two agents in TTN45 cells. Chrysin alone did not affect survivin expression. Thus, the mechanism by which the combination of YM155 and chrysin reduced the viability of the cancer cell lines includes synergistic inhibition of survivin expression.
Discussion
MRT is a rare, aggressive soft tissue sarcoma that mostly develops in infants [24]. Loss of SMARCB1/INI1 expression is a characteristic of MRT, but is not an exclusive feature of MRT [4]. SMARCB1/INI1 is a core subunit of the SWI/SNF (BAF) chromatin-remodeling complex, and loss of function of SMARCB1/INI1 has been shown to lead to several cellular events associated with proliferation such as cyclin D1 expression, activation of the Hedgehog pathway, and activation of the WNT/β-catenin pathway [6,7]. Loss of SMARCB1/INI1 is thought to be a driver event of oncogenesis in MRT and other SMARCB1/INI1-deficient tumors [6,7]. However, a specific therapy targeting SMARCB1/INI1 loss has not been developed because loss of wild-type SMARCB1/INI1 functions results in diverse cellular signal alterations.
In this study, we performed in vitro drug sensitivity screening in an attempt to discover a novel therapy against SMARCB1/INI1-deficient tumors. We found that YM155, a survivin inhibitor, effectively inhibited the survival of MRT and other SMARCB1/INI1-deficient tumor cell lines. Recently, EZH2 inhibition was suggested to counteract the epigenetic alterations caused by SMARCB1/INI1 loss and to have therapeutic potential in MRT [25]. Tazemetostat, a specific EZH2 inhibitor, was included in our tested agent panel; however, tazemetostat did not reduce the viability of the tested cell lines. Survivin is a member of the IAP family, which plays roles in the regulation of cell proliferation and cell death [11]. Survivin is overexpressed in various types of cancers including rhabdoid tumor of the kidney [12], although its expression is not seen in most normal differentiated tissues, suggesting that survivin is an attractive therapeutic target of cancer. Survivin inhibits apoptotic and autophagic cell death, and its overexpression is associated with an aggressive phenotype and reduced drug sensitivity of cancer cells [11]. YM155, a small-molecule inhibitor of survivin expression, has antitumor effects in several cancers, and results of clinical trials of YM155 in patients with non-small cell lung cancer [26], lymphoma [27], breast cancer [28], melanoma [29], or prostate cancer [30] have been reported, although the effects of YM155 in MRT and other SMARCB1/INI1-deficient tumors had not been determined. Based on our results, YM155 is expected to have a therapeutic effect against SMARCB1/INI1-deficient tumors. YM155 has been reported to be able to induce cancer cell death via pathways independent of survivin inhibition [31]. Because survivin inhibition by YM155 alone was not evident except in YCUS-5 cells, pathways other than survivin inhibition might be involved in YM155-induced SMARCB1/INI1-deficient tumor cell death. However, LQZ-7I, a survivin dimerization inhibitor [32], also inhibits the growth of RTK (J)-4N and YCUS-5 cells at a 50% inhibitory concentration of 7-8 µM (data not shown), supporting the notion that survivin can be a therapeutic target of SMARCB1/INI1-deficient tumors.
Combined use of drugs with different modes of action is a clinically promising approach to enhance the effect on the target and to disperse organ toxicities. Since survivin inhibition was suggested to be therapeutic in SMARCB1/INI1-deficient tumors, we screened agents for their synergy with YM155. Among 79 agents with various modes of action, most had more than additive effects upon simultaneous addition of YM155. Among these agents, we decided to perform further validation studies with chrysin, since the result seemed to contradict the findings of our previous study [22]. In the validation study, which used a fixed dose of chrysin (in contrast to the drug combination screening, where a fixed dose of YM155 was used), the CIs were approximately 1.0 at various concentrations of YM155 in all tested cell lines. Thus, chrysin was shown to have at least an additive effect in combination with YM155. The 3D culture system provides a more physiological in vitro condition than the 2D culture system, and cancer cells in 3D culture respond to chemotherapeutic drugs differently than cancer cells in conventional 2D culture [18,19]. In this study, we utilized the collagen gel droplet embedded-drug sensitivity test (CD-DST) [33][34][35] with some modifications to adapt the method to high-throughput drug screening. In the CD-DST, various drugs added to the culture medium have been successfully evaluated for their cytotoxicity to 3D-cultured tumor cells in collagen gel droplets. Using the modified CD-DST, the combination effects of YM155 and chrysin were also judged as being mostly additive to synergistic, except for some data points at lower concentrations of YM155 where the combination effects were judged as antagonistic, presumably due to a computational artifact arising from the decreased cytotoxic effects of chrysin in the 3D culture.
Chrysin, a bioactive natural flavone, has been shown to have several bioactivities including antioxidant, anti-inflammatory, and anti-tumor effects [36]. We previously showed that 5,7-dimethoxyflavone, a bioavailable derivative of chrysin, induced cell cycle arrest in acute lymphoblastic leukemia cells and antagonized the cytotoxic effects of simultaneously added chemotherapy drugs [22]. In contrast, another study showed that chrysin increased the sensitivity of pancreatic cancer cells to gemcitabine by inhibiting the activity of carbonyl reductase 1, which is associated with resistance to gemcitabine [37]. Thus, the combination effect conferred by chrysin might differ depending on the partner drug or the tumor type.
In this study, chrysin enhanced YM155-induced apoptosis in SMARCB1/INI1-deficient tumor cells, although chrysin at 4 μg/ml alone did not induce apparent apoptotic features upon 48 h exposure. YM155 is thought to induce both intrinsic and extrinsic apoptosis by inhibiting survivin expression. Because chrysin has been reported to induce apoptosis in some cancers by activating the mitochondrial apoptotic pathway [38], the changes in mitochondrial transmembrane potential in SMARCB1/INI1-deficient tumor cells were determined after exposure to YM155 and chrysin. We found that YM155 at 10 nM induced mitochondrial damage upon incubation with SMARCB1/INI1-deficient tumor cells for 6 h; however, further enhancement by adding chrysin was not observed at this time point. Among its various biological activities, chrysin has been shown to act as a histone deacetylase inhibitor (HDACi) [39]. HDACi represses nuclear factor-kappa B target gene expression, including survivin expression; in fact, chrysin was reported to decrease survivin expression in melanoma cells [40]. In the present study, chrysin significantly enhanced YM155-induced suppression of survivin expression, especially in TTN45 cells, where an apparent decrease in survivin expression was seen after exposure to the combination of YM155 and chrysin for as little as 6 h. These results suggest that synergistic suppression of survivin expression underlies the anti-tumor effect of the combination of YM155 and chrysin in SMARCB1/INI1-deficient tumor cells.
In summary, our data suggest that survivin can be a therapeutic target in MRT and other SMARCB1/INI1-deficient tumors. Chrysin, a dietary flavonoid, suppressed survivin expression in concert with YM155. Considering the poor bioavailability of chrysin [41], these results are not immediately applicable to clinical practice; however, they provide important suggestions for the development of effective and less toxic therapies. | 2022-09-30T13:05:54.050Z | 2022-09-29T00:00:00.000 | {
"year": 2022,
"sha1": "f9bcc104191a86e2c304edacd043857f51d15768",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1786163/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "918b555d3ddc745b1f9e525fbcf87b2fea240107",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39095277 | pes2o/s2orc | v3-fos-license | The Bordetella Adenylate Cyclase Repeat-in-Toxin (RTX) Domain Is Immunodominant and Elicits Neutralizing Antibodies*
Background: The protective antigen adenylate cyclase toxin (ACT) has not been included in current pertussis vaccines partly due to incomplete understanding of its protective epitopes. Results: The repeat-in-toxin (RTX) domain is immunodominant in mice and contains neutralizing epitopes. Conclusion: The RTX domain induces similar neutralizing antibody responses as ACT. Significance: The RTX domain may be an alternative to ACT for inclusion in future vaccines. The adenylate cyclase toxin (ACT) is a multifunctional virulence factor secreted by Bordetella species. Upon interaction of its C-terminal hemolysin moiety with the cell surface receptor αMβ2 integrin, the N-terminal cyclase domain translocates into the host cell cytosol where it rapidly generates supraphysiological cAMP concentrations, which inhibit host cell anti-bacterial activities. Although ACT has been shown to induce protective immunity in mice, it is not included in any current acellular pertussis vaccines due to protein stability issues and a poor understanding of its role as a protective antigen. Here, we aimed to determine whether any single domain could recapitulate the antibody responses induced by the holo-toxin and to characterize the dominant neutralizing antibody response. We first immunized mice with ACT and screened antibody phage display libraries for binding to purified ACT. The vast majority of unique antibodies identified bound the C-terminal repeat-in-toxin (RTX) domain. Representative antibodies binding two nonoverlapping, neutralizing epitopes in the RTX domain prevented ACT association with J774A.1 macrophages and soluble αMβ2 integrin, suggesting that these antibodies inhibit the ACT-receptor interaction. Sera from mice immunized with the RTX domain showed similar neutralizing activity as ACT-immunized mice, indicating that this domain induced an antibody response similar to that induced by ACT. These data demonstrate that RTX can elicit neutralizing antibodies and suggest it may present an alternative to ACT.
Whooping cough is a highly infectious disease caused primarily by the bacteria Bordetella pertussis. While disease incidence has dropped dramatically due to the initiation of widespread vaccination programs using killed bacteria in the 1940s, in recent years rates have rebounded, reaching a 60-year high in the United States in 2012 (1)(2)(3). This trend is especially troubling for unimmunized infants, who are most susceptible to the disease and exhibit the highest rates of morbidity and mortality. Modified vaccination strategies, including booster immunization of adolescents, adults, and pregnant women, have been implemented to reduce transmission to neonates.
This increase in disease incidence coincides with the switch from whole cell to acellular vaccines in the 1990s, and has been attributed to several factors, including increased awareness, mismatch between vaccine and circulating strains, a Th1/Th2 immune response instead of the more effective Th1 response, and a shorter duration of protection conferred by acellular vaccines (4). Recently, Warfel et al. (5) demonstrated that acellular vaccines protect against disease symptoms but not subclinical infection or transmission in a novel non-human primate model. Taken together, these data provide a compelling argument for modification of the current vaccine.
Currently licensed acellular vaccines contain chemically detoxified pertussis toxin and up to four surface adhesins, including filamentous hemagglutinin, pertactin, and fimbriae 2/3. Exciting approaches in development to enhance vaccine-mediated protective immunity include a genetically attenuated B. pertussis for intranasal delivery (6), nanoparticle formulations including purified antigens and novel adjuvant formulations (7), as well as inclusion of additional highly conserved antigens in the current vaccine (8,9). A strong candidate for inclusion in any of these is the adenylate cyclase toxin (ACT), which aids in immune evasion and is produced by three closely related Bordetella species, including B. pertussis, Bordetella parapertussis, and Bordetella bronchiseptica (10,11).
ACT-deficient Bordetella strains have shown significantly compromised colonization and persistence in various mouse models (12)(13)(14), whereas some hypervirulent strains express higher ACT levels (15). Moreover, active or passive immunization with polyclonal anti-ACT antibodies protected mice against lethal respiratory challenges by B. pertussis and B. parapertussis (15) and shortened the period of bacterial colonization in the respiratory tract (16). Finally, natural infection of humans results in a strong anti-ACT antibody response (17).
ACT is a large ~177-kDa protein consisting of two functionally discrete regions as follows: the catalytic domain (residues 1-385) and a pore-forming or hemolysin region that is part of the larger repeat-in-toxin (RTX) family, represented in >250 bacterial strains (Fig. 1A). After translocation into the cytosol, the catalytic domain binds eukaryotic calmodulin with low nanomolar affinity (18) and rapidly converts available ATP to cAMP via its adenylate cyclase activity (19). The resulting supraphysiological cAMP levels disrupt signaling and bactericidal activities in phagocytic cells (20-22). The C-terminal ~1300 residues exhibit homology to the Escherichia coli α-hemolysin. This region consists of a hydrophobic domain capable of forming a cation-selective transmembrane channel (residues 525-715) (23), a modification region bearing two acylation sites at residues Lys-860 and Lys-983 (24), the RTX domain (residues 1006-1600), consisting of ~40 calcium binding sites formed by glycine- and aspartate-rich nonapeptide repeats, and finally, a C-terminal secretion signal (residues 1600-1706). The RTX region also harbors the receptor-binding site, with specificity for the αMβ2 integrin (also called CR3, Mac-1, and CD11b/CD18) present on phagocytic leukocytes (25,26). Both post-translational acylation by the co-expressed enzyme CyaC and calcium ion-mediated structural changes are essential for receptor binding, cAMP intoxication, and pore-forming activities (24,27).
Despite evidence indicating ACT is a protective antigen, few neutralizing antibodies have been described, and the location of neutralizing epitopes remains unclear. Moreover, ACT is prone to aggregation and degradation when produced by Bordetella or recombinantly by E. coli, precluding its inclusion in current acellular vaccine formulations (28). Therefore, we aimed to identify neutralizing antibodies and their domain specificity and to determine whether any single domain, possessing desirable expression and protein stability characteristics, can recapitulate the antibody responses induced by the holo-toxin.
To enhance folding and solubility, the hydrophobic domain, encompassing the region between the catalytic and RTX domains (residues 399-1096), was cloned into the pMalc-5x vector (New England Biolabs) between the NdeI and BamHI sites, downstream of the maltose-binding protein (MBP). The primers for PCR were 5′-gggcgcaCATATGcgccaggattccggct-3′ and 5′-atcggcGGATCCttaatggtgatgatggtgatgggcgctggcctcggaaggctggtgcac-3′, with the boldface nucleotides encoding a C-terminal His6-tag. The cyaC gene was inserted downstream of the hydrophobic domain between the BamHI and HindIII sites, with an upstream ribosome binding site to allow for co-expression.
ACT and Domain Expression and Purification-Full-length ACT was expressed from the plasmid pT7CACT3 with co-expression of the palmitoylating enzyme CyaC in E. coli strain XL-1 Blue (29). The holo-toxin was purified using single-step calmodulin-agarose affinity chromatography as described by Sebo and co-workers (29). Purified ACT was stored in 50 mM Tris, 8 M urea, 2 mM EDTA, pH 8.0, at 4 °C for short-term or −80 °C for long-term storage. The protein concentration was determined by absorbance at 280 nm using a molar extinction coefficient of 143,590 M⁻¹ cm⁻¹ as calculated from its amino acid sequence (30). ACT from BEI Resources was used as a reference for purity and toxicity.
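For illustration, the sequence-based extinction-coefficient estimate and the Beer-Lambert conversion can be sketched as below. The residue-count formula is the standard Pace/Gill-von Hippel estimate at 280 nm, and the absorbance in the example is a placeholder, not an actual ACT measurement.

```python
def epsilon_280(n_trp, n_tyr, n_cystine):
    """Sequence-based estimate (M^-1 cm^-1):
    epsilon = 5500*Trp + 1490*Tyr + 125*cystine."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

def concentration_molar(a280, epsilon, path_cm=1.0):
    """Beer-Lambert law: c = A / (epsilon * l)."""
    return a280 / (epsilon * path_cm)

# Using the reported ACT coefficient with a placeholder absorbance:
c = concentration_molar(a280=0.75, epsilon=143_590)
print(f"ACT concentration ~= {c * 1e6:.2f} uM")
```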
The catalytic and RTX domains of ACT were expressed in E. coli strain BL21(DE3). Briefly, 250 ml of TB media were inoculated from starter cultures to an absorbance at 600 nm (A600) of 0.05 and grown at 37 °C until A600 = 0.3-0.6, at which time 0.4 mM isopropyl β-D-thiogalactopyranoside (IPTG) was added to induce expression. After 4 h of growth at room temperature, the cells were harvested, resuspended in Buffer A (50 mM Hepes, 250 mM NaCl, 2 mM CaCl2, 40 mM imidazole, pH 8.0), and lysed with a French press (Thermo Scientific). After a 20-min centrifugation step at 20,000 rpm (JA-20 rotor), the supernatant was applied to a HisTrap column on an ÄKTA FPLC (GE Healthcare), followed by elution with a linear gradient of Buffer B (Buffer A + 500 mM imidazole). The hydrophobic domain was expressed in E. coli strain BL21 as above, and purification included immobilized metal affinity chromatography (IMAC) resin followed by an MBPTrap affinity column (GE Healthcare) and elution with 10 mM maltose.
Biophysical Characterization-To assess the oligomeric status of ACT and its domains, fractions eluted from HisTrap or IMAC were supplemented with either 100 l of 100 mM EGTA or 100 mM HBSC (HBS ϩ 2 mM CaCl 2 ), incubated on ice for 1 h, and loaded onto Superdex S200 column (except Superdex75 for CAT 400 ) equilibrated with HBS (50 mM Hepes, 150 mM NaCl, pH 7.8) or HBSC, respectively. Running buffers were HBS or HBSC to ensure the absence or presence of calcium ion. To assess secondary structure characteristics, purified ACT domains were dialyzed into 10 mM Tris-H 2 SO 4 , pH 8.0, and the concentrations were adjusted to 100 g/ml. To observe the effect of calcium ions on protein conformation, CaCl 2 were added to a final concentration of 2 mM and incubated at room temperature for 1 h. CD spectra between 180 and 260 nm were collected on a J-815 CD spectrometer (Jasco) at 25°C, at 1-nm intervals using a 1-mm rectangular cell. Each spectrum repre-sents the average of three scans subtracted with the spectrum of buffers (with or without CaCl 2 ). Data were fitted with the CDSSTR program on the DichroWeb server to estimate the percentage of secondary structures (31).
Murine Immunization-All protocols were approved by the University of Texas at Austin IACUC (protocol number 2012-00068), and all mice were handled in accordance with IACUC guidelines. As a source for antibody libraries, two 6-week-old BALB/c mice were primed intraperitoneally with 17 g of ACT (dialyzed against PBS to remove urea) in complete Freud's adjuvant. Four weeks later, the mice were bled through a tail vein and boosted subcutaneously with the same amount of PBSdialyzed ACT in incomplete Freund's adjuvant. Two weeks later, the mice were sacrificed, and blood was collected by cardiac puncture. Spleens were removed sterilely, sliced into pieces, and immediately immersed in 1 ml of cold RNAlater solution. After soaking overnight at 4°C, the solution was removed, and the spleens were stored at Ϫ80°C.
To assess the immunogenicity of individual ACT domains, 4 -6 BALB/c mice per group were immunized subcutaneously with equal moles of ACT and individual domains (10 g for ACT, 2.6 g for CAT 400 , 6.7 g for HP 1096 *, and 4.4 g for RTX 985 ) in complete Freund's adjuvant. Four weeks later, the mice were boosted subcutaneously with the same amount of antigen in incomplete Freud's adjuvant, a process that was repeated at 6 and 8 weeks. Blood was collected before immunization, 4 weeks after the first injection, and 2 weeks after each boost. Anti-ACT antibody titers were determined by ELISA (described below), with titer defined as the 50% effective concentration (EC 50 ) from a four-parameter logistic fitting to the ELISA data. Neutralization of ACT-induced cAMP intoxication of J774A.1 macrophage cells (ATCC number TIB-67) was tested with a 1:400 sera dilution.
Phage Display Antibody Library Construction-Total RNA was extracted from frozen spleens with TRIzol (Invitrogen) and the RNeasy mini kit (Qiagen) or PureLink RNA kit (Invitrogen) according to the manufacturers' instructions. The quality and concentration of total RNA were assessed by agarose gel electrophoresis and A 230:260:280 ratio (ϳ1:2:1 for pure RNA) measured by NanoDrop 2000 (Thermo Scientific). For first-strand cDNA synthesis, 5 g of total RNA was used. To maximize diversity, two separate reactions were performed using combinations of Superscript II ϩ d(T) 23 VN primer or Superscript III (Invitrogen)ϩ random hexamer (Thermo Scientific), following the manufacturers' instructions. The two sets of cDNA were pooled as template for amplification of the V L and V H repertoires using the primer sets and PCR conditions described by Krebber et al. (32). The PCR products were gel-purified, with 10 ng each of V L and V H used as template in an overlap PCR to generate V L -linker-V H fragments (scFv). This product was gelpurified and digested overnight with SfiI prior to directional ligation with similarly SfiI-digested pMopac24 vector (33). Ten individual electroporations were performed to transform XL1-Blue cells. The transformants were pooled, and an aliquot was 10-fold serially diluted and plated to count library size; the rest were plated on eight 150-mm 2ϫYT agar plates (10 g/ml tetracycline, 200 g/ml ampicillin, and 2% glucose). After incuba-tion overnight at 37°C, the bacterial lawns were scraped off in 2ϫYT medium and pooled to form the master library.
Phage Production, Purification, and Panning-Aliquots of the master library were used to inoculate 250 ml of 2ϫYT medium with 10 g/ml tetracycline, 200 g/ml ampicillin, and 2% glucose in 1-liter flasks to an A 600 of ϳ0.1. The cultures were grown at 37°C for 2-3 h until the A 600 reached ϳ0.6, induced, and rescued by adding 1 mM IPTG and M13KO7 helper phage (multiplicity of infection of ϳ20), incubated for 30 min without shaking at 37°C, and then returned to a shaking incubator at room temperature. Three hours after adding helper phage, the culture was supplemented with 50 g/ml kanamycin prior to overnight incubation with shaking. Phage were then purified by double precipitation with 0.2 volume of precipitation solution (2.5 M NaCl, 20% PEG-8000). The concentration of viable phage was assessed as colony-forming units (cfu), with serially diluted phage added to log-phase XL1-Blue cells, followed by plating on 2ϫYT agar plate with 200 g/ml ampicillin, and enumeration of colonies after overnight incubation.
Two rounds of panning were performed using ACT as bait. Eight ELISA plate wells (Costar) were coated with 50 l of 2 and 1 g/ml ACT in PBS at 4°C overnight for the first and second rounds, respectively. Input phage (100 l) were diluted into 900 l of 5% nonfat milk in PBST (PBS, 0.05% Tween 20) and incubated for 1 h before transferring 100 l to each of the 8 wells. After a 1-h incubation at room temperature, followed by five (or 10 for round 2) washes with PBST, bound phage were eluted with 100 l per well of 0.1 N HCl for 10 min at room temperature. The eluted phage was pooled and immediately neutralized with 48 l of 2 M Tris base. Half of the output phages was added into 5 ml of log-phase XL1-Blue culture grown with 10 g/ml tetracycline at 37°C to retain the F plasmid, incubated for 30 min without shaking and 1 h with shaking at 225 rpm in 37°C, spun down, and then plated on six 150-mm 2YT agar plates (200 g/ml ampicillin, 10 g/ml tetracycline, and 2% glucose). After overnight incubation at 37°C, colonies or lawn were scraped, pooled, and mixed thoroughly, and aliquots were used for second round of panning as described above.
Input and output phage titers (colony-forming units) were determined by infecting and plating E. coli as described above. Sequence diversity was monitored throughout all steps by performing colony PCR of random colonies on the phage titration plates, followed by BstNI fingerprinting and agarose gel electrophoresis. Clones with unique fingerprints were confirmed by DNA sequencing.
To produce monoclonal phage clones from panning outputs, single colonies from output plates were inoculated into sterile 96-well plates containing 100 l of 2YT medium (2% glucose, 200 g/ml ampicillin, and 10 g/ml tetracycline) and grown at 37°C overnight with shaking. The next morning, 10 l of the overnight culture was inoculated into another plate with 90 l per well of fresh medium containing 0.25% glucose and antibiotics, and grown at 37°C for 3 h, then 50 l of 2YT (200 g/ml ampicillin, 3 mM IPTG, and M13KO7 helper phage) was added and then shaken at room temperature for 3 h before adding 50 l of 2YT (200 g/ml ampicillin, 1 mM IPTG, 200 g/ml kanamycin). The plate was then shaken at room temperature overnight.
Antibody Expression and Purification-To convert phagedisplayed scFvs to soluble single chain antibody fragments (scAbs), consisting of a variable light chain domain (V L ) connected by a flexible (Gly 4 Ser) 2 to a variable heavy chain domain (V H ) and followed by a human constant domain to enhance expression and solubility, the scFv region was removed from pMopac24 phagemid vector by SfiI digestion and directionally ligated into SfiI-digested pMopac54 plasmid (34). For scAb production, 100 ml of TB supplemented with 200 g/ml ampicillin and 1% glucose were inoculated at A 600 ϭ 0.02 and grown overnight at room temperature. The next morning, cells were pelleted at 5000 ϫ g for 10 min at room temperature, resuspended in 100 ml of TB medium with ampicillin but no glucose, and grown at room temperature for 1 h before induction with 1 mM IPTG. After another 4 h, cells were harvested by centrifugation at 5000 ϫ g for 10 min at 4°C. Osmotic shock was performed as described (34). scAbs in the dialyzed shockates were purified by IMAC resin followed by size exclusion chromatography with a Superdex 200 column on FPLC (GE Healthcare). Protein concentrations were measured by BCA assays (Pierce) using a BSA standard with purity assessed by SDS-PAGE.
To convert scAbs into full-length IgG with enhanced stabilities and in vivo half-lives, the V L and V H genes were subcloned onto Ig-Abvec and IgG-Abvec vectors as described by Smith et al. (35). For IgG production, paired Ig-Abvec and IgG-Abvec plasmids were transiently transfected into CHO-K1 cells using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. Culture media were collected at 1-2-day intervals, neutralized with 1 Tris, pH 8.0, and pooled, and IgG was purified by ammonium sulfate precipitation followed by HiTrap protein A column. The purity and presence of aggregates were assessed by SDS-PAGE and size exclusion chromatography using a Superdex S200 column. The concentration was determined by A 280 using extinction coefficients calculated from deduced amino acid sequences (30).
Analysis of Antibody Binding by ELISA-For monoclonal phage screening, the 96-well phage production plates described above were spun at 3000 ϫ g for 20 min with 40 l of supernatant transferred to a coated (2 g/ml ACT in PBS) and blocked (5% nonfat milk in PBST, M-PBST) ELISA plate, containing 60 l of M-PBST per well, and incubated at room temperature for 1 h. After four washes with PBST, 50 l of 1:2000 HRP-conjugated anti-M13 antibody (GE Healthcare) was added and incubated for1 h at room temperature. The plate was washed four times, and 50 l/well 3,3Ј,5,5Ј-tetramethylbenzidine substrate was added and incubated at room temperature. The reaction was quenched by adding 1 M HCl to the 50 l/well, and the absorbance at 450 nm recorded with a SpectraMax M5 (Molecular Devices). Wells with absorbance higher than 2-fold of background were identified for further characterization.
Binding assays to assess domain specificity and relative affinity of soluble scAb or IgG proteins were performed in a similar manner. Purified ACT (1 g/ml) or domains (equimolar with ACT) were coated on ELISA plates, followed by blocking and serial dilutions of purified antibodies, and finally detection with goat anti-mouse IgG HRP-conjugated antibody (to detect murine antibodies), goat anti-human chain HRP-conjugated antibody (to detect scAbs), or goat anti-human IgG (Fc-spe-cific) HRP-conjugated antibody (to detect recombinant IgG). For competition ELISA, the antibody of interest was used at a fixed concentration determined to yield 70 -80% of the maximal signal, and mixed with an equal volume of serially diluted competitor antibody, followed by detection as above.
To assess the reactivity of sera from humans exposed to B. pertussis, purified ACT (1 g/ml) or domains (equimolar concentrations as ACT) were coated on ELISA plates. Plates were blocked as above; human sera were serially diluted in M-PBST, and bound antibodies were detected with goat anti-human IgG (Fc-specific) HRP-conjugated antibody.
Nine randomly selected samples were tested in duplicate. The absorbance value for each sample binding to a domain was normalized to that sample's signal on an ACT-coated well at a 100-fold dilution (in the linear dose-response range) as follows: (A 450 (domain coated well) Ϫ A 450(uncoated well) )/ (A 450 (ACT coated well) Ϫ A 450(uncoated well) ). Human sera were obtained from Vanderbilt University Medical Center under a protocol approved by the local institutional review board (IRB 061262, 070258, and 090806). Use of the samples was approved by the University of Texas at Austin (2009-05-0096). The study was conducted in accordance with the Declaration of Helsinki, with written informed consent obtained from each participant prior to study entry. The original consent forms allowed for sample use in subsequent studies. CaCl 2 levels in all ELISAs were maintained at Ͼ2 mM unless indicated, and each assay was performed at least twice.
In Vitro cAMP Intoxication and Neutralization Assay-J774A.1 cells were grown in DMEM (Sigma) supplemented with 10% fetal bovine serum, 1 mM sodium pyruvate, and penicillin/streptomycin. To measure the cAMP intoxication of J774A.1 cells by ACT or antibody neutralization, J774A.1 cells were seeded at 4 ϫ 10 4 /cm 2 in 24-well plates (Costar) 1 day before the assay. ACT alone or with antibodies was diluted in DMEM without supplements in a final volume of 1 ml; base DMEM contains 1.8 mM CaCl 2 . ACT was used at 125 ng/ml in all assays unless otherwise specified; antibodies were present at 160-fold molar excess. ACT and antibody mixtures were incubated at room temperature for 30 min, and cells were washed twice with plain DMEM prior to addition of 480 l of antibody/ ACT solution to duplicate wells. The plate was incubated at 37°C for 30 min in a CO 2 incubator, followed by two washes with cold PBS. Lysis solution (500 l; 0.1 N HCl, 0.1% Triton X-100) was added into each well, and the plate was rocked on ice for 10 min. The lysates were transferred into 1.5-ml tubes and boiled for 10 min to inactivate ACT and cAMP-hydrolyzing enzymes. To assess cAMP intoxication in CHO cells, ACT was used at 250 ng/ml, with all other assay conditions held constant.
The resulting cellular lysates were clarified by centrifugation at 13,000 ϫ g for 5 min. The supernatant was diluted 6-fold with 200 mM Hepes, 150 mM NaCl, 0.05% Tween, pH 8.0, prior to cAMP measurement using a competition ELISA as described by Karimova and Ladant (36). All assays were performed at least in duplicate, with cAMP concentrations normalized to total protein concentration in the lysates as measured by BCA assay (Pierce). FEBRUARY 6, 2015 • VOLUME 290 • NUMBER 6
JOURNAL OF BIOLOGICAL CHEMISTRY 3579
To evaluate ACT neutralization in the context of the whole bacterium, B. pertussis Tohama I was grown on a Bordet Gengou agar plate supplemented with 15% defibrinated sheep blood (BD Biosciences) at 37°C for 4 to 5 days. Bacteria were then inoculated into modified synthetic Stainer-Scholte medium and grown at 37°C with shaking at 225 rpm for 20 -24 h to an A 600 of 0.7-1.0. Bacteria were pelleted by centrifugation at 5000 ϫ g for 10 min, resuspended in PBS, and diluted to an A 600 of 0.4 in DMEM ϩ 10% heat-inactivated FBS, and then mixed with an equal volume of serially diluted neutralizing or control IgGs in DMEM ϩ 10% heat-inactivated FBS, and incubated at room temperature for 20 min before addition to adherent J774A.1 cells and incubation of 1 h at 37°C. Intracellular cAMP level was then measured as above.
J774A.1 Cell Lysis Assay-Cell lysis was monitored by enzymatic activity of lactate dehydrogenase released into the medium upon cell lysis. J774A.1 cells were seeded 10 5 cells/well in 96-well round-bottom plates 1 day before the assay. 100 l each of ACT alone (0.25 g/ml) or ACT preincubated with scAbs (10 g/ml, 160-fold molar excess) in plain DMEM were added to triplicate wells and incubated for 2 h at 37°C. The plate was then centrifuged at 250 ϫ g for 5 min, and lactate dehydrogenase activity in the supernatants was measured by a colorimetric assay with the CytoTox 96 kit (Promega, WI). After subtracting the background signal from control wells without ACT, the sample absorbance at 490 nm was normalized to ACT-only controls as follows: Analysis of ACT-Integrin Binding by ELISA-Recombinant murine ␣ M  2 integrin (R&D Systems) was coated at 1 g/ml in PBS at 4°C overnight and blocked with M-PBST. ACT or purified domains were serially diluted and incubated for 1 h at room temperature, followed by detection with polyclonal rabbit anti-ACT antibody and HRP-conjugated goat anti-rabbit antibody. To assess the effect of antibodies on ACT binding to integrin, ACT (1 g/ml) was mixed with an equal volume of serially diluted antibody (5-, 1.6-, 0.5-, and 0-fold molar excess) and incubated for 1 h at room temperature before transferring to a blocked ␣ M  2 ELISA plate. The bound ACT was detected as described above.
Flow Cytometry Analysis of ACT Binding to Cells-ACT was dialyzed against HBSC (50 mM Hepes, 150 mM NaCl, 2 mM CaCl 2 , pH 8.0) to remove urea and then biotinylated with 100fold molar excess of EZ-Link Sulfo-NHS-LC-Biotin (Thermo Scientific) at room temperature for 2-3 h before quenching with 1 M Tris, pH 8.0, and dialysis against HBSC overnight in 4°C. Biotinylated ACT (210 l of 0.8 g/ml) was incubated with an equal volume containing 120 g/ml purified scAb (M2B10, M1H5, M1F11, and M1C12) in DMEM ϩ 1% BSA at room temperature for 30 min. Then 200 l of the incubated mixtures was added to 4 ϫ 10 5 washed J774A.1 cells in duplicate and incubated on ice for 30 min to allow ACT binding but not internalization. After two washes with FACS buffer (HBSC ϩ 2% FBS), 200 l of 1:500 phycoerythrin (PE)-conjugated streptavidin (BioLegend) was used to detect cell-associated biotinylated ACT. After 20 min of incubation on ice and three washes, the cells were finally resuspended in 600 l of FACS buffer and analyzed on an LSR Fortessa II. Data analysis was performed with Flowjo software (Version 10).
RESULTS
ACT Is Prone to Aggregation and Proteolysis-The native ACT holotoxin secreted from B. pertussis readily aggregates at the bacterial surface and is prone to proteolytic degradation (28), while recombinant toxin expressed in the E. coli cytoplasm forms inclusion bodies (37)(38)(39). To increase the yield of purified protein, 8 M urea was used to extract the aggregated protein from bacterial cell pellets; even so, the protein is highly susceptible to proteolysis with early reports observing enzymatic activity in 43-and 45-kDa fragments (40). Efforts to remove urea, such as dialysis and dilution, result in significant aggregation and fragmentation (41).
As a result, standard purification protocols include solubilization of the cell pellet with urea, followed by calmodulin affinity or sequential anionic and hydrophobic interaction chromatographic steps followed by storage in 8 M urea. Assays using the toxin call for dialysis or dilution of urea-solubilized ACT into assay media immediately before experimentation, most likely resulting in an ensemble of fully and partially folded ACT molecules and uncertainty regarding the exact concentration of active toxin molecules. ACT purified in our laboratory exhibited similar in vitro activity when incubated with J774A.1 cells as described by Eby et al. (42): 125 ng/ml produced 10,000-ϳ40,000 pmol of cAMP per mg of total cellular protein in 30 min at 37°C.
However, dialysis or dilution of ACT into buffer without urea led to protein that was poorly behaved and retained by the SEC column (Ͻ5% of protein applied to the column was eluted). In contrast, overnight dialysis into HBSC containing 1 M urea resulted in high molecular weight aggregates that eluted off the SEC column at a size corresponding to ϳ600 kDa. An alternative refolding approach, rapid 10-fold dilution into HBSC (final urea concentration 0.8 M) followed immediately by SEC, yielded a broader aggregate peak and a smaller peak corresponding to the expected size for monomer (Fig. 1B). The different monomer yields may reflect a time-dependent aggregation process or the presence of folding intermediates with different aggregation propensities in the two refolding procedures. A recent report reiterated the challenges of refolding ACT (43).
Individual ACT Domains Are Biophysically Superior to ACT-To identify which, if any, ACT domains are predominantly recognized by polyclonal antibody responses, we expressed individual domains in E. coli with affinity tags to facilitate purification (Table 1). Based on prior reports (18, 29, 44 -46), the N-terminal catalytic domains (residues 1-373 (CAT 373 ), 1-385 (CAT 385 ), and 1-400 (CAT 400 )) and C-terminal RTX domains (residues 751-1706 (RTX 751 ) and 985-1706 (RTX 985 )) were cloned into the pET28a vector for cytoplasmic expression with N-terminal His 6 tags to facilitate purification. In our hands, RTX(482-1706) was poorly soluble and purified inefficiently; instead we selected RTX 985 as the largest fragment to exclude both acylation sites but retaining the N terminus before the first Gly-Asp rich repeat. To enhance solubility, the hydrophobic domain (residues 399 -1096 (HP 1096 ), encompassing the region between the catalytic and RTX domains) was fused down-stream of MBP, with a C-terminal His 6 tag and di-cistronic expression of the specific acylating enzyme CyaC (indicated by *). After cytoplasmic expression of each construct and cell lysis, a one-step affinity chromatography with a HisTrap column yielded ϳ5-80 mg of protein per liter of culture with Ͼ90% purity as determined by SDS-PAGE (Fig. 1C). For HP-MBP fusion proteins, a second chromatographic step with an MBP-Trap column was required to reach a similar level of purity, although purity, proteolysis, and solubility issues persisted.
To determine whether the purified domains exhibited native-like structure and expected calcium-dependent structural changes, SEC was used to assess the oligomeric state, and circular dichroism (CD) spectroscopy was used to assess the secondary structure content. The catalytic domains eluted as a single peak of expected size (40 kDa) with the estimated composition of secondary structures (56% helix, 14% strands, 13% turns, and 17% unordered) similar to that determined by x-ray crystallography ( Fig. 2A) (18). Although the catalytic domain formally encompasses residues 1-373, the construct encompassing residues 1-400 was selected for further use to include the neutralizing epitope recognized by antibody 3D1 (47).
The hydrophobic fusion proteins eluted as broad aggregate peaks when acylated or nonacylated. In the absence of acylation, multiple smaller peaks were observed, suggesting that acylation may stabilize folding of this domain and protect against proteolysis. In both cases, the CD spectra were not characteristic of unfolded or aggregated proteins (Fig. 2B). The solubility and CD spectra of these constructs may be dominated by the MBP fusion partner, but we did not attempt to remove it, as a shorter hydrophobic domain was reported to further aggregate under these circumstances (48).
The RTX domain includes ϳ40 calcium binding Gly-Asp repeats, grouped into five blocks, and separated by non-RTX flanking regions (Fig. 1A). Structural data for RTX-containing proteins suggest the repeats fold into parallel -helix structures. In the presence of calcium ions, the protein converts from an intrinsically disordered domain into a compact -roll structure with an altered CD spectrum and a reduced hydrodynamic radius that appears to be further stabilized by acylation (43,49,50). RTX 985 exhibited a shift from largely monomer in the presence of calcium (78 kDa) to a mixture of oligomers (ϳ380 kDa) upon the addition of EGTA to chelate calcium ions. These structural changes are captured by CD, which shows a more ordered state in the presence of calcium ions (Fig. 2C), consistent with that observed with a similar construct also lacking the acylation sites (residues 1006 -1706) (51). SDS-PAGE indicates the two peaks observed with calcium have the same molecular weight suggesting that RTX 985 forms two stable states with different hydrodynamic radii (Fig. 2C).
Theorizing that these two forms are a consequence of the missing acylation sites, we generated a larger construct to include both sites. RTX 751 expressed without CyaC eluted as a single peak of expected size (ϳ110 kDa) in the presence or absence of calcium. When co-expressed with CyaC, presumably resulting in acylation at residues Lys-860 and Lys-983,
TABLE 1 Biophysical analysis of ACT constructs
ACT and various derivatives were purified and subjected to size exclusion chromatography in the presence of 2 mM calcium or an excess of EGTA to chelate free calcium ions. The expected molecular mass for each construct is noted, as is the observed size and thermal melting temperature of the major peak. Constructs noted with * were co-expressed with CyaC to acylate residues Lys-860 and Lys-983. ND means not determined. RTX 751 * exhibited a calcium-dependent conversion from a compact monomer (ϳ90 kDa) to a soluble higher molecular weight aggregate (ϳ600 kDa) after depletion of calcium (Fig. 2D). The catalytic and RTX domains retain much of the expected structural behavior, with RTX 751 * appearing to better stabilize the monomeric form than RTX 985 . Similar to previous reports on ACT behavior (26, 27, 51-53), the presence of calcium and acylation appears to stabilize the RTX monomers. Anecdotally, the RTX domains were stable for at least 6 months at 4°C with minimal aggregation or degradation (RTX 751 and RTX 751 * were slightly more stable than RTX 985 ), as measured by SDS-PAGE and SEC to assess the monomeric fraction. Although the CAT 400 domain also remains monomeric, it starts to degrade after 3 months under the same conditions, as measured by SDS-PAGE. The compact -roll structure RTX domains adopt in the FIGURE 2. ACT domain oligomeric state and secondary structure. Purified domains were separated by size exclusion chromatography (Superdex200 column, except Superdex75 for CAT 400 ), with far UV circular dichroism spectra (Jasco J-815) used to assess secondary structure in the presence of 2 mM CaCl 2 or the absence of calcium ions. A, catalytic domain, spanning residues 1-400 (CAT 400 ), eluted as a single peak of expected size. The secondary structure is similar to that observed in the CAT 373 crystal structure (56% helix, 14% strands, 13% turns, and 17% unordered). B, HP domain, spanning residues 399 -1096, with an N-terminal MBP fusion protein eluted off SEC as high molecular weight aggregates whether acylated (*) or nonacylated. C, RTX 985 formed high molecular weight aggregates in the absence of calcium ions but eluted as two peaks, one corresponding to the expected molecular mass of 78 kDa in the presence of calcium. Circular dichroism revealed significant conformational change upon addition of 2 mM CaCl 2 corresponding to an increase in -strand content. D, RTX 751 domain exhibits a similar calcium-dependent delay in elution volume, which is more pronounced when the protein is acylated (*), and under these conditions it yields a single monomer peak. Inset, SDS-polyacrylamide gels show proteins present in the indicated peaks, with arrows indicating the expected monomer size.
Domain
presence of calcium may contribute to their overall higher melting temperature and resistance to proteolysis than CAT 400 (Table 1).
ACT Domains Are Biochemically Similar to ACT-To determine whether our domain constructs retain structural elements present in ACT, we screened a panel of nine previously characterized monoclonal antibodies for binding to ACT and individual domains by ELISA (47). All nine antibodies tested recognized only the expected domain and did not distinguish between acylated and nonacylated domains (Table 2), supporting the notion that the domains are properly folded. One exception is 2B12, whose epitope includes residues 888 -1006, but did not recognize HP 1096 . This may be due to incomplete folding of HP 1096 or the binding site may require additional residues distal to residue 1006 not present in this construct.
As the RTX domain harbors the receptor-binding site between residues 1166 and 1281 (26), we assessed the ability of our RTX constructs to bind purified murine extracellular ␣ M  2 receptor, a known ACT cell surface receptor (25). Although ACT and RTX 751 * both bound the murine receptor when acylated and in the presence of calcium (apparent EC 50 ϳ20 nM; Fig. 3), ACT exhibited considerable nonspecific binding to wells without the receptor. This is similar to the sticky behavior observed when ACT without urea was applied to and retained by the SEC column and likely reflects the presence of solventexposed hydrophobic patches in misfolded ACT molecules. Monomeric RTX 985 did not bind the ␣ M  2 receptor, consistent with prior studies showing that post-translational acylation is essential for receptor binding (29,54). In summary, the individual domains were readily purified, with yields of CAT 400 at ϳ80 mg/liter culture, nonacylated RTX and HP domains at ϳ5 mg/liter culture, and the acylated domains at Ͻ2 mg/liter culture. The CAT and RTX domains share many structural features with ACT, although the HP domain is mostly aggregated.
Anti-ACT Antibodies Primarily Recognize RTX-To determine whether a single ACT domain dominates the immune response, we used phage display to analyze murine antibody repertoires after ACT immunization. Two mice were immunized intraperitoneally with 17 g of ACT in complete Freund's adjuvant and were boosted subcutaneously once with incomplete Freund's adjuvant. The resulting sera neutralized the toxic activities of ACT at a 1:400 dilution in an in vitro cAMP intoxication assay (data not shown). The spleens were harvested, and each was used to construct an antibody phage display library, each containing ϳ10 7 total transformants. After two rounds of panning against ACT, 90 individual clones from each library were grown in 96-well plates and assessed for ACT binding by phage ELISA. Of these, 57 and 60 clones from the two libraries respectively yielded signals 2-fold above background, with 29 and 21 expressing unique sequences, as determined by BstNI fingerprinting and DNA sequencing.
To determine the domain specificities of unique antibodies, monoclonal phage were assessed for binding to individual domains in ELISA (Fig. 4A). Few antibodies bound the catalytic domain; none bound the hydrophobic domain, whereas the rest (27 of 29 and 20 of 21, respectively) bound the RTX 985 domain. This observed high frequency of RTX 985 -specific antibodies concurs with previous reports by Lee et al. (47) that the majority of antibodies discovered using hybridomas recognized the RTX domain and by Betsou et al. (29) that this domain may be immunodominant.
Anti-RTX Antibodies Can Neutralize ACT-Next, we screened unique antibodies identified from phage libraries for the ability to neutralize ACT activities using an in vitro cAMP assay. There are several steps during ACT intoxication of cells that are susceptible to antibody-mediated neutralization, including receptor binding, membrane insertion, and translocation, and interference with any of these will be reflected in decreased intracellular cAMP accumulation in or reduced lysis of target cells. For this assay, we employed the murine macrophage cell line J774A.1 bearing the ␣ M  2 integrin.
For the neutralization assay, antibodies were expressed as recombinant scAbs, composed of the variable light chain (V L ) joined to a flexible (Gly 4 Ser) 2 linker and the variable heavy chain (V H ), followed by a C-terminal human chain constant region to increase solubility and serve as a detection handle (9). Based on multiple sequence alignments, 31 antibodies with FIGURE 3. ACT and RTX domains bind purified ␣ M  2 receptor. Soluble murine ␣ M  2 receptor was coated onto ELISA plates and blocked, and ACT or domains were serially diluted in M-PBST. Bound protein was detected with polyclonal rabbit anti-ACT antibody and goat anti-rabbit HRP. To assess nonspecific binding, control wells were not coated with ␣ M  2 receptor but blocked with M-PBST only. A, RTX 751 *, and B, acylated ACT* showed receptordependent binding, although ACT also exhibited significant nonspecific binding. All other domains showed no specific or nonspecific binding.
TABLE 2 Biochemical analysis of ACT constructs
Binding of previously characterized anti-ACT murine antibodies (47) to ACT and various derivatives described in this work in ELISAs. ACT constructs were coated on ELISA plates, and thee antibodies were titrated and detected with anti-mouse IgG-HRP. unique complementary determining regions were selected for scAb expression (Fig. 4B). Ten scAbs either expressed poorly (Ͻ200 g/liter culture) or bound ACT weakly (concentration Ͼ238 nM required for saturation) and were not tested further.
To identify antibodies neutralizing ACT function, ACT and individual scAbs were incubated at a 1:160 molar ratio before addition to adherent J774A.1 cells and determination of intracellular cAMP levels by competition ELISA. Of the 21 scAbs tested, nine reduced the cAMP level by more than 90%, as compared with cells treated with ACT alone, which we consider highly neutralizing in this assay (Fig. 4C). We also determined the ability of these scAbs to rescue J774A.1 cells from lysis using a lactate dehydrogenase release assay, observing a strong correlation with cAMP neutralization (Fig. 4D). This is in agreement with findings by Basler et al. (55) that intracellular ATP depletion is sufficient to promote cell lysis. Notably, all neutralizing antibodies identified recognize the RTX domain.
Two Novel Neutralizing Epitopes in the RTX Domain-We next sought to classify neutralizing antibodies based on their recognition of unique or overlapping epitopes. Here, we used a competitive binding ELISA, in which a single phage-displayed antibody was mixed with buffer or a second antibody in the scAb format, added to an ELISA well coated with ACT, and followed by detection of bound phage remaining in the well. Reduced signal in the presence of a second antibody compared with phage antibody alone indicates competition between the two antibodies for the same or overlapping epitopes. Using this approach, the nine neutralizing antibodies were divided into two groups binding nonoverlapping epitopes, although nonneutralizing antibodies recognized four unique epitopes, for a total of six epitopes represented in this study (Fig. 4A).
One representative antibody binding each neutralizing epitope was selected based on sequence uniqueness, expression level, and binding affinity for conversion into a full-length chimeric immunoglobulin, with human IgG1 and constant domains (Fig. 5A). ELISAs with the two RTX constructs helped to further define the epitopes recognized by these two antibodies, named M1H5 and M2B10. Both antibodies bound RTX 751 * with almost the identical affinity as full-length ACT (Fig. 5B), whereas M1H5 bound the shorter RTX 985 domain weakly (Fig. 5C), suggesting that its epitope is not fully contained or properly presented in this construct.
To determine whether the antibodies identified here bind epitopes overlapping with those of previously defined murine monoclonal antibodies (47), we performed a second set of competition ELISAs (Fig. 5D and data not shown). None of the murine antibodies competed with M2B10 or M1H5 for ACT binding, demonstrating that these antibodies recognize previously undescribed neutralizing epitopes in the RTX region.
Together, we observed antibodies binding six nonoverlapping epitopes on RTX, two neutralizing and four non-neutralizing, and all but two required the presence of calcium (Fig. 4A). A representative antibody binding one epitope, M1C12, competes with antibody 1H6, one of the four non-neutralizing RTX binding antibodies discovered by Lee et al. (47). Their remaining three anti-RTX antibodies bind unique epitopes, suggesting a total of at least nine distinct epitopes in the RTX domain. Identification of new epitopes is not unexpected, because neither hybridoma nor phage display technology is exhaustive, and each has trade-offs; the former is low throughput and laborintensive, and the latter does not preserve the pairing between the light and heavy chains and preferentially selects for antibod- . ACT immunization induces a diverse antibody response. A, phylogenetic tree depicting antibody sequence relatedness was generated using the light and heavy variable region amino acid sequences. Neutralizing scAbs are colored gray, with unique shapes denoting recognition of distinct epitopes among this antibody group as determined by competition ELISA. Open symbols denote antibodies whose binding does not depend on the presence of calcium. Antibodies competing with previously characterized monoclonal antibodies are indicated; all antibodies bind RTX except M1C5, M1F11, and M2G5, which bind CAT 400 . B, representative SDS-PAGE of scAbs after purification by IMAC and Superdex S200. Arrow indicates expected size of ϳ40 kDa, 2 g each of M1F11, M1C12, M1H5, and M2B10 scAbs were loaded. C, 21 unique scAbs identified from the immune phage libraries were tested for the ability to neutralize ACT-mediated increases in intracellular cAMP concentration. ACT was incubated with a 160-fold molar excess of scAb protein before adding to J774A.1 cells. Data are reported as the percent relative cAMP, calculated from the total cAMP concentration in the cellular lysate as determined by cAMP ELISA, divided by the protein concentration of the lysate, and normalized to control cells treated only with ACT (open bar). Error bars indicate range of duplicate assays. D, 21 scAbs were evaluated for their ability to rescue J774A.1 macrophages from ACT-induced lysis, using a similar protocol as for cAMP neutralization. Cell lysis was measured via lactate dehydrogenase release using the Cytotox 96 kit (Promega), normalized to control cells treated only with ACT (empty bar), and reported as the percent relative lysis. Error bars indicate standard deviation of triplicate assays.
ies with high bacterial expression levels. Even recently described repertoire mining approaches based on high throughput sequencing of antibodies from individual B cells do not identify the same sequences as phage display (56).
Antibodies Binding Novel RTX-neutralizing Epitopes Disrupt ACT-␣ M  2 Integrin Binding-ACT primarily targets cells bearing the ␣ M  2 receptor under conditions in which the RTX domain assumes a receptor-binding competent conformation mediated by the presence of calcium ions and post-translational acylation (26). Combining this with our observation that the M2B10 and M1H5 antibodies showed a much weaker neutralizing effect at the same 160-fold molar excess when CHO-K1 cells lacking this receptor were used than when J774A.1 cells expressing the receptor were used (Fig. 6A), we hypothesized that these two antibodies act by blocking the interaction between ACT and the ␣ M  2 integrin.
To test this hypothesis, we used flow cytometry to monitor ACT bound to J774A.1 cells in the presence or absence of the M2B10 and M1H5 scAbs. Biotinylated ACT was incubated with a 300-fold molar excess of neutralizing or non-neutralizing scAbs, added to J774A.1 cells and detected with phycoerythrin-conjugated streptavidin by FACS. The M2B10 and M1H5 antibodies significantly reduced binding of ACT-biotin to J774A.1 cells, whereas two non-neutralizing control scAbs (M1F11 and M1C12) had no significant effect (Fig. 6B). This difference was not due to affinity, as all four scAbs have similar affinities (EC 50 ϭ 0.3-0.7 nM; data not shown).
To further confirm that the diminished binding of ACT to J774A.1 cells was due to interference with a specific receptor, a competition ELISA with soluble ␣ M  2 integrin was performed. ACT (0.5 g/ml) was incubated with the M2B10, M1H5, M1F11, or 3D1 antibodies in molar ratios ranging from 5 to 0.5 and transferred to an ␣ M  2 integrin-coated plate. The result was consistent with the flow cytometry assay; M2B10 and M1H5 reduced ACT binding to immobilized ␣ M  2 integrin in a dose-dependent manner, Ͼ90% at a 2-fold molar excess (Fig. 6C). In contrast, 3D1, a neutralizing IgG that blocks translocation of the catalytic domain (57), and 2A12, a neutralizing antibody with unclear mode-ofaction, had no effect. Minimal nonspecific binding was observed under these assay conditions.
To determine whether ACT neutralization occurs in the context of the whole bacterium, we repeated this assay with live B. pertussis instead of purified ACT. According to Gray et al. (28), newly synthesized ACT is responsible for intoxication. Therefore, B. pertussis was washed in PBS to remove any secreted ACT. Bacteria (A 600 ϭ 0.2) added to J774A.1 cells resulted in cAMP levels similar to that induced by 125 ng/ml purified ACT. When the M2B10 and M1H5 but not the nonneutralizing M1F11 or 7C7 (47) antibodies were added with the bacteria, they resulted in dose-dependent reduction of cAMP levels (Fig. 6D). This suggests these antibodies may be able to neutralize ACT in the context of active infection. Microtiter plates were coated with ACT or ACT domains at equimolar concentrations, followed by serial dilution of antibody from 1 nM, and followed by detection with anti-human Fc antibody-HRP conjugate. D, competition ELISA determined that the M2B10 and M1H5 antibodies bind novel nonoverlapping epitopes. A 200-fold molar excess (20 nM) of previously described murine mAbs (3D1, 2A12, 10A1, 2B12, 9D4, 6E1, 7C7, and 1H6) (47) or scAb versions of M1H5 and M2B10 were mixed with M2B10 and M1H5 IgG (0.1 nM) and incubated on ACT-coated ELISA plate, with bound M2B10 or M1H5 detected as above. The absorbance was normalized to that of M2B10 or M1H5 with no competitor; absorbance significantly Ͻ1.0 indicates competition between the antibody pair. FEBRUARY 6, 2015 • VOLUME 290 • NUMBER 6
RTX Domain Is Immunodominant and Elicits Neutralizing
Antibodies-To determine whether any ACT domain dominates the immune response, we immunized mice with ACT and tested the resulting sera 4 weeks after primary immunization for binding to different ACT domains. Interestingly, strong responses were observed for ACT, RTX 985 , and RTX 751 , but no responses were observed for the CAT 400 or HP 1096 * domains (Fig. 7A).
To determine whether this was the result of RTX immunodominance or a lack of immunogenicity by the CAT and HP domains, we immunized additional groups of mice with CAT 400 , HP 1096 *, or RTX 985 . Here, RTX 985 was selected because it shares many features with RTX 751 *, including recognition by at least one neutralizing antibody, yet it lacks the acylation sites rendering it simpler to produce and less likely to engage the native receptor. The calcium concentration in extracellular fluid in the body is 2.2-2.7 mM (59), which is sufficient to support bacterial secretion and folding of ACT during an infection and is expected to support proper folding during immunization.
Mice immunized with ACT or RTX 985 showed high anti-ACT titers 4 weeks after the first injection, which increased after boosting (weeks 6 and 8, Fig. 7B). On the contrary, the catalytic domain was much less immunogenic, reaching a detectable anti-ACT titer of ϳ1500 only after two boosts (week 8; Fig. 7B). This weak response may reflect structural similari-ties between CAT and eukaryotic adenylate cyclases, resulting in immunological tolerance or an evolutionary mechanism to protect key toxin components from neutralizing antibodies (60,61). Only one of the four mice immunized with the hydrophobic domain reacted with ACT, supporting the SEC data that this construct is poorly folded (Fig. 7B). Antibody responses in the three remaining mice were directed toward the MBP fusion, as determined by ELISA (data not shown).
Next, we wanted to determine which domains induced sera best able to neutralize ACT cAMP intoxication activities in vitro with J774A.1 cells. Here, only immunization with RTX 985 elicited sera able to protect cells to a similar extent as sera elicited by full-length ACT (Fig. 7C). Although mechanisms other than receptor blockade may contribute to neutralization, the presence of M2B10-and M1H5-like antibodies in the sera of ACT-or RTX-immunized animals was confirmed by competition ELISA (Fig. 8). Because RTX 985 binds antibody M1H5 weakly, this domain may have sufficient conformational similarities as ACT to induce antibodies binding overlapping but nonidentical epitopes as M1H5.
Finally, to determine whether humans show a similar bias toward RTX recognition, we tested nine serum samples from humans exposed to B. pertussis, selected randomly from a larger collection (62). All nine sera recognized RTX 751 at a similar level as full-length ACT, whereas sera from only one individual bound CAT 400 (Fig. 9). Taken together, these data Fig. 4. B, antibody blockade of ACT binding to J774A.1 cells assessed by FACS. Biotinylated ACT was incubated with a 300-fold molar excess of scAb and added to 4 ϫ 10 5 J774A.1 suspension cells on ice. After washing, bound biotinylated ACT was detected with streptavidin-PE and analyzed by FACS (mean fluorescence noted next to each peak). Controls include untreated cells (Cells only) and cells treated with nonbiotinylated ACT followed by streptavidin-PE (ACT). C, antibody blockade of ACT binding to soluble ␣ M  2 integrin by ELISA. ACT (0.5 g/ml) was incubated with serial dilutions of M2B10, M1H5, 3D1 and 2A12 antibodies at 5-, 1.6-, and 0.5-fold molar excess, before transfer to an ELISA plate coated with murine ␣ M  2 integrin. Bound ACT was detected with rabbit anti-ACT polyclonal antibody followed by HRP-conjugated goat anti-rabbit IgG antibody. D, antibody neutralization of ACT secreted by B. pertussis. Antibodies at 10, 2.5, 0.63, and 0.16 g/ml were incubated with live B. pertussis (A 600 ϭ 0.2) before adding to adherent J774A.1 cells. The resulting intracellular cAMP concentrations were measured, normalized to total protein concentration, and expressed as % relative cAMP.
provide proof-of-concept that RTX dominates the anti-ACT immune response and that the RTX domain can recapitulate the humoral immune responses induced by ACT.
DISCUSSION
The recent surge in pertussis cases, coupled with increasing recognition of the current acellular vaccine's shortcomings, has motivated design of third generation vaccines to prevent pertussis. In humans, even a single dose of whole cell vaccine significantly reduces the risk of illness (63), although in baboons, acellular immunization prevented the severe symptoms of disease but allowed bacterial persistence and transmission to naive animals (5). To design future vaccines that minimize subclinical disease and reduce transmission to susceptible infants, it is crucial to understand the roles played by various protective antigens. ACT has been shown to be protective in animal models and is immunogenic in humans (15,16,40,64,65). Because ACT activities hinder local anti-bacterial immune responses (66 -68), anti-ACT antibodies may protect these cells, indirectly facilitating bacterial elimination. Here, we demonstrated that the RTX domain is able to largely recapitulate the protective humoral immune response induced by ACT in mice and is better expressed and more stable than the intact ACT.
ACT was first discovered based on its ability to increase cAMP levels in neutrophils, inhibiting their anti-bacterial functions, including phagocytosis and respiratory burst, and promoting the early stages of disease establishment (66). At physiologically relevant concentrations (Ͻ50 ng/ml) (69), ACT results in cytotoxicity to the murine macrophage cell line J774A.1 after 2 h of exposure; ACT also causes chlorine efflux from polarized epithelial cells and hinders IL-2 secretion and proliferation of T cells (42). More recently, ACT has been shown to suppress development of an IL-17-mediated immune response that appears key for bacterial clearance (21). As a result, passively administered antibodies blocking ACT function may be able to enhance neutrophil-mediated phagocytosis of opsonized bacteria (67). Murine studies have shown that immunization with ACT alone or as a supplement to the acellular vaccine reduces bacterial colonization, an effect that correlated with increased immunoglobulin levels and a Th1/Th2 cytokine phenotype (16,54). Finally, ACT is a highly conserved antigen, able to induce protective immunity in mouse models against the three dominant Bordetella species (B. pertussis, B. parapertussis, and B. bronchiseptica) (70 -72). Although ACT is unlikely to be highly protective as an isolated antigen, it may be a valuable addition to vaccines.
The complex mechanism by which ACT directly translocates its catalytic domain into the host cell cytosol remains incompletely understood. The current working model consists of three steps (48,73,74). First, in the presence of millimolar levels of calcium and acylation, the RTX domain forms a -barrel that binds the ␣ M  2 receptor on neutrophils or the  2 -containing integrin lymphocyte fusion-associated antigen 1 receptor on T cells through N-linked oligosaccharides. Second, this is followed by insertion of two loops (four predicted transmembrane ␣-helices between residues 502-522 and 565-591) into the host cell membrane, resulting in a translocation intermediate that permeabilizes the membrane to allow an influx of extracellular calcium ions, activating calpain-mediated cleavage of the integrin's talin tether. Third, the ACT-receptor complex is then free to diffuse to cholesterol-rich lipid rafts, which triggers complete translocation of the catalytic domain dependent on residues 375-485 (48). Interestingly, the catalytic domain allows insertion of peptides up to 206 amino acids long for intracellular delivery (75).
This structure-function information provides insight into epitopes required for cellular intoxication and those likely to induce protective responses. For instance, antibodies could block translocation steps, yet only one such antibody has been characterized. Immunization with ACT followed by analysis of the resulting polyclonal serum suggested that antibodies recognizing the RTX domain dominate the response, as 6 of 12 monoclonal antibodies recognized this domain (47,72). The 3D1 antibody binds a conformational epitope between residues FIGURE 7. RTX domain is immunodominant and elicits neutralizing antibodies. A, immunization with ACT yields sera preferentially recognizing the RTX domain. Purified domains were coated at equal moles on microtiter plates, with sera serially diluted starting at 1:200. The average EC 50 for individual domains is shown. Error bars are the standard deviations of the EC 50 value among four mice. B, immunogenicity of purified domains. Mice were immunized with intact ACT or individual domains, with the serum EC 50 for ACT measured by ELISA after the first boost (6 weeks) or second boost (8 weeks). C, sera from mice immunized with the ACT and RTX 985 domain neutralize cAMP intoxication similarly. Sera from each immunization group at a 1:400 dilution were incubated with 125 ng/ml ACT in DMEM before adding to J774A.1 cells. Intracellular cAMP levels were measured by cAMP ELISA, divided by the total protein concentrations, and normalized to cells treated with ACT alone, as in Fig. 4. pre indicates baseline sera collected prior to immunization. Statistical significance was determined by one-way analysis of variance with Tukey's test; **, p Յ 0.01; ***, p Յ 0.001. For all panels, * indicates an acylated domain. n.s. indicates not significant (p Ͼ 0.05).
373 and 399, adjacent to the catalytic domain, trapping a translocation intermediate and preventing complete delivery of the catalytic domain to the host cytosol (48,73), while the antihydrophobic region antibody 2A12 inhibits intoxication and, to a lesser extent, hemolysis, and the anti-RTX antibody 6E1 inhibits only hemolysis (47). Consistent with these prior reports, we observed that the majority of antibodies recovered from phage libraries bind the RTX domain, whereas sera from four mice immunized with the holo-toxin bind RTX only. To the best of our knowledge, no antibodies blocking the ACTreceptor binding, such as M1H5 or M2B10, have been previously described.
A barrier to development of additional technologies based on ACT has been the challenges of recovering monomeric protein from in vitro refolding processes. Standard protocols, including dilution and dialysis from denaturing buffers containing 8 M urea, recover high molecular weight species with variable activity levels. Recently, refolding on a size exclusion column was performed to prevent aggregation of partially folded species and resulted in purification of monomers with very high activity, which depended on the presence of calcium, acylation and molecular confinement (43). Although promising, the yields and scalability of this process are currently unclear. As shown here, the RTX domain retains many structural features, is more readily expressed, and exhibits greater stability than ACT. Furthermore, because RTX lacks the catalytic domain, it has no homology to endogenous proteins and thus poses no potential autoimmune concerns for human use. We evaluated two different RTX constructs, initially RTX 985 lacking the acylation sites and thus simpler to express, and then the larger RTX 751 * retaining the acylation sites. Although both exhibited expected calcium-dependent structural shifts and binding to previously described monoclonal antibodies, RTX 751 * appears superior to RTX 985 in terms of monomericity, stability, and recognition of soluble ␣ M  2 receptor in vitro.
Because RTX 985 is well behaved and binds at least one neutralizing antibody (M2B10), we used this construct for immunization before discovering that RTX 751 and RTX 751 * are better behaved and retain the ability to bind both neutralizing antibodies. Regardless, sera from mice immunized with RTX 985 neutralized cAMP intoxication in vitro as efficiently as sera from ACT-immunized mice (Fig. 7C). This may be because these nonoverlapping epitopes induce antibodies neutralizing toxin via similar receptor blocking mechanisms. Supporting this idea, no synergy was observed when the M2B10 and M1H5 antibodies were combined in vitro (data not shown). Thus, it may be possible to induce a strong neutralizing response when only one of the epitopes is structurally intact. We do not have data on RTX 751 * immunization, but based on the biochemical and biophysical data presented here, we expect it to perform as well or better than RTX 985 .
Structure-function analyses of antibody-antigen interactions can identify residues forming protective epitopes, key information to guide design of immunogens able to elicit neutralizing antibodies. This approach has been employed for complex antigens with high sequence variability and metastable protective epitopes, such as fHBP from Neisseria meningitidis (76), and the F-protein from respiratory syncytial virus (77). We have demonstrated proof-of-concept that constructs based on the RTX domain are better behaved than ACT, while retaining key epitopes and inducing neutralizing antibodies. Using the structures of homologous RTX toxins (58,78), RTX variants with enhanced stability, expression level, and reduced immunogenicity of non-neutralizing epitopes could be engineered to evaluate RTX as a vaccine antigen in murine challenge models. RTX 985 (B). Sera at a 1:200 dilution were incubated on ELISA plates coated with ACT, before addition of 0.1 nM M2B10, M1H5, or M1F11 as a competitor. After incubation, immobilized monoclonal antibody was detected with anti-human Fc-HRP, with absorbance normalized to wells without sera. Lower absorbance indicates a higher concentration of epitope-specific murine antibodies. FIGURE 9. RTX dominates the human immune response to ACT. Nine serum samples from humans exposed to B. pertussis were tested for reactivity to the catalytic domain, RTX 751 , or intact ACT by ELISA. Absorbance values at a 100-fold dilution of the sera were normalized to that of ACT at the same dilution. A paired t test was used to determine the statistical significance between signals for CAT and RTX binding domains. ***, p Յ 0.001. | 2018-04-03T04:26:35.457Z | 2014-12-10T00:00:00.000 | {
"year": 2014,
"sha1": "afb695461e5414218138947c4943228f7a1460a1",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/290/6/3576.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "0aa14981f6389db81cc6681c8f1847d4e18a6824",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
18611132 | pes2o/s2orc | v3-fos-license | Extremal K\"ahler metrics on projective bundles over a curve
Let $M=P(E)$ be the complex manifold underlying the total space of the projectivization of a holomorphic vector bundle $E \to \Sigma$ over a compact complex curve $\Sigma$ of genus $\ge 2$. Building on ideas of Fujiki, we prove that $M$ admits a K\"ahler metric of constant scalar curvature if and only if $E$ is polystable. We also address the more general existence problem of extremal K\"ahler metrics on such bundles and prove that the splitting of $E$ as a direct sum of stable subbundles is necessary and sufficient condition for the existence of extremal K\"ahler metrics in sufficiently small K\"ahler classes. The methods used to prove the above results apply to a wider class of manifolds, called {\it rigid toric bundles over a semisimple base}, which are fibrations associated to a principal torus bundle over a product of constant scalar curvature K\"ahler manifolds with fibres isomorphic to a given toric K\"ahler variety. We discuss various ramifications of our approach to this class of manifolds.
Introduction
Extremal Kähler metrics were first introduced and studied by E. Calabi in [13,14]. , and s g denotes the scalar curvature of g. As shown in [13], g is extremal if and only if the symplectic gradient K := grad ω s g = J grad g s g of s g is a Killing vector field (i.e. L K g = 0) or, equivalently, a (real) holomorphic vector field (i.e. L K J = 0). Extremal Kähler metrics include Kähler metrics of constant scalar curvature -CSC Kähler metrics for short -in particular Kähler-Einstein metrics. Clearly, if the identity component Aut 0 (M, J) of the automorphism group of (M, J) is reduced to {1}, i.e. if (M, J) has no non-trivial holomorphic vector fields, any extremal Kähler metric is CSC, whereas a CSC Kähler metric is Kähler-Einstein if and only if Ω is a multiple of the (real) first Chern class c 1 (M, J). In this paper, except for Theorem 1 below, we will be mainly concerned with extremal Kähler metrics of non-constant scalar curvature.
The Lichnerowicz-Matsushima theorem provides an obstruction to the existence of CSC Kähler metrics on (M, J) in terms of the structure of Aut 0 (M, J), which must be reductive whenever (M, J) admits a CSC Kähler metric; in particular, for any CSC Kähler metric g, the identity component Isom 0 (M, g) of the group of isometries of (M, g) is a maximal compact subgroup of (M, J) [55,49]. The latter fact remains true for any extremal Kähler metric (although Aut 0 (M, J) is then no longer reductive in general) and is again an obstruction to the existence of extremal Kähler metrics [14,48]. Another well-known obstruction to the existence of CSC Kähler metrics within a given class Ω involves the Futaki character [30,14], of which a symplectic version, as developed in [47], will be used in this paper (cf. Lemma 2). Furthermore, it is now known that extremal over a semisimple base, which were introduced in our previous paper [4]. Section 3 of this paper is devoted to recalling the main features of this class of manifolds and proving a general existence theorem (Theorem 3).
The simplest situation considered in this paper is the case of a projective bundle over a curve, i.e. a compact Riemann surface. In this case, the existence problem for CSC Kähler metrics can be resolved. Remark 1. The 'if' part follows from the theorem of Narasimhan and Seshadri: if E is a polystable bundle of rank m over a compact curve (of any genus), then E admits a hermitian-Einstein metric which in turn defines a flat P U (m)-structure on P (E) and, therefore, a family of locally-symmetric CSC Kähler exhausting the Kähler cone of P (E), see e.g. [40], [28]. Note also that in the case when P (E) fibres over CP 1 , E splits as a direct sum of line bundles, and the conclusion of Theorem 1 still holds by the Lichnerowicz-Matsushima theorem, see e.g. [5,Prop. 3].
Remark 2. On all manifolds considered in Theorem 1, rational Kähler classes form a dense subset in the Kähler cone. By LeBrun-Simanca stability theorem [46,Thm. A] and Lemma 3 below it is then sufficient to consider the existence problem only for an integral Kähler class (or polarization). In this setting, it was shown by Ross-Thomas that any projective bundle M = P (E) over a compact complex curve of genus ≥ 1 is K-poly(semi)stable (with respect to some polarization) if and only if E is poly(semi)stable [61,Thm. 5.13]. In view of this theorem, the "only if" part of Theorem 1 can therefore be alternatively recovered -for any genus ≥ 1-as a consequence of recent papers by T. Mabuchi [53,54].
By the de Rham decomposition theorem, an equivalent differential geometric formulation of Theorem 1 is that any CSC Kähler metric on (M, J) must be locally symmetric (see [28,Lemma 8] and [44]). It is in this form that we are going to achieve our proof of Theorem 1, building on the work of A. Fujiki [28]. In fact, [28] already proves Theorem 1 in the case when the underlying bundle E is simple, modulo the uniqueness of CSC Kähler metrics, which is now known [15,20,52].
In view of this, the main technical difficulty in proving Theorem 1 is related to the existence of automorphisms on (M, J) = P (E) → Σ. The way we proceed is by fixing a maximal torus T (of dimension ℓ) in the identity component Aut 0 (M, J) of the automorphism group, and showing that it induces a decomposition of E = ℓ i=0 E i as a direct sum of ℓ + 1 indecomposable subbundles E i , such that T acts by scalar multiplication on each E i (see Lemma 1 below). By computing the Futaki invariant of the S 1 generators of T, we show that the slopes of E i must be all equal, should a CSC Kähler metric exist on P (E) (see Lemma 3 below). 2 Then, following the proof of [28,Theorem 3], we consider small analytic deformations E i (t) of E i = E i (0) with E i (t) being stable bundles for t = 0. This induces a T-invariant Kuranishi family (M, J t ) ∼ = P (E(t)), where E(t) = ℓ i=0 E i (t), with (M, J) being the central fibre (M, J 0 ). We then generalize in Lemma 4 the stability-under-deformations results of [45,46,29], by using the crucial fact that our family is invariant under a fixed maximal torus. This allows us to show that any CSC (or more generally extremal) Kähler metric ω 0 on (M, J 0 ) can be included into a smooth family ω t of extremal Kähler metrics on (M, J t ). As E(t) is polystable for t = 0, the corresponding extremal Kähler metric ω t must be locally symmetric, by the uniqueness results [15,20,51]. This implies that ω 0 is locally symmetric too, and we conclude as in [28,Lemma 8].
We next consider the more general problem of existence of extremal Kähler metrics on the manifold (M, J) = P (E) → Σ. Notice that the deformation argument explained above is not specific to the CSC case, but also yields that any extremal Kähler metric ω 0 on (M, J) = P (E) → Σ can be realized as a smooth limit (as t → 0) of extremal Kähler metrics ω t on (M, J t ) = P (E(t)), where E(t) = ℓ i=0 E i (t) with E i (t) being stable (and thus projectively-flat and indecomposable) bundles over Σ for t = 0, and where ℓ is the dimension of a maximal torus T in the identity component of the group of isometries of ω 0 . Unlike the CSC case (where E i (t) must all have the same slope and therefore E(t) is polystable), the existence problem for extremal Kähler metrics on the manifolds (M, J t ) is not solved in general. The main working conjecture here is that such a metric ω t must always be compatible with the bundle structure (in a sense made precise in Sect. 3 below). As we observe in Sect. 5, if this conjecture were true it would imply that the initial bundle E must also split as a direct sum of stable subbundles (and that ω 0 must be compatible too). We are thus led to believe the following general statement would be true. Remark 3. This conjecture turns out to be true in the case when E is of rank 2 and Σ is a curve of any genus, cf. [7] for an overview.
A partial answer to Conjecture 1 is given by the following result which deals with Kähler classes far enough from the boundary of the Kähler cone.
Theorem 2. Let p : P (E) → Σ be a holomorphic projective bundle over a compact complex curve Σ of genus ≥ 2 and [ω Σ ] be a primitive Kähler class on Σ. Then there exists a k 0 ∈ R such that for any k > k 0 the Kähler class Ω k = 2πc 1 (O(1) E ) + kp * [ω Σ ] on (M, J) = P (E) admits an extremal Kähler metric if and only if E splits as a direct sum of stable subbundles.
In the case when E decomposes as the sum of at most two indecomposable subbundles, 3 the conclusion holds for any Kähler class on P (E).
The proof of Theorem 2, given in Section 5, will be deduced from a general existence theorem established in the much broader framework of rigid and semisimple toric bundles introduced in [4], whose main features are recalled in Section 3 below. As explained in Remark 7, this class of manifolds is closely related to the class of multiplicity-free manifolds recently discussed in Donaldson's paper [25]. Our most general existence result can be stated as follows.
Theorem 3. Let (g, ω) be a compatible Kähler metric on M , where M is a rigid semisimple toric bundle over a CSC locally product Kähler manifold (S, g S , ω S ) with fibres isomorphic to a toric Kähler manifold (W, ω W , g W ), as defined in Sect. 3. Suppose, moreover, that the fibre W admits a compatible extremal Kähler metric. Then, for any k ≫ 0, the Kähler class Ω k = [ω] + kp * [ω S ] admits a compatible extremal Kähler metric.
The terms of this statement, in particular the concept of a compatible metric, are introduced in Section 3. Its proof, also given in Section 3, uses in a crucial way the stability under small perturbations of existence of compatible extremal metric (Proposition 2) which constitutes the delicate technical part of the paper. Another important consequence of Proposition 2 is the general openness theorem given by Corollary 1.
A non-trivial assumption in the hypotheses of Theorem 3 above is the existence of compatible extremal Kähler metric on the (toric) fibre W . This is solved when W ∼ = CP r and M = P (E) with E being holomorphic vector bundle of rank r + 1, which is the sum of ℓ + 1 projectively-flat hermitian bundles, as a consequence of the fact that the Fubini-Study metric on CP r admits a non-trivial hamiltonian 2-form of order ℓ ≤ r (cf. [3]). We thus derive in Sect. 4 the following existence result.
Theorem 4. Let p : P (E) → S be a holomorphic projective bundle over a compact Kähler manifold (S, J S , ω S ). Suppose that (S, J S , ω S ) is covered by the product of constant scalar curvature Kähler manifolds (S j , ω j ), j = 1, . . . , N , and E = ℓ i=0 E i is the direct sum of projectively-flat hermitian bundles. Suppose further that for each i S j (for some constants p ji ). Then there exists a k 0 ∈ R such that for any k > k 0 the Kähler class vanishes. However, in the case when E is not simple (i.e. has automorphisms other than multiples of identity) the condition F Ω k ≡ 0 is not in general satisfied for these classes, see [5,Sect. 3.4 & 4.2] for specific examples. Thus, studying the existence of extremal rather than CSC Kähler metrics in Ω k is essential. Another useful remark is that although the hypothesis in Theorem 4 that E is the sum of projectively-flat hermitian bundles over S is rather restrictive when S is not a curve, our result strongly suggests that considering E to be a direct sum of stable bundles (with not necessarily equal slopes) over a CSC Kähler base S would be the right general setting for seeking extremal Kähler metrics in Ω k = 2πc 1 In the final Sect. 6, we develop further our approach by extending the leading conjectures [21,66] about existence of extremal Kähler metrics on toric varieties to the more general context of compatible Kähler metrics that we consider in this paper. Thus motivated, we explore in a greater detail examples when M is a projective plane bundle over a compact complex curve Σ. We show that when the genus of Σ is greater than 1, Kähler classes close to the boundary of the Kähler cone of M do not admit any extremal Kähler metric. In Appendix A, we introduce the notion of a compatible extremal almost Kähler metric (the existence of which is conjecturally equivalent to the existence of a genuine extremal Kähler metric) and show that if the genus of Σ is 0 or 1, then any Kähler class on M admits an explicit compatible extremal almost Kähler metric.
The first author was supported in part by an NSERC discovery grant, the second author by an EPSRC Advanced Research Fellowship and the fourth author by the Union College Faculty Research Fund.
Proof of Theorem 1
As we have already noted in Remark 1, the 'if' part of the theorem is well-known. So we deal with the 'only if' part.
Let (M, J) = P (E), where π : E → Σ is a holomorphic vector bundle of rank m over a compact curve Σ of genus ≥ 2. We want to prove that E is polystable if (M, J) admits a CSC Kähler metric ω. Also, by Remark 1, we will be primarily concerned with the case when the connected component of the identity Aut 0 (M, J) of the automorphisms group of (M, J) is not trivial. Note that, as the normal bundle to the fibres of P (E) → Σ is trivial and the base is of genus ≥ 2, the group Aut 0 (M, J) reduces to H 0 (Σ, P GL(E)), the group of fibre-preserving automorphisms of E, with Lie algebra h(M, J) ∼ = H 0 (Σ, sl(E)). As any holomorphic vector field in h(M, J) has zeros, the Lichnerowicz-Matsushima theorem [49,55] implies h(M, J) = i(M, g) ⊕ Ji(M, g), where i(M, g) is the Lie algebra of Killing vector fields of (M, J, ω). Thus, Aut 0 (M, . We will fix from now on a maximal torus T (of dimension ℓ) in the connected component of the group of isometries of (M, J, ω). Note that T is a maximal torus in Aut 0 (M, J) too, by the Lichnerowicz-Matsushima theorem cited above.
We will complete the proof in three steps, using several lemmas. We start with following elementary but useful observation which allows us to relate a maximal torus T ⊂ Aut 0 (M, J) with the structure of E. Lemma 1. Let (M, J) = P (E) → S be a projective bundle over a compact complex manifold S, and suppose that the group H 0 (S, P GL(E)) of fibre-preserving automorphisms of (M, J) contains a circle S 1 . Then E decomposes as a direct sum E = ℓ i=0 E i of subbundles E i with ℓ ≥ 1, such that S 1 acts on each factor E i by a scalar multiplication.
In particular, any maximal torus T ⊂ H 0 (S, P GL(E)) arises from a splitting as above, with E i indecomposable and ℓ = dim(T).
Proof. Any S 1 in H 0 (S, P GL(E)) defines a C × holomorphic action on (M, J), generated by an element Θ ∈ h(M, J) ∼ = H 0 (S, sl(E)). For any x ∈ S, exp (tΘ(x)), t ∈ C generates a C × subgroup of SL(E x ) and so Θ(x) must be diagonalizable. The coefficients of the characteristic polynomial of Θ(x) are holomorphic functions of x ∈ S, and therefore are constants. It then follows that Θ gives rise to a direct sum decomposition E = ℓ i=0 E i where E i correspond to the eigenspaces of Θ at each fibre.
The second part of the lemma follows easily.
Because of this result and the discussion preceding it, we consider the decomposition E = ℓ i=0 E i as a direct sum of indecomposable subbundles over Σ, corresponding to a fixed, maximal ℓ-dimensional torus T in the connected component of the isometry group of (g, J, ω). We note that the isometric action of T is hamiltonian as T has fixed points (on any fibre).
Our second step is to understand the condition that that the Futaki invariant [30], with respect to the Kähler class Ω = [ω] on (M, J), restricted to the generators of T is zero. Hodge theory implies that any (real) holomorphic vector field with zeros on a compact Kähler 2m-manifold (M, J, ω, g) can be written as X = grad ω f − Jgrad ω h, where f + ih is a complex-valued smooth function on M of zero integral (with respect to the volume form ω m ), called the holomorphy potential of X, and where grad ω f stands for the hamiltonian vector field associated to a smooth function f via ω. Then the (real) Futaki invariant associates to X the real number where Scal g is the scalar curvature of g. Futaki shows [30] that F ω (X) is independent of the choice of ω within a fixed Kähler class Ω, and that (trivially) F ω (X) = 0 if Ω contains a CSC Kähler metric. A related observation will be useful to us: with a fixed symplectic form ω, the Futaki invariant is independent of the choice of compatible almost complex structure within a path component.
Lemma 2.
Let J t be a smooth family of integrable almost-complex structures compatible with a fixed symplectic form ω, which are invariant under a compact group G of symplectomorphisms acting in a hamiltonian way on the compact symplectic manifold (M, ω). Denote by g ω ⊂ C ∞ (M ) the finite dimensional vector space of smooth functions f such that X = grad ω f ∈ g, where g denotes the Lie algebra of G. 4 Then the L 2 -orthogonal projection of the scalar curvature Scal gt of (J t , ω, g t ) to g ω is independent of t.
Proof. By definition, any f ∈ g ω defines a vector field X = grad ω f which is in g, and is therefore Killing with respect to any of the Kähler metrics g t = ω(·, J t ·). To prove our claim, we have to show that M f Scal gt ω m is independent of t. Using the standard variational formula for scalar curvature (see e.g. [11,Thm. 1.174]), we compute where h denotes d dt g t , while ∆, δ and r are the riemannian laplacian, the codifferential and the Ricci tensor of g t , respectively. Note that to get the last equality, we have used the fact that h is J t -anti-invariant (as all the J t 's are compatible with ω) while the metric and the Ricci tensor are J t -invariant (on any Kähler manifold). Integrating against f , we obtain where D is the Levi-Civita connection of g t ; however, as f is a Killing potential with respect to the Kähler metric (g t , J t ), it follows that Ddf is J t -invariant, and therefore M f Scal gt ω m is independent of t. Remark 5. One can extend Lemma 2 for any smooth family of (not necessarily integrable) G-invariant almost complex structures J t compatible with ω. Then, as shown in [47], the L 2 -projection to g ω of the hermitian scalar curvature of the almost Kähler metric (ω, J t ) (see Appendix A for a precise definition) is independent of t. This gives rise to a symplectic Futaki invariant associated to a compact subgroup G of the group of hamiltonian symplectomorphisms of (M, ω).
Lemma 2 will be used in conjunction with the Narasimhan-Ramanan approximation theorem (see [57,Prop. 2.6] and [58,Prop. 4.1]), which implies that any holomorphic vector bundle E over a compact curve Σ of genus ≥ 2 can be included in an analytic family of vector bundles E t , t ∈ D ε (where D ε = {t ∈ C, |t| < ε}) over Σ, such that E 0 := E and E t is stable for t = 0. Such a family will be referred to in the sequel as a small stable deformation of E. Proof. We take some Kähler form ω on (M, J) = P (E) and, by averaging it over S 1 , we assume that ω is S 1 -invariant. As the S 1 -action has fixed points, the corresponding real vector field X is J-holomorphic and ω-hamiltonian, i.e., X = grad ω f for some smooth function f with M f ω m = 0.
We now consider small stable deformations U t , V t , t ∈ D ε of U and V , and put E t = U t ⊕ V t . Considering the projective bundle P (E t ), we obtain a non-singular Kuranishi family (M, J t ) with J 0 = J. By the Kodaira stability theorem (see e.g. [41]) one can find a smooth family of Kähler metrics (ω t , J t ) with ω 0 = ω. Using the vanishing of the Dolbeault groups H 2,0 (M, J t ) = H 0,2 (M, J t ) = 0, Hodge theory implies that by decreasing the initial ε if necessary, we can assume [ω t ] = [ω] in H 2 dR (M ). Note that any J t is S 1 -invariant so, by averaging over S 1 , we can also assume that ω t is S 1 -invariant. Applying the equivariant Moser lemma, one can find S 1 -equivariant diffeomorphisms, Φ t , such that Φ * t ω t = ω. Considering the pullback of J t by Φ t , the upshot from this construction is that we have found a smooth family of integrable complex structures J t such that: (1) each J t is compatible with the fixed symplectic form ω and is S 1 -invariant; (2) J 0 = J; (3) for t = 0, the complex manifold (M, J t ) is equivariantly biholomorphic to P (U t ⊕ V t ) → Σ with U t and V t stable (and therefore projectively-flat) hermitian bundles.
If U and V have equal slopes, then E t = U t ⊕ V t becomes polystable for t = 0, and (M, J t ) has a CSC Kähler metric in each Kähler class. It follows that the Futaki invariant of X on (M, J t , ω) is zero for t = 0.
Conversely, if U and V have different slopes, it is shown in [5,Sect. 3.2] that the Futaki invariant of X is different from zero for any Kähler class on (M, J t ), t = 0.
We conclude using Lemma 2.
This lemma shows that all the factors in the decomposition E = ℓ i=0 E i must have equal slope, should a CSC Kähler metric exists. As in the proof of Lemma 3, we consider small stable deformations E i (t) of E i and our assumption for the slopes insures that E(t) = ℓ i=0 E i (t) is polystable for t = 0; furthermore, by acting with T-equivariant diffeomorphisms, we obtain a smooth family of T-invariant complex structures J t compatible with ω, such that for t = 0, the complex manifold (M, J t ) has a locally-symmetric CSC Kähler metric in each Kähler class; by the uniqueness of the extremal Kähler metrics modulo automorphisms [15,52], any extremal Kähler metric on (M, J t ) is locallysymmetric when t = 0. The third step in the proof of Theorem 1 is then to show that the initial CSC Kähler metric (J 0 , ω) must be locally symmetric too. This follows from the next technical result, generalizing arguments of [46,29]. Lemma 4. Let J t be a smooth family of integrable almost-complex structures compatible with a symplectic form ω on a compact manifold M , which are invariant under a torus T of hamiltonian symplectomorphisms of (M, ω). Suppose, moreover, that (J 0 , ω) define an extremal Kähler metric and that T is a maximal torus in the reduced automorphism group of (M, J 0 ). Then there exists a smooth family of extremal Kähler metrics (J t , ω t , g t ), defined for sufficiently small t, such that ω 0 = ω and [32,45] on any compact Kähler manifold (M, J), the reduced automorphism group, Aut 0 (M, J), is the identity component of the kernel of the natural group homomorphism from Aut 0 (M, J) to the Albanese torus of (M, J); it is also the connected closed subgroup of Aut 0 (M, J), whose Lie algebra h 0 (M, J) ⊂ h(M, J) is the ideal of holomorphic vector fields with zeros.
We denote by t the Lie algebra of T and by h (resp. h 0 ) the Lie algebra of the complex automorphism group (resp. reduced automorphism group) of the central fibre (M, J 0 ). As T acts in a hamiltonian way, we have t ⊂ h 0 . By assumption, t is a maximal abelian subalgebra of i 0 (M, g 0 ) = i(M, g 0 )∩h 0 , where i(M, g 0 ) is the Lie algebra of Killing vector fields of (M, J 0 , ω, g 0 ).
As in the Lemma 2 above, we let t ω ⊂ C ∞ (M ) be the finite dimensional space of smooth functions which are hamiltonians of elements of t. As the Kähler metric (J 0 , ω, g 0 ) is extremal (by assumption), its scalar curvature Scal g 0 is hamiltonian of a Killing vector field X = grad ω (Scal g 0 ) ∈ i 0 (M, g 0 ). Clearly, such a vector field is central, so X ∈ t (by the maximality of t) and therefore Scal g 0 ∈ t ω .
For any T-invariant Kähler metric (J,ω,g) on M , we denote by tω the corresponding space of Killing potentials of elements of t (noting that any X ∈ t has zeros, so that T belongs to the reduced automorphism group of (M,J )), and by Πω the L 2 -orthogonal projection of smooth function to tω, with respect to the volume formω m . Obviously, if the scalar curvature Scalg ofg belongs to tω, theng is extremal.
Following [46], let C ∞ ⊥ (M ) T denote the Fréchet space of T-invariant smooth functions on M , which are L 2 -orthogonal (with respect to the volume form ω m ) to t ω , and let U be an open set in R × C ∞ ⊥ (M ) T of elements (t, f ) such that ω + dd c t f is Kähler with respect to J t (here d c t denotes the d c -differential corresponding to J t ). We then consider the map Ψ : whereω := ω + dd c t f and Scalg is the scalar curvature of the Kähler metricg defined by (J t ,ω). One can check that this map is C 1 and compute (as in [45], by also using (1)) that its differential at (0, 0) ∈ U is where D and δ are respectively the Levi-Civita connection and the codifferential of g 0 , h = dgt dt t=0 and (Ddf ) − denotes the J 0 -anti-invariant part of Ddf . Note that L(f ) := δδ((Ddf ) − ) is a 4-th order (formally) self-adjoint T-invariant elliptic linear operator (known also as the Lichnerowicz operator, see e.g. [32]). When acting on smooth functions, L annihilates t ω (because any Killing potential f satisfies (Ddf ) − = 0). It then follows that L leaves C ∞ ⊥ (M ) T invariant and, by standard elliptic theory, we obtain an L 2 -orthogonal splitting C ∞ ⊥ (M ) T = ker(L) ⊕ im(L). However, any smooth Tinvariant function f in ker(L) gives rise to a Killing field X = grad ω f in the centralizer of t ⊂ i 0 (M, g 0 ). As t is a maximal abelian subalgebra of i 0 (M, g 0 ) we must have X ∈ t, i.e. f ∈ t ω . It follows that the kernel of L restricted to C ∞ ⊥ (M ) T is trivial, and therefore L is an isomorphism of the Fréchet space C ∞ ⊥ (M ) T . This understood, we are in position to apply standard arguments, using the implicit function theorem for the extension of Ψ to the Sobolev completion together with the regularity result for extremal Kähler metrics, precisely as in [45,46,29]. We thus obtain a family (t, ω t ) of smooth, T-invariant extremal Kähler metrics (J t , ω t ) (defined for t in a small interval about 0) which converge to the initial extremal Kähler metric (J 0 , ω) (in any Sobolev space L 2,k (M ), k ≫ 1, and hence, by the Sobolev embedding, in C ∞ (M )).
The uniqueness argument thus also applies at t = 0, and the initial metric is locally symmetric. We can now conclude the proof of Theorem 1 by a standard argument using the de Rham decomposition theorem (see [28,Lemma 8] and [44]). This realizes the fundamental group of Σ as a discrete subgroup group of isometries of the hermitian symmetric space CP m−1 ×H and thus defines a projectively flat structure on P (E) → Σ.
Rigid toric bundles and the generalized Calabi construction
In this section, we recall the notion of a semisimple and rigid isometric hamiltonian action of a torus on a compact Kähler manifold (M, g, J, ω) (introduced in [4]), as well as the construction of compatible Kähler metrics (given by the generalized Calabi construction of [4]) on such manifolds. This provides a framework for the search of extremal compatible metrics on rigid toric bundles over a semisimple base, which parallels (and extends) the theory of extremal toric metrics developed in [21,22,24]. We then apply the construction of this section to projective bundles of the form P (E 0 ⊕· · ·⊕E ℓ ) → S, where E i is a projectively-flat hermitian bundle over a Kähler manifold (S, ω S ). In all cases, we prove the existence of compatible extremal Kähler metrics in "small" Kähler classes, cf. Theorems 3 and 4.
In other words, the action is rigid if, for any two generators X ξ , X η of the actionξ, η ∈ t -the smooth function g(X ξ , X η ) is constant on the levels of the momentum map z.
Henceforth, we suppose that M is compact. Obvious and well-known examples of rigid toric actions are provided by toric Kähler manifolds. A key feature of toric Kähler manifolds is actually shared by rigid torus actions, namely the fact that the image of M by the momentum map is a Delzant polytope ∆ ⊂ t * (see [4,Prop. 4]) and that the regular values of z are the points in the interior ∆ 0 . Thus, to any compact Kähler manifold endowed with a rigid isometric hamiltonian action of an ℓ-torus T, one can associate a smooth compact toric symplectic 2ℓ-manifold (V, ω V , T), via the Delzant correspondence [16]. Note that the Delzant construction also endows V with the structure of a complex toric variety (V, J V , T c ).
Another smooth variety is associated to a rigid torus action, namely the complexor stable -quotientŜ of (M, J) by the complexified action of T c . For a general torus action,Ŝ is a 2(m − ℓ)-dimensional complex orbifold, but when the torus action is rigid, it is shown in [4,Prop. 5] In either case, by a convenient abuse of notation, we call M or any complex manifold T c -equivariantly biholomorphic to M , a rigid toric bundle. In the case when there is no blow-down, then M =M is a genuine fibre bundle overŜ with fibre the toric manifold V , associated to a principal T-bundle overŜ, whereas, in the general case, the Kähler metric g on M will be described, via its pullback onM , in terms of the toric bundle structure ofM , thus allowing to introduce the notion of compatible Kähler metrics on a general rigid toric bundle, cf. §3. 3.
We now specialize the above construction, in particular the blow-down procedure, in the case when the (rigid) torus action is, in addition, semisimple, according to the following general definition. Definition 2. An isometric hamiltonian torus action on Kähler manifold (M, g, J, ω) is semisimple if for any regular value z 0 of the momentum map, the derivative with respect to z of the family ωŜ(z) of Kähler forms on the complex (stable) quotientŜ of (M, J) (induced by the symplectic quotient construction at z) is parallel and diagonalizable with respect to ωŜ(z 0 ). 5 For a semisimple and rigid isometric hamiltonian torus action the Kähler metrics ωŜ(z), parametrized by z in ∆ 0 , on the stable quotientŜ are simultaneously diagonal and have the same Levi-Civita connection. There then exists a Kähler metric (gŜ , ωŜ) onŜ, such that the Kähler forms ωŜ(z) are simultaneously diagonalizable with respect to gŜ and parallel with respect to the Levi-Civita connection of gŜ, so that the universal cover of (Ŝ, ωŜ) is a product N j=1 (S j , ω j ) of Kähler manifolds (S j , ω j ) of dimensions 2d j , j = 1, . . . , N , in such a way that the restriction to S j of the pullback of ωŜ(z) is a multiple of ω j by an affine function of z. Moreover, to any face of codimension one, Conversely, letŜ be a compact Kähler manifold, whose universal cover is a Kähler where each ω b is the Kähler form of a Fubini-Study metric of holomorphic sectional curvature equal to 2 (A or B may possibly be empty). We moreover assume that π 1 (Ŝ) acts diagonally by Kähler isometries on the universal cover, so thatŜ has the structure of a fibre product of flat unitary CP d b -bundles, b ∈ B, over a compact Kähler manifold S, covered by the product a∈A (S a , ω a ). Let T a real (compact) torus of dimension ℓ, of Lie algebra t, ∆ be a Delzant polytope in the dual space t * , and (V, J V , ω V , T) a T-toric Kähler 2ℓ-manifold, with momentum polytope ∆. Among the n codimension one faces F i , i = 1, . . . , n, of ∆, with inward normals u i in t, we distinguish a subset {F b : b ∈ B} (possibly empty) with inward normals u b . LetP be a principal T-bundle overŜ, such that −2πc 1 (P ), as a t-valued 2-form, is diagonalizable with respect to the local product structure ofŜ, i.e. is of the form where all p j are (constant) elements of T and, we recall, u b denotes the inward normal of the distinguished codimension one face of ∆ associated to the factor (CP d b , ω b ) in the universal cover ofŜ. We denote byM =P × T V the associated toric bundle overŜ.
With these data in hand, the blow-down process relies on the general restricted toric quotient construction, introduced in our previous work [4], which, in the current situation, goes as follows.
Consider the product manifold , and the corresponding bundle of toric Kähler manifoldsŴ = P 0 × T V over S 0 , with momentum map z :Ŵ → ∆ ⊂ t * . Then the restricted toric quotient construction associates toŴ a toric manifold (W, J W , T ), of the same dimension 2(ℓ + b∈B d b ) asŴ , obtained fromŴ by collapsing z −1 (F b ), b ∈ B. Recall that, whereas V is obtained, via the Delzant construction, as a symplectic reduction of C n by the (n − ℓ)-dimensional torus G, kernel of the map (a 1 , . . . , a n ) mod Z n → n i=1 a i u i mod Λ from T n = R n /Z n onto T, W is similarly obtained as a symplectic reduction of ⊕ b∈B C d b +1 ⊕ C n−|B| by G ⊂ T n , via the natural diagonal action 5 In general,Ŝ is well-defined as a complex orbifold for z in the connected component Uz 0 of z0 in the regular values.
of T n on ⊕ b∈B C d b +1 ⊕ C n−|B| (where |B| is the cardinality of B); in this picture, the (ℓ + b∈B d b )-dimensional torus T acting on W is identified with the quotient T n+ b∈B d b /G, whereas the restricted subtorus T is identified with the subtorus T n /G of T , cf. [4, Sect. 1.6] for details, in particular for the identification of W with a blow-down ofŴ . 6 We denote by b :Ŵ → W the (T, T )-equivariant blow-down map ofŴ onto W , along the inclusion T ⊂ T . We then have the following definition. Using this construction, the blow-down was introduced in [4] under the simplifying assumption that the local product structure ofŜ consists of global factors for b ∈ B (i.e.Ŝ → S is a trivial fibre bundle). In particular, the blow-down was expressed in [4,Sect. 2.5] in terms of the universal covers of M andM . In fact, in this case there exists a diagonalizable principal T-bundle P over S with first Chern class 2πc 1 (P ) = a∈A [ω a ] ⊗ p a and we can identifyM =P × T V ∼ = P × TŴ . Then, M := P × T W clearly satisfies the definition 3 above.
We now illustrate the blow-down construction in the case of projective bundles.
3.2.
Projective bundles as rigid toric bundles. In this paragraph, we specialize the previous discussion to the case when the Delzant polytope ∆ is a simplex in t * ∼ = R ℓ , with codimension one faces F 0 , . . . F ℓ ; the associated complex toric variety V is then the complex projective space V ∼ = (CP ℓ , T c ) andM is then T c -equivariantly biholomorphic to a CP ℓ -bundle over a Kähler manifoldŜ of the type discussed in §3.1; sinceM comes from a principal T c -bundle,M is actually of the form P (L 0 ⊕ · · · ⊕ L ℓ ) →Ŝ, where L i are hermitian holomorphic line bundles (the T c action is then induced by scalar multiplication on L i ).
According to the discussion in §3.1, a blow-down process onM is encoded by the realization ofŜ as a fibre product of flat projective unitary CP d b -bundles over a Kähler manifold S. We here only consider flat projective bundles of the form P (E), where E is a rank r + 1 projectively-flat hermitian vector bundle over S (in general the obstruction to the existence of E is given by a torsion element of H 2 (S, O * ); in particular, such an E always exists if S = Σ is a Riemann surface). We then haveŜ = P (E 0 )× S · · ·× S P (E ℓ ) → S, where each E i → S is a projectively-flat hermitian bundle of rank d i + 1, and we assume that In this case, we have thatM = trivial over the other factors ofŜ -whereas M = P E 0 ⊕ · · · ⊕ E ℓ → S, the blow-down process being, over each point of S, the standard blow-down process from P ⊕ ℓ to P (V ), for any splitting V = ⊕ ℓ j=0 V j of a complex vector space V into a direct sum of ℓ + 1 (d j + 1)-dimensional vector subspaces, d j > 0, ℓ > 0, cf. [4] To go further into the geometry of the situation, we next fix a hermitian metric on E i whose Chern connection has curvature Ω i ⊗ Id E i with where p a = (p 1a , . . . , p ℓa ) ∈ R ℓ ∼ = t will be the constants of our construction. Letθ i be a connection 1-form for the principal U (1)-bundle overŜ, associated to the line bundle where ω i pulls back to the Fubini-Study metric of scalar curvature 2d i (d i + 1) on the universal cover of P (E i ) when d i ≥ 1, and is zero when d i = 0. We then putθ j =θ j −θ 0 to define a principal T-connectionθ = (θ 1 , . . . ,θ ℓ ) associated with the principal T c -bundle M 0 overŜ.
3.3.
The generalized Calabi construction on rigid toric bundles over a semisimple base. As recalled in §3.1, any compact Kähler manifold (M, J, ω, g) endowed with a rigid and semisimple isometric hamiltonian action of an ℓ-torus T, is equivariantly biholomorphic to a rigid toric bundle over a semisimple base, obtained by a blow-down process from an associated bundle in T-toric manifoldsM . It still remains to describe Kähler structure (g, ω) on M : according to [4,Thm. 2], this is done by using the generalized Calabi construction which we now recall, following [4], with slightly different notation. We freely use the notation of §3.1.
The generalized Calabi construction is made of three main building blocks -only two if there is no blow-down -and produces a family of (smooth) singular Kähler structures onM , which descend to genuine Kähler metrics on M , called compatible: for any Kähler manifold (M, J, ω, g) endowed with a rigid and semisimple isometric hamiltonian action of an ℓ-torus T, the Kähler structures (g, ω) is compatible.
The first building block of the construction is the choice of a compatible T-invariant Kähler metric g V on the symplectic toric manifold (V, ω V , T). This part is well-known (see e.g. [1,2,22,34]): let z ∈ C ∞ (V, t * ) be the momentum map of the T action with image ∆ and let V 0 = z −1 (∆ 0 ) be the union of the generic T orbits. On V 0 , orthogonal to the T orbits is a rank ℓ distribution spanned by commuting holomorphic vector fields JX ξ for ξ ∈ t. Hence there is a function t : V 0 → t/2πΛ, defined up to an additive constant, such that dt(JX ξ ) = 0 and dt(X ξ ) = ξ for ξ ∈ t. The components of t are 'angular variables', complementary to the components of the momentum map z : V 0 → t * , and the symplectic form in these coordinates is simply where the angle brackets denote contraction of t and t * . These coordinates identify each tangent space with t ⊕ t * , so any T-invariant ω V -compatible Kähler metric must be of the form where G is a positive definite S 2 t-valued function on ∆ 0 , H is its inverse in S 2 t * -observe that G and H define mutually inverse linear maps t * → t and t → t * at each point-and ·, ·, · denotes the pointwise contraction t * × S 2 t × t * → R or the dual contraction. The corresponding almost complex structure is defined by from which it follows that J is integrable if and only if G is the hessian of a function U (called symplectic potential) on ∆ 0 [34].
Necessary and sufficient conditions for U to come from a globally defined T-invariant ω V -compatible Kähler metric on V were obtained in [2,4,22]. We state here the firstorder boundary conditions obtained in [4, Prop. 1]: for any face F ⊂ ∆, denote by t F ⊂ t the vector subspace spanned by the inward normals u i ∈ t to all codimension one faces of ∆, containing F ; as ∆ is Delzant, the codimension of t F equals the dimension of F . Furthermore, the annihilator t 0 F of t F in t * is naturally identified with (t/t F ) * . Then a smooth strictly convex function U on ∆ 0 corresponds to a T-invariant, ω V -compatible Kähler metric g V via (2) if and only if the S 2 t * -valued function H = Hess(U ) −1 on ∆ 0 verifies the following boundary conditions: where the differential dH is viewed as a smooth S 2 t * ⊗ t-valued function on ∆; • [positivity] for any point z in the interior of a face F ⊆ ∆, H z (·, ·) is positive definite when viewed as a smooth function with values in S 2 (t/t F ) * . These conditions can be formulated in the following alternative way, cf. [2,22]: (i) U is smooth and strictly convex 7 on the interior, We denote by S(∆) the space of all symplectic potentials on ∆ defined either way.
The second building block of the generalized Calabi construction consists in using g V to construct a Kähler metric g W on the variety W , with respect to which the restricted T-action is rigid and semisimple. This part of the construction only appears in the situation "with blow-down" and relies in a crucial way on [4,Prop. 2]. Recall that W was obtained by a restricted symplectic quotient process, which ultimately amounts to a blow-down ofŴ = P 0 × T V , where P 0 is a T-principal bundle over b∈B CP d b , cf. §3.1. The construction of g W then requires the choice of a connection 1-form θ 0 on P 0 , with curvature dθ 0 = b∈B ω b ⊗ u b where, we recall, ω b is the (normalized) Fubini-Study metric on CP d b of scalar curvature 2d b (d b + 1), and u b ∈ t is the inward normal to the codimension one face F b ⊂ ∆ (satisfying u b , z + c b = 0). We still denote by θ 0 ∈ Ω 1 (W 0 , t) the induced 1-form on the open dense subset W 0 := P 0 × T V 0 ofŴ and we consider the Kähler structure on W 0 defined by: with G = Hess(U ) = H −1 . Clearly, the Kähler structure (g W , ω W ) is well-defined on W 0 = P 0 × T V 0 . As shown in [4], the pair (g W , ω W ) smoothly extends toŴ -not as a Kähler structure however -and descends to a smooth, T-invariant, Kähler structure on W .
The third and last building block of the generalized Calabi construction similarly consists in constructing a suitable Kähler structure on M 0 =P × T V 0 , via the choice of a connection 1-formθ onP , with curvature (covered by) a∈A ω a ⊗ p a + b∈B ω b ⊗ u b . Then the restriction of (P ,θ) to each fibre ofŜ → S is isomorphic to (P 0 , θ 0 ) over b∈B CP d b . Still denoting byθ ∈ Ω 1 (M 0 , t) the induced 1-form on M 0 =P × T V 0 , we consider the Kähler structure (g, ω) on M 0 defined by: where: • G = Hess(U ) = H −1 , where U is the symplectic potential of the chosen toric Kähler structure g V on V ; • for each b ∈ B, p b = u b and the real number c b is such that p b , z + c b = 0 on the codimension one face F b ; • for each a ∈ A, p a , z + c a is positive on ∆.
Clearly, (6) defines a smooth tensor onM and it is shown in [4,Thm. 2] that it is the pullback of a smooth metric on the blow-down M . Indeed, this is obvious in the case when the fibre bundleŜ → S is trivial (for example takingM be simply connected, as in [4]). Then, there exists a principal T-bundle over S with connection form θ and curvature dθ = a∈A ω a ⊗ p a and the restriction ofθ to S 0 = b∈B CP d b gives rise to a principal T-bundle P 0 over S 0 with connection 1-form θ 0 and curvature dθ 0 It follows that the metric (6) restricts on each W 0 fibre to the metric (g W , ω W ) defined by (5); as (g W , ω W ) compactifies smoothly on W , and p a , z + c a are strictly positive on M , (6) defines a Kähler structure on M . To handle the general case, one can consider the universal covers ofM and M and use the previous argument, noting that the smooth extension of the metric is a local property; a direct argument in the case of the projective bundles described in Sect. 3.2 can be given along the lines of [5, § 1.3]. This completes the generalized Calabi construction according to [4].
Assuming that the metrics (g j , ω j ), the connection 1-formθ, the polytope ∆ and the constants (p j , c j ) are all fixed, (6) defines a family of Kähler metrics parametrized by symplectic potentials U ∈ S(∆) (or, equivalently, by toric Kähler metrics on (V, ω V , T)). We note that for this family, the symplectic 2-form ω remains unchanged, so we obtain a family of T-invariant ω-compatible Kähler metrics corresponding to different complex structures. However, any two such complex structures are biholomorphic, under a Tequivariant diffeomorphism in the identity component: this is well-known in the case of a symplectic toric manifolds (i.e., on (V, ω V , T)) see [2,21], and the same argument holds (fibrewise) on W and M , see [5, § 1.4]. The pullbacks of the symplectic form ω under such diffeomorphisms introduce a Kähler class Ω on a fixed complex manifold (M, J) (we can take J to be the complex structure on M introduced in Definition 3: it corresponds to the standard symplectic potential U 0 , see [2,34]). We shall further assume that the metrics (g j , ω j ) are fixed and have constant scalar curvature Scal j (with Scal b = 2d b (d b + 1) for b ∈ B), 8 and that ∆ and p j are fixed. Recall that for b ∈ B, the constants c b are also fixed by requiring u b , z + c b = 0 on the codimension one face F b ⊂ ∆. The real constants c a , a ∈ A can vary (on a given manifold (M, J)) and they parametrize the compatible Kähler classes.
3.4. The isometry Lie algebra. For a compact Kähler manifold (M, g), we denote by i 0 (M, g) the Lie algebra of all Killing vector fields with zeros; this is equivalently the Lie algebra of all hamiltonian Killing vector fields.
The following result has been established in the case ℓ = 1 in [5, Prop. 3] and its proof generalizes to the general case. For the convenience of the Reader, we reproduce the argument from [5]. Then the vector space z(T, g) is the direct sum of a lift of i 0 (Ŝ, gŜ ) and the Lie algebra t ⊂ i 0 (M, g) of T in such a way that the natural homomorphismp * : z(T, g) → i 0 (S, g S ) is a surjection.
Proof. Denote by K = grad ω z ∈ C ∞ (M, T M ) ⊗ t * the family of hamiltonian Killing vector fields generated by T: thus, the span of K realizes the Lie algebra t of T as a subalgebra of i 0 (M, g).
Let X be a holomorphic vector field onŜ which is hamiltonian with respect to ωŜ; then the projection X j of X onto the distribution H j (induced by T S j on the universal cover N j=1 S j of S) is a Killing vector field with zeros, so ι X j ωŜ = −df j for some function f j (with integral zero). Thus N j=1 f j p j is a family of hamiltonians for X with respect to the family of symplectic forms covered by N j=1 ω j ⊗ p j : since this is the curvature dθ of the connection on M 0 , X lifts to a holomorphic vector fieldX = X H + N j=1 f j p j , K on M 0 , which is hamiltonian with potential N j=1 ( p j , z + c j )f j and commutes with the components of K. (Here X H is the horizontal lift to M 0 with respect toθ.) As the metric g extends to M andX is Killing with respect to g, it extends to M too (note that M \ M 0 has codimension ≥ 2). It is not difficult to see thatX has zeros on M (in fact, if s 0 ∈Ŝ is a zero of X thenX − N j=1 f j (s 0 ) p j , K vanishes on M 0 ) so thatX is an element of i 0 (M, g). Of course, this shows that the Killing potential N j=1 ( p j , z + c j )f j extends as a smooth function on M .
Conversely, anyX ∈ z(T, g) is a T c -invariant holomorphic vector field, so its restriction to M 0 is projectable to a holomorphic vector field X ∈ h 0 (Ŝ). This allows to reverse the above arguments: forX = X H + f p, K + hJ q, K (where p, q ∈ t and f, g ∈ C ∞ (Ŝ)) be Killing with respect to the metric (6), we must have q = 0 and Xbe Killing with respect to gŜ. Such a vector field maps to zero iff it comes from a constant multiple of K. This gives a projection to i 0 (Ŝ, gŜ ) splitting the inclusion just defined. This is the main ingredient in the proof of the following result. 8 Presumably, the Kähler metrics (gj, ωj) must be CSC in order to obtain an extremal Kähler metric (g, ω) as above. We do not prove this here, but this fact has been established for ℓ = 1 in [6, Prop. 14]. Proposition 1. Let (J, g, ω) be a compatible Kähler metric on M where the stable quo-tientŜ is endowed with a local product Kähler structure (gŜ, ωŜ), covered by N j=1 (S j , ω j ) with (S j , ω j ) having constant scalar curvature.
Then g is invariant under a maximal torus G of the reduced automorphism group Aut 0 (M, J).
Proof. Let G be a maximal torus in the group of hamiltonian isometries Isom 0 (M, g), containing the ℓ-torus T. By Lemma 5, G is the product of a maximal torus in the group of hamiltonian isometries Isom 0 (Ŝ, gŜ ) and the ℓ-torus T. Denote by g ⊂ i 0 (M, g) the corresponding Lie algebra. We are going to show that g C = g + Jg is a maximal abelian subalgebra of h 0 (M, J).
As in the proof of Lemma 5, we consider natural homomorphismp * : z(T, J) → h 0 (Ŝ) from the centralizer z(T, J) of T in h 0 (M, J) to h 0 (Ŝ). The proof of Lemma 5 shows that the restriction ofp * to z(T, g) is surjective onto i 0 (Ŝ, gŜ ).
By assumption, the induced Kähler metric (gŜ, ωŜ) onŜ is of constant scalar curvature, so by the Lichnerowicz-Matsushima theorem [49,55], h 0 (Ŝ) is the complexification of i 0 (Ŝ, gŜ). It follows thatp * : z(T, J) → h 0 (S) is also surjective. As g ⊂ z(T, g) is a maximal abelian subalgebra, its projection to i 0 (S, g S ) must also be a maximal abelian subalgebra, so is then the imagep * (g C ) ⊂ h 0 (Ŝ) (by using the Lichnerowicz-Matsushima theorem again). It follows that g C ⊂ h 0 (M, J) is maximal abelian iff g C ∩ hŜ(M ) is a maximal abelian subalgebra of the complex algebra of fibre-preserving holomorphic vector fields hŜ(M ). But the fibre V is a toric variety under T, so g C ∩ hŜ(M ) = t C = t + Jt, which is clearly a maximal abelian subalgebra of h(V, J V ) and hence also of hŜ(M ).
3.5. The extremal vector field. For convenience, we will introduce at places a basis of t (resp. of t * ), for example by taking ℓ generators of the lattice Λ (where T = t/2πΛ). This identifies the vector space t with R ℓ (and t * with (R ℓ ) * ), and fixes a basis of Poisson commuting hamiltonian Killing fields K 1 , . . . , K ℓ in K. Thus, a S 2 t * -valued function H on ∆ can be seen as an ℓ × ℓ-matrix of functions (H rs ) = H on ∆. Similarly, we write z = (z 1 , . . . , z ℓ ) for the momentum coordinates with respect to K 1 , . . . , K ℓ .
An important technical feature of the Kähler metrics given by the generalized Calabi construction (6) is the simple expression of their scalar curvature in terms of the geometry of (V, g V ) and (Ŝ, gŜ) (see e.g. [3, p. 380]): This formula generalizes the expression obtained by Abreu [1] in the toric case (whenŜ is a point).
Another immediate observation is that the volume form Vol ω = ω m is given by It follows that integrals over M of functions of z (pullbacks from ∆) are given by integrals on ∆ with respect to the volume form p(z) dv, where dv is the (constant) euclidean volume form on t * , obtained by wedging any generators of the lattice Λ.
We now recall the definition in [31] of the extremal vector field of a compact Kähler manifold (M, J, g, ω). Let G be a maximal connected compact subgroup of the reduced group of automorphisms Aut 0 (M, J). 9 Following [31], the extremal vector field of a Ginvariant Kähler metric (g, J, ω) on M is the Killing vector field whose Killing potential is the L 2 -projection of the scalar curvature Scal g of g to the space g ω of all Killing potentials (with respect to g) of elements of the Lie algebra g. Futaki and Mabuchi [31] showed that this definition is independent of the choice of a G-invariant Kähler metric within the given Kähler class Ω = [ω] on (M, J). Since the extremal vector field is necessarily in the centre of g, it can be equally defined if we take G be only a maximal torus in Aut 0 (M, J). This remark is relevant to the Kähler metrics (6) as we have already shown in Proposition 1 that they are automatically invariant under such a torus G. In this case, by Lemma 5, g ω is the direct sum of t ω (which in turn is identified to the space of affine functions of z) and a subspace of Killing potentials of zero integral of lifts of Killing vector fields on (Ŝ, gŜ). We have shown in the proof of Lemma 5 that the later potentials are all of the form j ( p j , z + c j )f j where f j is a function onŜ of zero integral with respect to ω d S . As the scalar curvature of a compatible metric is a function of z only (see (7), we assume Scal j are constant) it follows from (8) that the L 2 -projection of Scal g to g ω lies in t ω . This shows that the extremal vector field lies in t and that the projection of Scal g orthogonal to the Killing potentials of g takes the form: Here dσ is the (ℓ − 1)-form on ∂∆ with u i ∧ dσ = −dv on the face F i with normal u i . These formulae are immediate once one applies the divergence theorem and the boundary conditions (4) for H, noting that the normals are inward normals, which introduces a sign compared to the usual formulation of the divergence theorem. The extremal vector field of (M, g, J, ω) is − A, K , where K ∈ C ∞ (M, T M ) ⊗ t * is the generator of the T action.
3.6. The extremal equation and stability of its solutions under small perturbation. It follows from the considerations in Sect. 3.5 that on a given manifold M of the type we consider, finding a compatible extremal Kähler metric (g, ω) of the form (6) reduces to solving the equation (for a unknown symplectic potential U ∈ S(∆)) where • (H rs ) = H = (Hess(U )) −1 ; 9 By a well-known result of Calabi [14], any extremal Kähler metric must be invariant under such a G.
• (c j , p j , Scal j ) are fixed constants; • p(z) = N j=1 (c j + p j , z ) d j is strictly positive on ∆ 0 but vanishes on the blow-down faces F b , b ∈ B; • A and B are expressed in terms of (c j , p j , Scal j ) by (9).
Recall from Sect. 3.3 that the real constants c a , a ∈ A parametrize compatible Kähler classes on a given manifold M . A general result of LeBrun-Simanca [46] affirms that Kähler classes admitting extremal Kähler metric form an open subset of the Kähler cone. We want to obtain a relative version of this result, by showing that compatible Kähler classes which admit a compatible extremal Kähler metric is an open condition on the parameters c a .
We will state and prove our stability result in a slightly more general setting, by considering (10) as a family of differential operators on S(∆), parametrized by λ ∈ {(c a , p a , Scal a ), a ∈ A} (thus λ takes values in a (2 + ℓ)|A|-dimensional euclidean vector space). For any λ such that p a , z + c a > 0 on ∆, we consider and A λ , B λ are introduced by (9). The central result of this section is the following one.
The proof of this proposition has several steps and will occupy the rest of this section. It is not immediately clear from (11) that P λ is a well-defined differential operator: in the presence of blow-downs, the terms Scal b c b + p b ,z and 1 p 0 (z) become degenerate on the boundary of ∆. 10 Of course, for λ = λ 0 we know from (10) that P λ 0 (U ) = Scal ⊥ g where g is the compatible metric on M corresponding to U , and Scal ⊥ g is the L 2 -projection of the scalar curvature to the space of functions orthogonal to the Killing potentials of g. However, for generic values of λ the data (c a , p a , Scal a ) are not longer associated with a compatible Kähler class on a smooth manifold: for this to be true p a and Scal a must satisfy integrality conditions. To overcome this technical difficulty, we are going to rewrite our equation on the smooth compact manifold W . (Note that for b ∈ B, Recall from Sect. 3.3 that any symplectic potential U ∈ S(∆) introduces a compatible Kähler metric (g W , ω W ) on the manifold W obtained by blowing downŴ = P 0 × T V . Thus, (W, g W , ω W ) itself is obtained by the generalized Calabi construction with S being a point.
By a well-known result of G. W. Schwarz [63], the space C ∞ (V ) T of T-invariant smooth functions on the toric symplectic manifold (V, ω V , T) is identified with the space of pullbacks (via the momentum map z) of smooth functions C ∞ (∆) on ∆; similarly, the space of smooth T-invariant functions on W (resp. on M ) which are constant on the inverse images of the momentum map z is identified with the space C ∞ (∆). We will use implicitly these identification throughout. Occasionally, when we want to emphasize the dependence of this identification on z, we will denote these isomorphisms by S z . With this convention, we have Lemma 6. Let U ∈ S(∆) be a symplectic potential of a compatible Kähler metric g V on (V, ω V , T) and (g W , ω W ) be the corresponding compatible Kähler metric on W . Then, for any λ such that p a , z + c a > 0 on ∆, where Scal W and ∆ W respectively denote the scalar curvature and the riemannian laplacian of g W , and dz r = −ω W (K r , ·).
Proof. We work on the open dense subset W 0 = P 0 × T V 0 where the compatible metric (g W , ω W ) takes the explicit form (5). The formula (7) for the scalar curvature of the compatible metric g W then specifies to Still using the explicit form (5) of the Kähler structure, we calculate that for the pullback to W of a smooth function f (z) on ∆ where the decompositions θ 0 = ((θ 0 ) 1 , . . . , (θ 0 ) ℓ ) and p b = (p b1 , . . . , p bℓ ) are with respect to the chosen basis of t and t * . Wedging with ω W , we obtain the following expression for the laplacian Specifying (13) to f = z r and putting the above formulae back in (11) implies the lemma.
Note that 1 p λ (z) and Scala ca+ pa,z pull back to smooth functions on W for λ such that c a + p a , z > 0 on ∆, and A λ and B λ are well-defined and depend smoothly on λ (at least for λ close to λ 0 ). Thus, Lemma 6 implies that P λ is a fully non-linear 4-th order differential operator which depends smoothly on λ (for λ sufficiently close to λ 0 ). It follows that P λ (U ) ∈ C ∞ (∆) for any U ∈ S(∆).
Our problem is formulated in terms of compatible Kähler metrics on V (or, equivalently, on W and M ) with respect to a fixed symplectic form ω V (resp. ω W and ω). This introduces the space of symplectic potentials S(∆) where we have to work with smooth functions on ∆ 0 which have a prescribed boundary behaviour on ∂∆. Our lack of understanding of the convergence in this space (with respect to suitable Sobolev norms) leads us to make an additional technical step and reformulate our initial problem as an existence result on a suitable subspace of the space Kähler metrics in the Kähler class of (g 0 , J 0 , ω 0 ), where C ∞ 0 (M ) G denotes the space of G-invariant smooth functions on M of zero integral with respect to ω m 0 (thus M Ω (M ) G is viewed as an open set in C ∞ 0 (M ) G with respect to || · || C 2 ). Once this interpretation is achieved, we will apply the implicit function theorem along the lines of the proof of Lemma 4.
First of all, note that the Frechét space C ∞ (∆) pulls back via z to a closed subspace in C ∞ (V ) T , C ∞ (W ) T and C ∞ (M ) G , where T (resp. G) is a maximal torus in Aut 0 (W ) (resp. Aut 0 (M )) containing T, as in Proposition 1: this follows easily from the description of the Lie algebras of T and G given in Lemma 5. Furthermore, by (8), the corresponding normalized subspaces of functions with zero integral for the measures p λ (z)p 0 (z)Vol ω 0 V , p λ (z)Vol ω 0 W and Vol ω 0 , respectively, are identified with the space C ∞ 0 (∆) of smooth functions of zero integral with respect to the volume form dµ 0 = p λ 0 (z)p 0 (z)dv on ∆ 0 : this normalization will be used throughout.
Secondly, to adopt the classical point of view of Kähler metrics within a given Kähler class on a fixed complex manifold, we consider the Fréchet space and p a , z + c a > 0 on ∆), we consider the family of differential operators on M Ω (W ) T is the momentum map of T with respect to the Kähler form ω W = ω 0 W + dd c W f of the Kähler metricg W , and Scal W (resp. ∆ W ) denote the scalar curvature (resp. laplacian) ofg W . Thus, by Lemmas 6 and 7, any Kähler metric ω W ∈ M comp Ω (W ) for which Q λ (ω W ) = 0 gives rise to a symplectic potentialŨ ∈ S(∆) solving P λ (Ũ ) = 0.
The positive factor p λ (z) p λ 0 (z) in front of Q λ is introduced so that for any compatible metricω W ∈ M comp Ω (W ), the function Sz(Q λ (ω W )) is L 2 -orthogonal with respect to the measure dµ 0 = p λ 0 p 0 dv on ∆ to the space of affine functions on t * , where, we recall, Sz denotes the identification of T-invariant smooth functions on W which are constant on the inverse images ofz (equivalently of z) with pullbacks viaz of smooth functions on ∆. Indeed, by Lemma 6, p λ 0 (z)p 0 (z)Q λ (ω W ) = P λ (Ũ )p λ (z)p 0 (z), so integrating by parts the r.h.s. of (11) and using (4) we get which holds for any smooth function f (z). When f is affine, the first term in the r.h.s is clearly zero, while by the definition (9) of A λ and B λ the sum of the two other terms is zero too; our claim then follows by Lemma 7 and the expression (8) for the volume form of the compatible metricω W . Let Π 0 denote the orthogonal L 2 -projection of C ∞ (∆) to the finite dimensional subspace of affine functions of t * with respect to the measure dµ 0 = p λ 0 p 0 dv on ∆, and C ∞ ⊥ (∆) be the kernel of Π 0 . We then consider the map Ψ : Note that if f has sufficiently small C 1 -norm, the equation (Id − Π 0 ) • (S z (Q λ (ω W )) = 0 is satisfied if and only if Q λ (ω W ) = 0: this follows from the fact that Π 0 • Sz • S −1 z defines a continuous family of linear endomorphisms of the finite dimensional space of affine functions on t * , with the identity corresponding toω W = ω 0 W ; thus Π 0 • Sz • S −1 z • Π 0 is invertible for ω W close to ω 0 W , and hence (by using that Π 0 (Sz(Q(ω W )) = 0) we get which is zero iff S z (Q λ (ω W ) = 0 i.e. Q λ (ω W ) = 0.
By the discussion above, we are in position to complete the proof of Proposition 2 by applying the inverse function theorem to the extension of Ψ to suitable Sobolev spaces, together with elliptic regularity (as in [46], see also the proof of Lemma 4) in order to find a familyω λ W = ω 0 W + dd c W f λ of smooth compatible metrics satisfying Ψ(λ, f λ ) = (λ, 0) for |λ − λ 0 | < ε.
Let us first introduce the functional spaces we will work on. Recall that C ∞ (∆) is seen as a (closed) Fréchet subspace of the space of T -invariant smooth functions on W (resp. G-invariant smooth functions on M ) which are constant on the inverse images of the momentum map z for the sub-torus T. It follows from the description of the Lie algebra of T (resp. G) given in Lemma 5 that C ∞ ⊥ (∆) is precisely the intersection of C ∞ (∆) with the space C ∞ ⊥ (W ) T of T -invariant smooth functions on W which are L 2 -orthogonal with respect to p λ 0 Vol ω 0 W to Killing potentials of g 0 W (resp. the space C ∞ ⊥ (M ) G of G-invariant smooth functions on M which are L 2 -orthogonal with respect to Vol ω 0 to Killing potentials of g 0 ). We let L 2,k ⊥ (W, ∆) (resp. L 2,k ⊥ (M, ∆)) be the closure of C ∞ ⊥ (∆) with respect to the Sobolev norm || · || k 2 on W for the measure p λ 0 (z)Vol ω 0 W and riemannian metric g 0 W (resp. the Sobolev norm || · || k 2 on M with respect to Vol ω 0 and g 0 ). For k ≫ 1, the Sobolev embedding L 2,k+4 ⊥ (W, ∆) ⊂ C 3 ⊥ (∆) allows us to extend the differential operator Ψ to a C 1 -map from a neighbourhood of (λ 0 , 0) ∈ R (2+ℓ)|A| × L 2,k+4 ⊥ (W, ∆) into L 2,k ⊥ (W, ∆), such that Ψ(λ 0 , 0) = 0; furthermore, as the principal part of Q λ is concentrated in the term Scal W , one can see that Ψ is a fourthorder quasi-elliptic operator [46]. Now, in order to apply the inverse function theorem, it is enough to establish the following Then T 0 is an isomorphism of Fréchet spaces.
Proof. Let (g 0 , J 0 , ω 0 ) be the compatible extremal Kähler metric on M corresponding to the initial value λ = λ 0 . For any function f ∈ C ∞ ⊥ (∆) we consider the compatible Kähler metricg on M , with Kähler formω = ω 0 + dd c M f and the compatible Kähler metric g W on W with Kähler formω W = ω 0 W + dd c W f . We saw already in Sect. 3.5 that for λ = λ 0 , Q λ 0 (ω W ) = P λ 0 (Ũ ) = Scal ⊥ g , whereŨ and Scal ⊥ g are the symplectic potential and normalized scalar curvature ofg. It then follows from [32,45] that the linearization T 0 of Q λ 0 (at ω 0 W ) is equal to −2 times the Lichnerowicz operator L of (g 0 , ω 0 ) acting on the space of pullbacks (via z) of functions in C ∞ ⊥ (∆). We have already observed in the proof of Lemma 4 that L is an isomorphism when restricted to the space C ∞ ⊥ (M ) G of G-invariant smooth functions L 2 -orthogonal to Killing potentials of g 0 . The main point here is to refine this by showing that L is an isomorphism when restricted to subspace C ∞ ⊥ (∆), the only missing piece being the surjectivity. Suppose for a contradiction that L : C ∞ ⊥ (∆) → C ∞ ⊥ (∆) is not surjective. Considering the extension of L to an operator between the Sobolev spaces L 2,4 ⊥ (M, ∆) → L 2 ⊥ (M, ∆) (by elliptic theory L is a closed operator), our assumption is then equivalent to the existence of a non-zero function u ∈ L 2 ⊥ (M, ∆) such that, for any φ ∈ C ∞ ⊥ (∆), L(φ) is L 2 orthogonal to u. As any sequence of functions converging in L 2 (M ) has a point-wise converging subsequence, u = u(z) is (the pullback to M of) a L 2 -function on ∆, and using (8) It is enough to establish (16) by integrating on M 0 = z −1 (∆ 0 ) (which is the complement of the union of submanifolds of real codimension at least 2).
The Lichnerowicz operator L has the following general equivalent expression [32,45] (17) where ρ g 0 is the Ricci form of (g 0 , J 0 ) and ∆ g 0 is its laplacian. We will use the specific form (6) of g 0 to express the r.h.s of the above equality in terms of the geometry of (V, g 0 V ) and (Ŝ, gŜ ). Let f be any G-invariant (and hence T-invariant) smooth function on M . It can be written on M 0 as a smooth function depending on z andŜ and, for any s ∈Ŝ, we will denote by f s (z) = f (z, s) the corresponding smooth function of z (Note that, as the pullback of f toM is smooth, f s (z) is a smooth function on ∆, not only on ∆ 0 .) Similarly, for any z ∈ ∆ 0 , f z (s) = f (z, s) stands for the corresponding smooth function onŜ.
Using [4,Prop. 7] and the specific form (6) of g 0 , it is straightforward to check that on M 0 we have . . . ,θ ℓ ) and p j = (p j1 , . . . , p jℓ ) with respect to the chosen basis of t; • dŜ and d cŜ are the differential and the d c -operator acting on functions and forms on S; • (g j , ω j ) are the product CSC Kähler factors of the Kähler metric (gŜ, ωŜ), with respective Ricci forms ρ j and laplacians ∆ g j ; • gŜ ,z = N j=1 ( p j , z) + c j )g j is the quotient Kähler metric onŜ at z, and ωŜ ,z , ScalŜ ,z and ∆Ŝ ,z denote its Kähler form, scalar curvature and laplacian, respectively; Substituting back in (17), we obtain where LŜ ,z is the Lichnerowicz operator of gŜ ,z , and R j (z) are coefficients (that can be found explicitly from the above formulae) depending only on z, and such that p(z)R j (z) are smooth on ∆.
If we integrate the above expression for L(f ) against u(z) (by using (8)) we get that To see that all the terms vanish, note that the first term is zero by (15); the third and fourth terms are zero because LŜ ,z and ∆Ŝ ,z are self-adjoint (with respect to ωŜ ,z ) and therefore their images are L 2 -orthogonal to constants onŜ. The fifth term is also zero because ∆ g j (f ) is L 2 -orthogonal to constants onŜ with respect to ωŜ: this follows easily from the local product structure of gŜ. For the second term one uses that ∆ g 0 defines a self-adjoint operator on C ∞ (∆) with respect to the measure p(z)dv: thus, for any smooth function φ(z) on ∆, because ∆Ŝ ,z f z is L 2 -orthogonal to constants onŜ; as u is in the closure in L 2 of pullbacks of smooth functions on ∆, the second term vanishes too. This concludes the proof of the lemma.
An immediate consequence of Proposition 2 is the following Proof. As we have already observed, the admissible Kähler classes are parametrized by the real constants c a for a ∈ A. We thus apply Proposition 2 by taking λ = (c a , p 0 a , Scal 0 a ). 3.7. Proof of Theorem 3. To deduce Theorem 3 from Proposition 2, we observe that the differential operators (11) satisfy P tλ = P λ for any real number t = 0.
On any Kähler manifold (M, g, ω) obtained by the generalized Calabi construction with data λ = (c a , p a , Scal a ), we can consider the sequence of differential operators P λ k where λ k = (c a + k, p a , Scal a ). The differential operator P λ k is the same as P λ k k and λ k k converges when k → ∞ to the data corresponding to the extremal Kähler metric equation for a compatible Kähler metrics on W . We then readily infer Theorem 3 from Proposition 2. Remark 6. As any invariant Kähler metric on a toric manifold is compatible, Theorem 3 implies the existence of (compatible) extremal metrics on a rigid semisimple toric bundles M over a CSC locally product Kähler manifold, in the case when there are no blowdowns and W = V is a toric extremal Kähler manifold.
Remark 7. An interesting class of rigid toric bundles comes from the theory of multiplicityfree manifolds recently discussed in [25]. A typical example is obtained by taking a compact connected semisimple Lee group G and a maximal torus T ⊂ G with Lie algebra t; if we pick a positive Weyl chamber t + ⊂ t (and identify t with its dual space t * via the Killing form), for any Delzant polytope ∆ contained in the interior of t + , one can consider the manifold M = p : G × T V → S = G/T, where V is the toric manifold with Delzant polytope ∆. Note that G has a structure of principal T-bundle over the flag manifold S = G/T with a connection 1-form θ ∈ Ω 1 (G, t) whose curvature ω(z) = dθ, z defines a family of symplectic forms on S (the Kirillov-Kostant-Souriau forms); identifying S ∼ = G c /B, where B is a Borel subgroup of the complexification G c of G, each ω(z) defines a homogeneous Kähler metric g(z) on the complex manifold S (which is therefore of constant scalar curvature); the Ricci form ω S of ω(z) is independent of z, giving rise to the normal (Kähler-Einstein) metric g S on S. Now, for any toric Kähler metric on V , corresponding to a symplectic potential U ∈ S(∆), one considers the Kähler metric on M where G = Hess(U ), H = G −1 , z ∈ ∆ and k > 0. In this case, G → S = G/T is not necessarily a diagonalizable principal T-bundle over S = G/T (in other words, M = G × T V → S = G/T is a rigid but not in general semisimple toric bundle). However, most parts of the discussion in Sect. 3 do extend to this case too (see also [3]), with some obvious modifications. The key points are that (a) the volume form of g(z) + kg S is a multiple p(z) (depending only on z) of Vol g S : this allows to extend the curvature computations (see [3,Prop. 7]) and formula (8) to this case, (b) for any z ∈ ∆, g(z) + kg S is a CSC Kähler metric on S: this allows to extend the results in Sect. 3.4, and (c) there is a similar formula to (7) for the scalar curvature of g, found by Raza [59], which allows to reduce the extremal equation for the Kähler metrics in the above form to (10) with p a being essentially the positive roots of G, c a = k and Scal a positive constants. Proposition 2 and its corollaries (Corollary 1 and Theorem 3) extend to this setting too. We thus get both openness and existence of extremal Kähler metrics of the above form when V is an extremal toric Kähler variety and k ≫ 0.
Proof of Theorem 4
As another application of Theorem 3, we derive Theorem 4 from the introduction. This is the case when V = CP ℓ and W = CP r , r ≥ ℓ ≥ 1 and M = P (E 0 ⊕· · ·⊕E ℓ ) → S (see Sect. 3.2). It follows from the general theory of hamiltonian 2-forms [3,4] that any Fubini-Study metric on CP r admits a rigid semisimple isometric action of an ℓdimensional torus T, for any 1 ≤ ℓ ≤ r (see in particular [3,Prop. 17] and [4,Thm. 5]): thus, W = CP r admits a compatible extremal Kähler metric.
Let ω be a compatible Kähler on M ; as the fibre is CP r , by re-scaling, we can assume without loss that [ω] = 2πc 1 (O(1) E ) + p * α, where α is a cohomology class on S. The form (6) of ω and the assumption on the first Chern classes c 1 (E i ) imply that α is diagonal with respect to the product structure of S, in the sense that it pulls back to the covering product space as α = a∈A q a [ω a ] for some real constants q a . Therefore, Ω k = 2πc 1 If we choose q with q > q a , thenω = ω + a∈A (q − q a )p * ω a is clearly a compatible Kähler metric too.
Proof of Theorem 2
Suppose that (g, ω) is an extremal Kähler metric in Ω k = 2πc 1 where E i are indecomposable holomorphic vector bundles over a compact curve Σ of genus g ≥ 2. We can assume without loss that ω Σ is the Kähler form of a constant curvature metric on Σ and, by virtue of Theorem 1, that the scalar curvature of g is not constant. In particular, ℓ ≥ 1.
We have seen in Lemma 1 that the ℓ-dimensional torus T acting by scalar multiplication on each E i is maximal in the reduced automorphism group Aut 0 (M, J) ∼ = H 0 (Σ, P GL(E)). By a well-known result of Calabi [14] the identity component of the group of Kähler isometries of an extremal Kähler metric is a maximal compact subgroup of Aut 0 (M, J), so we can assume without loss that (g, ω) is T-invariant.
By considering small stable deformations E i (t) and applying Lemma 4, we can find a smooth family of extremal T-invariant Kähler metrics (J t , g t , ω t ), converging to (J, ω) . By the equivariant Moser lemma, we can assume without loss that ω t = ω.
It is not difficult to see that any Kähler class on (M, J t ) (for t = 0) is compatible: this follows from the fact that the cohomology H 2 (M ) ∼ = H 1,1 (M, J t ) is generated by any compatible Kähler class on (M, J t ) and the pullback p * [ω Σ ]. By Theorem 4 and the uniqueness of the extremal Kähler metrics up to automorphisms [15], for any t = 0 we can take k ≫ 0 such that the extremal Kähler metric (g t , ω) on (M, J t ) is compatible with respect to the rigid semisimple action of the maximal torus T. Strictly speaking, Theorem 4 produces a lower bound k 0 for such k, depending on J t . However, in our case |A| = 1, the simplex ∆, the moment map z and the metric on Σ are fixed, and the parameter λ = (c, p, Scal Σ ) defining the corresponding extremal equation (10) for a compatible metric on (M, J t , [ω]) is independent of t: indeed, the constants p ∈ t and c ∈ R are determined by the first Chern classes c 1 (E i ) and the cohomology class Ω k = [ω] ∈ H 2 dR (M ). Thus, the deformation argument used in Sect. 3.7 produces a lower bound k 0 independent of t, such that for any k > k 0 and t = 0, (g t , ω) is an extremal Kähler metric in Ω k with respect to which the maximal torus T acts in a rigid and semisimple way.
Take a regular value z 0 of the momentum map z associated to the hamiltonian action of T on (M, ω) and consider the family of Kähler quotient metrics (ĝ t ,Ĵ t ) on the symplectic quotientŜ. By identifying the symplectic quotient with the stable quotient, we see that (Ŝ,Ĵ t ) ∼ = P (E 0 (t)) × Σ · · · × Σ P (E ℓ (t)) → Σ (see Sect. 3.2). As for t = 0 the action of T is rigid and semisimple and g t is compatible, the quotient Kähler metric (ĝ t ,Ĵ t ) must be locally a product of CSC Kähler metrics. By the de Rham decomposition theoremĝ t must be a locally-symmetric metric modelled on the hermitian-symmetric space and H is the hyperbolic plane. By continuity, (ĝ 0 ,Ĵ 0 ) is a locally-symmetric Kähler metric on S of the same type. By the de Rham decomposition theorem and considering the form of the covering transformations we obtain representations ρ i : π 1 (Σ) → P U (d i + 1), and therefore E i must be stable by the standard theory [56].
In the case when ℓ = 1, we can assume without loss by Theorem 1 that E is not polystable, and we can then use instead of Theorem 4 the stronger results [5,Thm. 1 & 6] which affirm that any extremal Kähler metric on (M, J t ) (for t = 0) must be compatible with respect to the natural S 1 -action. 6. Further observations 6.1. Relative K-energy and the main conjecture. Leaving aside the specific motivation of this paper to study projective bundles over a curve, the theory of rigid semisimple toric bundles which we reviewed in Sect. 3 extends the theory of extremal Kähler metrics on toric manifolds [21,22,24,66,74,75] to this more general context.
To recast the leading conjectures [21,66] in the toric case to this setting, recall from [21] that if we parametrize compatible Kähler metrics g by their symplectic potentials U ∈ S(∆), then the relative (Mabuchi-Guan-Simanca) K-energy E Ω on this space satisfies the functional equation where we have used (10) and integration by parts by taking into account (4). Following [21,66,74], let us introduce the linear functional (18) F The above calculation of dE Ω g shows that F Ω (f ) = 0 if f is an affine function of z. Furthermore, using the fact that the derivative of log det H is tr H −1 dH, we obtain the following generalization of Donaldson's formula for E Ω : (19) E Ω (U ) = 2F Ω (U ) − ∆ log det Hess U (z) p(z)dv.
(In case of doubt about the convergence of the integrals, one can introduce a reference potential U c and a relative version E Ω gc of E Ω , but in fact, as Donaldson shows, the convexity of U ensures that the positive part of log det Hess U (z) is integrable, hence − log det Hess U (z) has a well defined integral in (−∞, ∞].) According to [21,66], the existence of a solution U ∈ S(∆) to (10) should be entirely governed by properties of the linear functional (18): Let Ω be a compatible class on M . Then the following conditions should be equivalent: (1) Ω admits an extremal Kähler metric.
Our formula (19) can be used to show as in [21,Prop. 7.1.3] that F Ω (f ) ≥ 0 if the relative K-energy is bounded from below. However, according to Chen-Tian [15], the boundedness from below of E Ω is a necessary condition for the existence of an extremal Kähler metric.
If Ω admits a compatible extremal Kähler metric with symplectic potential U and inverse hessian H, one can use (10) and integration by parts (taking into account (4)) in order to re-write (18) as (20) F This formula makes sense for smooth functions f (z), but can also be used to calculate F Ω (f ) in distributional sense for any piecewise linear convex function as in [75]: using the fact that H is positive definite, we obtain the analogue of a result in [75], showing that the second statement of Conjecture 2 implies the third. We thus have the following partial result.
Proposition 3.
If Ω admits an extremal Kähler metric then F Ω (f ) ≥ 0 for any convex piecewise linear function. If Ω admits a compatible extremal metric then, furthermore, F Ω (f ) = 0 if and only if f is an affine function on ∆.
Of course, the most difficult part of Conjecture 2 is to prove (3) ⇒ (2). So far the Conjecture 2 has been fully established in the cases when ℓ = 1 [5] and when M is a toric surface (i.e. ℓ = 2 andŜ is a point) with vanishing extremal vector field [24]. 6.2. Computing F Ω . It is natural to consider (following Donaldson [21]) the space of S 2 t * -valued functions H on ∆ satisfying just the boundary conditions (4). If such a function satisfies the (underdetermined, linear) equation (10), then formula (20) holds, and it can be used to compute the action of F Ω (in distributional sense) on piecewise linear functions.
Note that if a solution to (10) exists, then so do many because the double divergence is underdetermined.
If a solution H of (10) happens to be positive definite on each face of ∆, i.e. if it verifies the positivity condition in Sect. 3.3, then formulae (6) introduce an almost Kähler metric on M (see e.g. [4]) and one can show that (7) computes its hermitian scalar curvature (see Appendix A). Thus, positive definite solutions of (10) correspond to compatible extremal almost Kähler metrics. If such extremal almost Kähler metrics exist, it then follows from (20) (see [75] and Proposition 3 above) that the condition (3) of Conjecture 2 is verified. Thus, the existence of a positive definite solution H of (10) (and verifying the boundary conditions (4)) is conjecturally equivalent to the existence of a compatible extremal Kähler metric (corresponding to another positive definite function H Ω with inverse equal to the hessian of a function U Ω ). In fact, following [21], as log det is strictly convex on positive definite matrices, the functional ∆ (log det H)p(z)dv is strictly convex on the space of positive definite solutions of (10), and therefore has at most one minimum H Ω . Such a minimum would automatically have its inverse equal to the hessian of a function U Ω (see [21]). Thus, H Ω would then give the extremal Kähler metric in the compatible Kähler class Ω.
Thus motivated, it is natural to wonder if on the manifolds we consider in this paper a (not necessarily positive definite) solution H of (10) exists, thus generalizing the extremal polynomial introduced in [5] on M = P (E 0 ⊕E 1 ) → S (in fact P(z) = p(z)H(z) would be the precise generalization). 6.3. Example: projective plane bundles over a curve. We illustrate the above discussion by explicit calculations on the manifold M = P (O ⊕ L 1 ⊕ L 2 ) → Σ, where L 1 and L 2 are holomorphic line bundles over a compact complex curve Σ of genus g. We put p i = deg(L i ) and assume without loss that p 2 ≥ p 1 ≥ 0. Note that in the case p 1 = p 2 = 0, the vector bundle E = O ⊕ L 1 ⊕ L 2 is polystable, and therefore the existence of extremal Kähler metrics is given by Theorem 1. The cases p 1 = p 2 > 0 and p 2 > p 1 = 0, on the other hand, are solved in [5]. We thus assume furthermore that p 2 > p 1 > 0.
To recast our example in the set up of Sect. 3, we take a riemannian metric g Σ of constant scalar curvature 4(1−g) on Σ. To ease the notation, we put C = 4(g−1). Let z i be the momentum map of the natural S 1 -action by multiplication on L i . Thus, without loss, for a compatible Kähler metric on M , the momentum coordinate z = (z 1 , z 2 ) takes values in the simplex ∆ = {(z 1 , z 2 ) ∈ R 2 | z 1 ≥ 0, z 2 ≥ 0, 1 − z 1 − z 2 ≥ 0} (which is the Delzant polytope of the fibre CP 2 viewed as a toric variety).
We now investigate the positivity condition for our distinguished solution H Ω 0 of (4) and (10). First of all, when c → ∞, the v i 's tend to 0, so H Ω 0 tends to the matrix associated to a Fubini-Study metric on CP 2 . It follows that H Ω 0 becomes positivedefinite on each face for sufficiently small Kähler classes, and therefore H Ω 0 defines an explicit extremal (non-Kähler) almost Kähler metric in Ω (see Appendix A below). This is of course consistent (via Conjecture 2) with the existence of a (non-explicit) extremal Kähler metric in Ω, given by Theorem 2. Furthermore, if g = 0, 1 (i.e. C < 0), a computer assisted verification shows that, in fact, H Ω 0 is positive definite on each face of ∆ for all Kähler classes. We thus obtain the following result. If g ≥ 2, then the same conclusion holds for the compatible Kähler forms in sufficiently small Kähler classes Ω k = 2πc 1 (O(1) E ) + kp * [ω Σ ], k ≫ 0.
As speculated in the previous section, the explicit solution H Ω 0 of (4) and (10) can be used to compute the action of the functional F Ω on piecewise linear convex functions (by extending formula (20) in a distributional sense, after integrating by parts and using (4)). As a simple illustration of this, let us take a simple crease function f a with crease along the segment S a = {(t, a − t), 0 < t < a} for some a ∈ (0, 1) (thus as a → 0, the crease moves to the lower left corner of the simplex ∆). A normal of the crease is u = (1, 1) and one easily finds that F Ω (f a ) =
Sa
H Ω 0 (u, u)dσ = a 0 ((H 11 + 2H 12 + H 22 )(t, a − t))(c + p 1 t + p 2 (a − t)) dt, where dσ is the contraction of the euclidian volume dv on R 2 by u. Note that the integrand (being a rational function of c with a non-vanishing denominator at c = 0), and hence the integral, is continuous near c = 0; for c = 0 the integral equals 1 6 (1 − a)a 3 (−C + 2(p 1 + p 2 ) + a(C + 4(p 1 + p 2 ))), which is clearly negative for a ∈ (0, 1) sufficiently small as long as C = 4(g − 1) > 2(p 1 + p 2 ). If we take g > 2, such p 1 and p 2 do exist. By Proposition 3, this implies a non-existence result of extremal Kähler metrics when p 1 and p 2 satisfy the above inequality and c is small enough. (As a special case, for p 1 = p 2 we have recast the non-existence part of [5,Thm. 6].) Proposition 5. Let M be as in Proposition 4, with g > 2 and p 1 , p 2 satisfying 2(g−1) > p 1 + p 2 . Then all sufficiently 'big' Kähler classes do not admit any extremal Kähler metric.
Appendix A. Compatible extremal almost Kähler metrics
In this appendix, we calculate the hermitian scalar curvature of a compatible almost Kähler metric and extend the notion of extremal Kähler metrics to the more general almost Kähler case.
Recall that on a general almost Kähler manifold (M 2m , g, J, ω), the canonical hermitian connection ∇ is defined by where D is the Levi-Civita connection of g. Note that The Ricci form, ρ ∇ , of ∇ represents 2πc 1 (M, J) and its trace s ∇ (given by 2mρ ∇ ∧ ω m−1 = s ∇ ω m ) is called hermitian scalar curvature of (g, J, ω). The hermitian scalar curvature plays an important role in a setting described by Donaldson [19] (see also [32]), in which s ∇ is identified with the momentum map of the action of the group Ham(M, ω) of hamiltonian symplectomorphisms of a compact symplectic manifold (M, ω) on the (formal) Kähler Fréchet space of ω-compatible almost Kähler metrics AK ω . It immediately follows from this formal picture [26,47] that the critical points of the functional on AK ω g −→ M (s ∇ ) 2 ω m are precisely the ω-compatible almost Kähler metrics for which grad ω s ∇ is a Killing vector field. This provides a natural extension of the notion of an extremal Kähler metric to the more general almost Kähler context. Definition 5. An almost Kähler metric (g, ω) for which grad ω s ∇ is a Killing vector field is called extremal. Now let M be a manifold obtained by the generalized Calabi construction of Sect. 3.3.
In the notation of this section, for any S 2 t * -valued function H on ∆, satisfying the boundary and positivity conditions, formulae (6) introduce a pair (g, ω) of a smooth metric g and a symplectic form ω on M , such that the field of endomorphisms J defined by ω(·, ·) = g(J·, ·) is an almost complex structure, i.e., (g, ω) is an almost Kähler structure on M . 13 We shall refer to such pairs (g, ω) as compatible almost Kähler metrics on M .
. Thus, Φ = ΦŜ ∧ Φ V is a non-vanishing section of K −1 M 0 and the hermitian Ricci form ρ ∇ is then given by Denote by T H the g-orthogonal complement of T V; the spaces T H and T V then define the decomposition of T M 0 as the sum of horizontal and vertical spaces and, therefore (28) ∇ where ∇ X Y = ∇ H X Y + ∇ V X Y denotes the decomposition into horizontal and vertical parts.
Our first observation is that [3,Prop. 8] generalizes in the non-integrable case in the following sense: The foliation V is totally-geodesic with respect to both the Levi-Civita and hermitian connections. Indeed, with respect to the Levi-Civita connection D we have D Kr K s , X = D JKr K s , X = 0 for any X ∈ T H; using [K r , JK s ] = 0, our claim reduces to check that D JKr JK s , X = 0. We take X be the horizontal lift of a basic vector field and use the Koszul formula 2 D JKr JK s , X = L JKr JK s , X + L JKs JK r , X − L X JK r , JK s + [JK r , JK s ], X + L X JK r , JK s + L X JK s , JK r = L X JK r , JK s + L X JK s , JK r = (L X g)(JK r , JK s ) = (L Xĝ (z))(JK r , JK s ) = 0, where (ĝ =ĝ(z),ω =ω(z)) denote the Kähler quotient structure onŜ (also identified with the horizontal part of (g, ω)). Considering the hermitian connection ∇, by (25) and (26), our claim reduces to showing that N (K r , X) is horizontal for any X ∈ T H; using (26) and the fact that V is totally-geodesic with respect to D, we get N (K r , X), JU = 2 (D U J)(K r ), X = 0, for any U ∈ T V. | 2010-03-17T13:44:46.000Z | 2009-05-04T00:00:00.000 | {
"year": 2009,
"sha1": "b49e92f890e4a14e9dfab282461642e0f5ed3b1c",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.aim.2011.05.006",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "b49e92f890e4a14e9dfab282461642e0f5ed3b1c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
227129476 | pes2o/s2orc | v3-fos-license | Towards a Ring Analogue of the Leftover Hash Lemma
: The leftover hash lemma (LHL) is used in the analysis of various lattice-based cryptosystems, such as the Regev and Dual-Regev encryption schemes as well as their leakage-resilient counterparts. The LHL does not hold in the ring setting, when the ring is far from a field, which is typical for efficient cryptosystems. Lyubashevsky et al. (Eurocrypt ’13) proved a “regularity lemma,” which can be used instead of the LHL, but applies only for Gaussian inputs. This is in contrast to the LHL, which applies when the input is drawn from any high min-entropy distribution. Our work presents an approach for generalizing the “regularity lemma” of Lyubashevsky et al. to certain conditional distributions. We assume the input was sampled from a discrete Gaussian distribution and consider the induced distribution, given side-channel leakage on the input. We present three instantiations of our approach, proving that the regularity lemma holds for three natural conditional distributions.
Introduction
The leftover hash lemma (LHL) is used in the analysis of various lattice-based cryptosystems. Specifically, it is often useful to argue that for high-min entropy input x ∈ Z m q and random matrix A ← Z n×m q , Ax is uniform random, given A. The above fact is used in the proof of security for both the Regev and Dual-Regev encryption schemes. More sophisticated proof approaches that utilize the LHL along with the structure of the matrix A have been used to argue leakage resilience of these cryptosystems, such as in [1,13].¹ Analogues of the statement above do not necessarily hold in the ring setting. Specifically, assuming a high min-entropy input x = x 1 , . . . , x l , setting a 1 = 1, and a 2 , . . . , a l chosen uniformly at random from the ring, the uniformity of a l+1 = ∑︀ i∈ [l] a i x i does not follow from the LHL lemma, in cases where the ring is far from a field, which is the typical case for efficient cryptosystems.
Fortunately, Lyubashevsky et al. [25,26] proved a "regularity lemma" showing that the distribution over a l+1 as above is (close to) uniform random, even given a 2 , . . . , a l , but only for the case where the input x is drawn from a discrete Gaussian distribution of sufficiently high standard deviation. While sufficient for proving the security of certain cryptosystems, unlike the more general leftover hash lemma, the statement of the regularity lemma of [25] implies nothing about uniformity of a l+1 in the case that x is a high min-entropy input from another distribution.
The ring setting.
Consider the number field K = Q[x]/Φm(x), where Φm(x) is the m-th cyclotomic polynomial of degree φ(m). The ring of integers, R ⊂ K, is defined as R = Z[x]/Φm(x). Rq := Zq[x]/Φm(x) denotes the set of polynomials obtained by taking an element of Z[x]/Φm(x) and reducing each coefficient modulo q. In this paper, we further assume that m is a power of two, so Φm(x) = x n +1 has degree n = m/2, and set q to be a prime such that q ≡ 1 mod m. In this case Φm(x) completely splits into n factors in Zq[x]. This is the setting favored in practice since it allows for optimizations in the implementation, such as fast arithmetic over the ring Rq.
A Ring Analogue of the LHL.
For rings Rq such as the above, a result analogous to the leftover hash lemma-proving that a l+1 = ∑︀ i∈ [l] a i x i is indistinguishable from random, given a 2 , . . . , a l , as long as x 1 , . . . , x l has sufficiently high min-entropyis impossible. For example, if the j-th NTT coordinate of each ring element in x = x 1 , . . . , x l is leaked, then the j-th NTT coordinate of a l+1 = ∑︀ i∈ [l] a i x i is known², and so a l+1 is very far from uniform. Yet this is only a 1/n leakage rate!³ Nevertheless, Lyubashevsky et al. [25,26] proved a "regularity lemma" showing that for matrix A = [I k |Ā] ∈ (Rq) k×l , where I k ∈ (Rq) k×k is the identity matrix andĀ ∈ (Rq) k×(l−k) is uniformly random, and x chosen from a discrete Gaussian distribution (centered at 0) over R l q , the distribution over Ax is (close to) uniform random. A similar result was proven by Micciancio [28], but requires super-constant dimension l, thus yielding non-compact cryptosystems. In contrast, the regularity lemma of [25] holds even for constant dimension l as small as 2. The fundamental technical question we consider in this work is: For which distributions D over x ∈ R l q , is the distribution over Ax (close to) uniform random, for R, q, A as above and constant l?
Our Results
We prove a "regularity lemma" for three conditional distributions, which we describe next. Only the parameter s-the standard deviation of the discrete Gaussian for sampling each coordinate of x-differs in each setting.
Conditional Distribution I.
We assume a secret key x = (x 1 , . . . , x l ), where each x i ∈ Rq. Moreover, each x i itself is represented as an ndimensional vector. So in total, x is an l · n-dimensional vector. We consider the conditional distribution on x when the sum of x and e is revealed, where each coordinate of e is a Gaussian random variable with standard deviation at least s. This setting captures leakage on x by an adversary who uses a fast, but inaccurate device to obtain noisy measurements of each sampled coordinate of the secret key (e.g. through a power or timing channel). We prove that it is sufficient to set s ≥ √ 2 · 2n · q k/l+2/ (nl) . See Theorem 2.1 and Corollary 2.2.
2 Applying NTT to a i , x i ∈ Rq-resulting in n-dimensional vectors,̂︀ a i ,̂︀ x i ∈ Z n q -allows for component-wise multiplication/addition, so the j-th NTT coordinate of a i x i , i ∈ [l] will be known and so the j-th NTT coordinate of a l+1 is known. 3 We thank an anonymous reviewer for pointing out this counterexample to us.
Conditional Distribution II.
We consider the conditional distribution over x = (x 1 , . . . , x l ) when we leak ℓ coordinates from each x i , i ∈ [l]. and we set parameters such that the fraction of leaked coordinates-ℓ·l n·l -is constant. The ℓ leaked coordinates are arbitrary, but the same ℓ coordinates must be leaked from each x i , i ∈ [l].⁴ Low noise is added to each leaked coordinate (only 2n standard deviation, as opposed to √ 2 · 2n · q k/l+2/(nl) standard deviation as in Conditional Distribution I). No information at all is leaked about the remaining coordinates. This setting corresponds to a side-channel attack launched during the sampling of x, where the attacker has a slower, but more accurate device which allows it to obtain more accurate measurements for a constant fraction of the coordinates of the secret key, but no information for the remaining coordinates. ⁵ We prove that it is sufficient to set s ≥ 2n · q kn+2 l(n−ℓ) , where ℓ · l is the number of leaked coordinates. See Theorem 2.3 and Corollary 2.6.
Conditional Distribution III.
Here, we consider the conditional distribution on x, when the magnitude of x with Gaussian channel error e is revealed (note that e is a scalar). We assume e is sampled from a univariate Gaussian with standard deviation s. A motivation for this type of leakage is that (discrete) Gaussian sampling of x is often implemented via rejection sampling in practice [7,12]. E.g. a vector could be sampled from a "close" multi-dimensional binomial distribution and rejection sampling then used to obtain a sample from the correct distribution. The rejection condition depends on the weight of x under the target distribution, which in turn depends on the magnitude of x, and so this information is vulnerable to leakage during computation. ⁶ We prove that it is sufficient to set s ≥ √︀ 14/5 · (n ′ /n) · ln n ′ · 2n · q k/l+2/(nl) , where n ′ = n · l + 1. See Theorem 2.9 and Corollary 2.10.
Applications to leakage resilience.
Since applications of the LHL/Regularity Lemma in lattice-based cryptography are widespread, a number of Ring-LWE (RLWE) cryptosystems achieve certain leakage resilience properties using our results. Such cryptosystems include the ring analogues of Regev encryption [24], Dual-Regev encryption [25], and identitybased encryption (IBE) based on Dual-Regev encryption [19] (see ring version in [3]). Specifically, by substituting our "regularity lemma" for the original "regularity lemma" in the security proofs, those schemes still enjoy security guarantees even given certain leakage on the randomness for encryption (for Regev) the secret key (for Dual-Regev), and the secret key corresponding to the challenge identity (for IBE).
Our High-Level Approach
] is uniform random (over cosets of Λ ⊥ (A)), then the distribution of Ax is also uniform random over cosets of (qR) k . The input/output distributions can then be discretized over the ring R. Therefore, the goal is to show that when x is sampled from continuous distribution D, we have that [x mod Λ ⊥ (A)] is uniform random. Consider the case where the distribution D is exactly a Gaussian distribution with mean 0 and standard deviation s. In this case, if s is greater than or equal to the smoothing parameter of Λ ⊥ (A), this by definition ensures that the distribution [x mod Λ ⊥ (A)] is uniform random. Thus, [25] prove their regularity lemma by showing that with high probability over choice of A, the smoothing parameter, ηε(Λ ⊥ (A)), is upperbounded by s.
Before presenting our approach to extending the above result, it is instructive to give a high-level recap of how to derive upper bounds on the smoothing parameter.
Let ρs := e −π ⟨x,x⟩ s 2 and let ψs (the normalization of ρs) correspond to the probability density function (PDF) of the normalized n-dimensional Gaussian distribution with mean 0 and standard deviation s. In the following, for a function f we concisely represent ∑︀ v∈Λ f (v) by f (Λ). To show that the distribution over [x mod Λ] is (close to) uniform when x is sampled from a distribution with PDF ψs, one needs to show that for every coset (Λ + c) of the lattice, ψs(Λ + c) ≈ 1 det(Λ) . Focusing on the zero coset, where c = 0, we can prove this using the Poisson summation formula, which says that for any lattice Λ and integrable function ρs: where for a function f ,̂︀ f denotes the n-dimensional Fourier transform of f and Λ ∨ is the dual lattice of Λ (see Appendix A.2). It remains to show that̂︁ ψs(Λ ∨ ) is close to 1 (i.e. is upperbounded by 1 + ε).
The proof approach outlined above can be applied to (integrable) normalized PDF Ψ that are not Gaussians centered at 0: To show that the distribution over [x mod Λ] is (close to) uniform when x is sampled from a distribution with PDF Ψ, it is sufficient to show that̂︀ Ψ(Λ ∨ ) is upperbounded by 1 + ε.
In this work, we consider PDF's, Ψ, that correspond to the PDF of x, from the point of view of the adversary, given the leakage. The technical contribution of this work is to show that, for each conditional distribution, (with overwhelming probability over choice ofĀ)̂︀ Ψ(Λ ⊥ (A) ∨ ) is close to 1. Specifically, for each distribution, our approach requires: (1) Determining the PDF Ψ, (2) Computing (an upper bound for) the multi-dimensional
Related Work
Leakage-resilient cryptography. There is a significant body of work on leakage-resilient cryptographic primitives, beginning with the work of Dziembowski and Pietrzak [16] on leakage-resilient stream-ciphers. Other constructions include [1,5,6,14,22,22,23,23,27,30,31]. With the exception of [1], most of these results construct new cryptosystems from the bottom up. In our work, we consider whether we can prove that an existing cryptosystem enjoys leakage resilience, without modification of the scheme.
Lattice-based & leakage-resilient cryptography.
Goldwasser et al. [20] initiated the study of leakage resilience of lattice based cryptosystems. This was followed by series of works [1,13,15], all these papers however study leakage resilience of schemes based on standard LWE problem in both symmetric as well as public key setting.
Robustness of Ring-LWE
To the best of our knowledge the ePrint version [10] of this work is the first effort to study the robustness of RLWE based cryptosystems under leakage. Subsequent to the publishing of ePrint [10], interest has sparked in analyzing the RLWE-based schemes and their leakage resilience. Albrecht et al. [2] implemented cold boot attack on RLWE based KEM schemes and compared the number of operations required to mount the attack when secret is stored with different encodings. Recently, Bolboceanu et al. [4] studied the hardness of RLWE problem in cases where the secret is sampled from distributions other than uniform random distribution over the ring. In [11], it is shown that under specific structured leakage on the NTT encoding of secret key, it is possible to recover the entire secret key given multiple RLWE samples and they implement the attack to recover the secret in real world parameter settings. Stehlé and Steinfeld [34] studied the leftover hash lemma in the ring setting for power of 2 cyclotomics and Rosca et al. [33] generalized their result to non-cyclotomic rings. However, both these results study the case where input is sampled from discrete Gaussian distribution.
Extending the Regularity Lemma
For a positive integer n, we denote by [n] the set {1, . . . , n}. We denote vectors in boldface x and matrices using capital letters A. For vector x over R n or C n , define the ℓ 2 norm as ‖x‖ 2 = ( ∑︀ i |x i | 2 ) 1/2 . We write as ‖x‖ for simplicity. Background and standard definitions related to lattices and algebraic number theory are in Appendix A. Our results are applicable when R is the ring of integers in the m th cyclotomic number field K of degree n, m = 2n is a power of 2 and prime q is s.t. q ≡ 1 mod m. We denote by I k ∈ (Rq) k×k the identity matrix.
Conditional Distribution I
Recall that x = (x 1 , . . . , x l ), where each coordinate of each x i ∈ Rq is sampled from a discrete Gaussian with standard deviation s and each x i is represented as a vector in either the polynomial or canonical basis.⁷ We assume leakage of all coordinates, with Gaussian noise of standard deviation v = τ · s added. It turns out that this conditional distribution is fairly simple to handle since if X and Y are independent Gaussian random variables, then the distribution of X conditioned on X + Y is also a Gaussian that is not centered at 0. Fortunately, the regularity lemma of [26] straightforwardly extends to Gaussians that are not centered at 0. We discuss formal details next, however, we mainly view Conditional Distribution I as a warm-up to the more is uniformly random. Then for all σ ≥ 2n · q k/l+2/ (nl) and c ∈ R n·l then︂ ρσ,c except with probability at most 2 −Ω(n) over choice ofĀ.
Proof. The theorem follows from Lemma B.7 and the regularity lemma from [26].
The following corollary follows from Lemmas B.12 and B.13 and Theorem 2.1.
Corollary 2.2.
Let R, n, q, k, l, c, σ be as in Theorem 2.1. Assume that A = [I k |Ā] ∈ (Rq) k×l is chosen as in Theorem 2.1. Then, with probability 1 − 2 −Ω(n) over the choice ofĀ, the distribution of Ax ∈ R k q , where x ∈ R l is chosen from D Λ,σ,c , the discrete Gaussian probability distribution over R l with parameter σ and center c, satisfies that the probability of each of the q nk possible outcomes is in the interval (1 ± 2 −Ω(n) )q −nk (and in particular is within statistical distance 2 −Ω(n) of the uniform distribution over R k q ).
Conditional Distribution II
Recall that x = (x 1 , . . . , x l ), where each x i ∈ Rq and each x i is represented as a vector in the canonical embedding. We assume leakage of ℓ coordinates-with low noise added-of each x i for i ∈ [l] and restrict the coordinates leaked across each x i to be the same. Let S ⊆ [n], where |S| = ℓ denote the set of positions (from each x i ) that are leaked. Lemma D.1 shows that, conditioned on leakage, each component (resp. 0), and variance σ 2 j ≥ 4n 2 (resp. is uniformly random. Let σ := (σ 1 , . . . , σn) ∈ R n >0 and c := (c 1 , . . . , c ln ) ∈ R ln be vectors, where ℓ positions in σ are set to 2n, and all others are set to s. Let k, l, ℓ be such that l − k − l · ℓ/n > 0 and l − k − 1 ≥ 1, and let s ≥ 2n · q For proving Theorem 2.3, we begin with exposition on the forms of the Ideals qR ∨ ⊆ J ⊆ R ∨ in power-of-two cyclotomics as well as some lemmas.
Thus, the number of ideals I such that qR ⊆ I ⊆ R (and hence also the number of ideals J ∈ T) is exactly 2 n . Moreover, note that for each ideal J ∈ T, Thus, we see that for each J ∈ T, 1 ≤ |J/qR ∨ | ≤ q n . Let T 1 denote the set of ideals J ∈ T such that |J/qR ∨ | < 2 n . Let T 2 denote the set of ideals J such that |J/qR ∨ | ≥ 2 n . Furthermore, let T 1 2 be the set of J ∈ T 2 such that s ≥ η 2 −2n (( 1 q J) ∨ ) (where η 2 −2n denotes the smoothing parameter and s is fixed as above). Let T 2 2 := T 2 \ T 1 2 . Let σ := (σ 1 , . . . , σn) ∈ R n >0 be a vector with ℓ positions are set to 2n, while the other positions are set to value s.
The proof of Lemma 2.4 can be found in Appendix E.1.
The proof of Lemma 2.5 can be found in Appendix E.1. We now conclude the proof of Theorem 2.3.
Proof of Theorem 2.3. Since by Lemma B.7 we have that for any (n · l)-dimensional vectors, c, x and any ndimensional vector σ = (σ 1 , . . . , σn):̂︂ then following the proof of [26] step-by-step, it is sufficient to show that We will show that and that To show (2), note that by Lemma 2.4, for ideals J ∈ T 1 (we have that On the other hand, by definition of T 2 2 , for ideals J ∈ T 2 2 , we have that Combining the above, we get that for J ∈ T 1 ∪ T 2 2 , Similarly to [26], using the lower bound of s from Theorem 2.3, we bound Moreover, by Lemma 2.5 and the fact that |T 1 2 | ≤ |T| = 2 n , we can bound where the last line follows from the setting of parameters in Theorem 2.3. This completes the proof.
The following corollary follows from Lemmas B.12 and B.13 and Theorem 2.3.
Corollary 2.6. Let k, l, ℓ, σ and c be as in Theorem 2.3. Assume that A = [I k |Ā] ∈ (Rq) k×l is chosen as in Theorem 2.3. Then, with probability 1 − 2 −Ω(n) over the choice ofĀ, the distribution of Ax ∈ R k q , where x ∈ R l is chosen from D R l ,σ l ,c , the discrete Gaussian probability distribution over R l with parameter σ l and center c, satisfies that the probability of each of the q nk possible outcomes is in the interval (1 ± 2 −Ω(n) )q −nk (and in particular is within statistical distance 2 −Ω(n) of the uniform distribution over R k q ).
In particular, this means that the standard deviation used to sample x should be increased from 2n · q k/l+2/(nl) (as in [26]) to 2n · q kn+2 l(n−ℓ) .
Conditional Distribution III
We slightly change the dimensions so that x is represented by a vector of dimension n ′ := l · n + 1. When n is a power of two, a spherical Gaussian in the coefficient representation is also a spherical Gaussian in the canonical embedding representation [24]. So we can assume that x is generated using the coefficient representation, where each coordinate is sampled independently from a discrete Gaussian, D Z,s ′ . During sampling of x, an additional coordinate is sampled and stored together with the remainder of the secret. We compute the PDF corresponding to the conditional distribution on x, given z = |r + e|, where r = ‖x‖ as: where N is the normalization factor. For details on how the PDF is computed, is the sum of two Gaussian functions centered at zs 2 v 2 +s 2 and − zs 2 v 2 +s 2 respectively with the same standard deviation σ.
where the probability is taken over choice of x and e.
The proof is found in Appendix E.2.
By Lemma 2.7, we have that with all but negligible probability, c : For the proof, we will require certain properties of the Fourier transform of Ψσ,c, when c is bounded as above. We state those properties in the following theorem, which is proved in Appendix C. , where x is a vector over n ′ dimensions. and let̂︂ Ψσ,c(y) denote the n ′ -dimensional Fourier transform of Ψσ,c. Then We next present the main theorem of this section.
and Z, written as Λ ⊥ (A) Proof. Note that Λ ⊥ (A) is a lattice of even dimension l · n (where n is a power of two), but Theorem 2.8 holds only for n ′ equal to l · 2 a + 1. Therefore, we define n ′ := l · n + 1, and we have the n ′ -dimensional lattice We have the following properties of Λ ⊥ (A) + , which can be verified by inspection: By Poisson summation formula, it is sufficient to show that with probability 1 − 2 −Ω(n) over choice of A, |̂︂ Ψσ,c|(Λ ⊥ (A) + ) ∨ ) ≤ 1 + 2 −Ω(n) , wherê︂ Ψσ,c denotes the Fourier transform of Ψσ,c over n ′ dimensions and the notation |̂︂ Ψσ,c| means the summation of the absolute value of the function over the lattice Λ ⊥ (A) + ) ∨ .
The proof appears in Appendix E.2. Given the corollary, the analysis of Conditional Distribution III is complete. In particular, this means that the standard deviation used to sample x should be increased from 2n · q k/l+2/(nl) (as in [26]) to √︁ 1+τ 2 τ 2 · 2n · q k/l+2/(nl) .
Conclusions and Future Directions
In this work, we present a general approach for analyzing the leakage resilience of RLWE-based cryptosystems, by determining and analyzing the explicit PDF resulting from the conditional distribution of the RLWE secret given the leakage. Our approach can be used to provide a security analysis for existing cryptosystems in the presence of leakage, with appropriate choice of parameters (and without any modifications to the scheme). We instantiate our approach by considering three leakage settings and corresponding conditional distributions I, II and III.
A key technical tool in the analysis of conditional distribution II is extending the regularity lemma of [25]; to cases where x is drawn from a non-spherical Gaussian with standard deviation significantly smaller than the smoothing parameter in a constant fraction of the dimensions and larger than the smoothing parameter in the remaining dimensions. In the analysis of conditional distribution III we find applications of the Radial Fourier Transform to lattice-based cryptography.
Future Directions.
We believe that our approach of generalizing the regularity lemma to conditional distributions can be used as an important tool in the security analysis of RLWE-based cryptosystems. In future work, we plan to extend our analysis to other conditional distributions, with implications for other leakage settings. A first candidate is generalizing conditional distribution II to (certain types of) multivariate Gaussians with covariance matrices that are not diagonal. Such a generalization would allow us to capture leakage of coordinates in the polynomial instead of canonical representation.
A.1 Notation
For a positive integer n, we denote by [n] the set {1, . . . , n}. We denote vectors in boldface x and matrices using capital letters A. For vector x over R n or C n , define the ℓ 2 norm as ‖x‖ 2 = ( ∑︀ i |x i | 2 ) 1/2 . We write as ‖x‖ for simplicity.
A.2 Lattices and background
Let T = R/Z denote the cycle, i.e. the additive group of reals modulo 1. We also denote by Tq its cyclic subgroup of order q, i.e., the subgroup given by {0, 1/q, . . . , (q − 1)/q}. Let H be a subspace, defined as H ⊆ C Z * m , (for some integer m ≥ 2), A lattice is a discrete additive subgroup of H. We exclusively consider the full-rank lattices, which are generated as the set of all linear integer combinations of some set of n linearly independent basis vectors The determinant of a lattice L(B) is defined as |det(B)|, which is independent of the choice of basis B. The minimum distance λ 1 (Λ) of a lattice Λ (in the Euclidean norm) is the length of a shortest nonzero lattice vector.
The dual lattice of Λ ⊂ H is defined as following, where ⟨·, ·⟩ denotes the inner product.
Discretization
Discretization is an important procedure used in applications based on lattices, such as converting continuous Gaussian distribution (defined in Appendix B) into a discrete Gaussian distribution (Definition B.9). Given a lattice Λ = L(B) represented by some "good" basis B = {b i }, a point x ∈ H, and a point c ∈ H representing a lattice coset Λ + c, the discretization process outputs a point y ∈ Λ + c such that the length of y − x is not too large. This is denoted as y ← ⌊x⌉ Λ+c . A discretization procedure is called valid if it is efficient; and depends only on the lattice coset Λ + (c − x), not on particular representative used to specify it. Note that for a valid discretization, ⌊z + x⌉ Λ+c and z + ⌊x⌉ Λ+c are identically distributed for any z ∈ Λ. For more details and actual description of algorithms used for discretization we refer the interested reader to [26].
A.3 Algebraic Number Theory
For a positive integer m, the m th cyclotomic number field is a field extension K = Q(ζm) obtained by adjoining an element ζm of order m (i.e. a primitive m th root of unity) to the rationals. The minimal polynomial of ζm is the m th cyclotomic polynomial where ωm ∈ C is any primitive m th root of unity in C.
For every i ∈ Z * m , there is an embedding σ i : K → C, defined as σ i (ζm) = ω i m . Let n = φ(m), the totient of m. The trace Tr : K → Q and norm N : K → Q can be defined as the sum and product, respectively, of the embeddings: Tr For any x ∈ K, the lp norm of x is defined as ‖x‖p = ‖σ(x)‖p = ( ∑︀ i∈[n] |σ i (x)| p ) 1/p . We omit p when p = 2. Note that the appropriate notion of norm ‖·‖ is used throughout this paper depending on whether the argument is a vector over C n , or whether the argument is an element from K; whenever the context is clear.
A.4 Ring of Integers and Its Ideals
Let R ⊂ K denote the set of all algebraic integers in a number field K. This set forms a ring (under the usual addition and multiplication operations in K), called the ring of integers of K. Ring of integers in K is written The (absolute) discriminant ∆ K of K measures the geometric sparsity of its ring of integers. The discriminant of the m th cyclotomic number field K is in which the product in denominator runs over all the primes dividing m.
An (integral) ideal I ⊆ R is a non-trivial (i.e. I ≠ ∅ and I ≠ {0}) additive subgroup that is closed under multiplication by R, i,e., r · a ∈ I for any r ∈ R and a ∈ I. The norm of an ideal I ⊆ R is the number of cosets of I as an addictive subgroup in R, defined as index of I, i.e., N(I) = |R/I|. Note that N(IJ) = N(I)N(J).
A fractional ideal I in K is defined as a subset such that I ⊆ R is an integral ideal for some nonzero d ∈ R. Its norm is defined as N(I) = N(dI)/N(d). An ideal lattice is a lattice σ(I) embedded from a fractional ideal I by σ in H. The determinant of an ideal lattice σ(I) is det(σ(I)) = N(I) · √︀ ∆ K . For simplicity, however, most often when discussing about ideal lattice, we omit mention of σ since no confusion is likely to arise. For any fractional ideal I in K, its dual ideal is defined as where p runs over all odd primes dividing m. Also, define t =m g ∈ R, wherem = m 2 if m is even, otherwisem = m. Rq × (K R /qR ∨ ), outputs a pair (a = pa ′ mod qR, b) ∈ Rq × R ∨ q with the following guarantees: if the input pair is uniformly distributed then so is the output pair; and if the input pair is distributed according to the RLWE distribution A s,ψ for some (unknown) s ∈ R ∨ and distribution ψ over K R , then the output pair is distributed according to As,χ, where χ = ⌊p · ψ⌉ w+pR ∨ .
Lemma A.5. [26, Lemma 2.24]
Let p and q be positive coprime integers, ⌊·⌉ be a valid discretization to (cosets of) pR ∨ , and w be an arbitrary element in R ∨ p . If R-DLWE q,ψ is hard given l samples, then so is the variant of R-DLWE q,ψ in which the secret is sampled from χ := ⌊p · ψ⌉ w+pR ∨ , given l − 1 samples.
B Regularity and Fourier Transforms
Let ρs,c denote an n-dimensional Gaussian function with standard deviation s and mean c.
Definition B.1 (Fourier Transform).
Given an integrable function f : R n → C, we denote bŷ︀ f : R n → C the Fourier transform of f , defined aŝ︀
Theorem B.2 (Poisson Summation Formula).
:Let Λ ⊂ R n be an arbitrary lattice of dimension n, and let f : R n → C be an appropriate function ⁸ Then where Λ ∨ is the dual lattice of Λ and̂︀ f is a Fourier transform of f .
The following is a modified version of Lemma 3.8 from [32]. Proof. First, since Ψ is a pdf, we have that̂︀ Ψ(0) = 1. We have: where the equality follows from properties of the Fourier transform.
The proof of the following lemma proceeds as the proof of Corollary 2.8 in [19].
Lemma B.13. Let Λ ′ be an n-dimensional lattice and Ψ a probability distribution over R n . Assume that for all c ∈ R n it is the case that Let Λ be an n-dimensional lattice such that Λ ′ ⊆ Λ then the distribution of (D Λ,Ψ mod Λ ′ ) is within statistical distance of at most 4ε of uniform over (Λ mod Λ ′ ).
Definition B.14. For a matrix A ∈ R k×l q we define Λ ⊥ (A) = {z ∈ R l : Az = 0 mod qR}, which we identify with a lattice in H l . Its dual lattice (which is again a lattice in H l ) is denoted by Λ ⊥ (A) ∨ .
Theorem B.15. [26] Let R be the ring of integers in the m th cyclotomic number field K of degree n, and q ≥ 2 an integer. For positive integers k
is uniformly random. Then for all s ≥ 2n, In particular, if s > 2n · q k/l+2/(nl) then EĀ , and so by Markov's inequality, The following corollary was presented in [26].
Corollary B.16. Let R, n, q, k and l be as in Theorem B.15. Assume that A = [I k |Ā] ∈ (Rq) k×l is chosen as in Theorem B.15. Then, with probability 1 − 2 −Ω(n) over the choice ofĀ, the distribution of Ax ∈ R k q , where each coordinate of x ∈ R l q is chosen from a discrete Gaussian distribution of parameter s > 2n · q k/l+2/(nl) over R, satisfies that the probability of each of the q nk possible outcomes is in the interval (1 ± 2 −Ω(n) )q −nk (and in particular is within statistical distance 2 −Ω(n) of the uniform distribution over R k q ).
We next state an additional corollary of the regularity theorem from [26].
C Proof of Theorem 2.8
In this section, we prove the following theorem, which provides an upper bound on the Fourier transform of a pdf for the analysis of Conditional Distribution III in Section 2.3. , where x is a vector over n ′ dimensions. and let̂︂ Ψσ,c(y) denote the n ′ -dimensional Fourier transform of Ψσ,c. Then |̂︂ Ψσ,c(y)| ≤ n ′ n ′ · e −π‖y‖ 2 σ 2 for ‖y‖ > 1/σ.
The following lemma computes a lower bound of the normalization factor of the pdf in Theorem 2.8. Once we prove the lemma, we proceed to the proof of Theorem 2.8.
. Let r = ‖x‖. Since f is a radial function, we slightly abuse notation and denote by f (r) := e − π(r−c) 2 . Now, we have that where V n ′ denotes the volume of n ′ -dimensional ball V n ′ = π n ′ /2 Γ(1+n ′ /2) . Since f is an even function and n ′ is odd, so r n ′ −1 is an even function, we have that r n ′ −1 f (r) is even and so Let a = π/σ 2 . Since n ′ is odd, we now have that Combining the above with (C1) and (C2) and substituting for a, we get that ∫︀ R n ′ f (x) dx ≥ σ n ′ , which completes the proof of the lemma.
Let r := ‖x‖, we slightly abuse notation and view f as a function of r, f (r) := e − π(r−c) 2 Ψσ,c is a radial function, so is its Fourier transform, thus, we again slightly abuse notation and view F :=̂︀ f as a function of κ := ‖y‖. We may now use the formula for the radial Fourier transform of an n ′ -dimensional, radial function f to find F [21]: where the [x] means the largest integer not exceeding x. We now have: where the first equality follows from (C3), the second equality follows from (C5), (C6) and the settings of .
In order to bound (C7), we will individually upper bound I: and II: where the second equality follows since f (r) is an even function, cos(2πκr) is an even function and for n ′ = l · 2 a + 1, all powers of r in the integrand are even, which means that the entire integrand is an even function.
To compute an upper bound on as above, we integrate each term separately. Since the analysis is essentially the same for each term, we focus on upper bounding the term A := Thus, we have that Plugging the above back into (C8), and recalling that |c ′ j | = , we have that Where the last inequality follows since (︀ n i )︀ ≤ 2 n and n! ≤ n n . We now turn to upper-bounding I. Recalling that , we have that where the second equality follows since f (r) is an even function, sin(2πκr) is an odd function and for n ′ = l · 2 a + 1, all powers of r in the integrand are odd, which means that the entire integrand is an even function.
To compute an upper bound on as above, we integrate each term separately. Since the analysis is essentially the same for each term, we focus on the term B := Thus, we have that Plugging the above back into (C10), and recalling that |c , we have that Proof.
where (E1) follows from Lemma B.4, (E2) follows from Lemma A.1, and (E3) follows from the fact that Proof. Recall that σ := (σ 1 , . . . , σn) ∈ R n >0 is defined as a vector such that ℓ positions are set to 2n, while the other positions are set to s. Define z 1 , . . . , zn in the following way: Applying Poisson summation twice we arrive at: where (E6) follows from definitions of ρ and z i . To derive (E7), let us first introduce the following claim. , when σ i = 2n. Since there are ℓ positions in σ when σ i = 2n, we obtain (E7). Finally (E8) follows by definition of smoothing parameter η 2 −2n (( 1 q J) ∨ ). Now, using the fact that η 2 −2n ≤ (∆ K |J/qR ∨ |) 1/n , the fact that ∆ K = n n and the fact that |J/qR ∨ | ≥ 2 n , and the set of parameters, we have that which completes the proof of the lemma.
E.2 Additional Proofs in Conditional Distribution III
Recall that a generic PDF of one dimensional Gaussian distribution is defined as: where r is the magnitude of x. It also can be viewed as probability density function of secret key for its magnitude ‖X‖ = r, denoted as ψs(‖X‖ = r). The error is sampled from a 1-dimensional Gaussian distribution with center at 0. We write probability density function of error E at value y is Let F Z|A (f (Z) = b) generically represent the probability density function of random variable Z at value b of f (Z), conditioned on event A.
We now derive the density function of secret key X given the value z of |‖X‖ + E|. The weight placed on a value x = (x 1 , . . . , x n ′ ) by the conditional distribution depends only on the magnitude of x (i.e. r = ‖x‖) and can be computed as: Proof. Using union bound, we have Note that since s > n, and using the fact that λ 1 ((R l × Z) ∨ ) ≥ λ 1 (R ∨ ) ≥ Corollary 2.10. Let k, l, σ and c be as in Theorem 2.9. Assume that A = [I k |Ā] ∈ (Rq) k×l is chosen as in Theorem 2.9. Then, with probability 1 − 2 −Ω(n) over the choice ofĀ, the distribution of Ax ∈ R k q , where (x, x n ′ ) ∈ R l × Z is chosen from D R l ×Z,Ψσ,c satisfies that the probability of each of the q nk possible outcomes is in the interval (1±2 −Ω(n) )q −nk (and in particular is within statistical distance 2 −Ω(n) of the uniform distribution over R k q ). | 2020-11-24T14:15:41.756Z | 2020-11-17T00:00:00.000 | {
"year": 2020,
"sha1": "d0e89668a34e69bc389f1c4e65513e3f64ae3ebe",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/jmc-2020-0076/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a1aaf3b817df1aafed4c54a0af34ce3e69364a18",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
5792526 | pes2o/s2orc | v3-fos-license | HSP60 plays a regulatory role in IL-1β-induced microglial inflammation via TLR4-p38 MAPK axis
Background IL-1β, also known as “the master regulator of inflammation”, is a potent pro-inflammatory cytokine secreted by activated microglia in response to pathogenic invasions or neurodegeneration. It initiates a vicious cycle of inflammation and orchestrates various molecular mechanisms involved in neuroinflammation. The role of IL-1β has been extensively studied in neurodegenerative disorders; however, molecular mechanisms underlying inflammation induced by IL-1β are still poorly understood. The objective of our study is the comprehensive identification of molecular circuitry involved in IL-1β-induced inflammation in microglia through protein profiling. Methods To achieve our aim, we performed the proteomic analysis of N9 microglial cells with and without IL-1β treatment at different time points. Expression of HSP60 in response to IL-1β administration was checked by quantitative real-time PCR, immunoblotting, and immunofluorescence. Interaction of HSP60 with TLR4 was determined by co-immunoprecipitation. Inhibition of TLR4 was done using TLR4 inhibitor to reveal its effect on IL-1β-induced inflammation. Further, effect of HSP60 knockdown and overexpression were assessed on the inflammation in microglia. Specific MAPK inhibitors were used to reveal the downstream MAPK exclusively involved in HSP60-induced inflammation in microglia. Results Total 21 proteins were found to be differentially expressed in response to IL-1β treatment in N9 microglial cells. In silico analysis of these proteins revealed unfolded protein response as one of the most significant molecular functions, and HSP60 turned out to be a key hub molecule. IL-1β induced the expression as well as secretion of HSP60 in extracellular milieu during inflammation of N9 cells. Secreted HSP60 binds to TLR4 and inhibition of TLR4 suppressed IL-1β-induced inflammation to a significant extent. Our knockdown and overexpression studies demonstrated that HSP60 increases the phosphorylation of ERK, JNK, and p38 MAPKs in N9 cells during inflammation. Specific inhibition of p38 by inhibitors suppressed HSP60-induced inflammation, thus pointed towards the major role of p38 MAPK rather than ERK1/2 and JNK in HSP60-induced inflammation. Furthermore, silencing of upstream modulator of p38, i.e., MEK3/6 also reduced HSP60-induced inflammation. Conclusions IL-1β induces expression of HSP60 in N9 microglial cells that further augments inflammation via TLR4-p38 MAPK axis. Electronic supplementary material The online version of this article (doi:10.1186/s12974-016-0486-x) contains supplementary material, which is available to authorized users.
Background
Neuroinflammation being the first line of defense of the central nervous system (CNS) provides innate immunity to the brain and spinal cord. It can be evoked by various factors ranging from bacterial infections to neurodegenerative disorders that mediate acute and chronic inflammations, respectively [1][2][3]. In addition, it may also be caused by an autoimmune response such as multiple sclerosis or in response to toxins and nerve agents [4,5]. Inflammation in the CNS, however, acts as a doubleedged sword, as on one hand, it serves to protect the CNS from infection and neuronal injury but on the other hand, an exaggerated inflammatory process may lead to further neurodegeneration and neuronal loss [6].
Among the various factors secreted by activated microglia, IL-1β is a prominent pro-inflammatory cytokine which plays a crucial role in the progression of chronic neurodegenerative diseases as well as acute neuroinflammatory conditions [18][19][20]. Once secreted by the activated microglia and astrocytes [21], it can further stimulate its own production in an autocrine and/or, paracrine fashion by binding to its cognate IL-1 receptors (IL-1Rs) [21,22], this leads to a constitutive expression of IL-1β which further amplifies the inflammatory signal. After binding, it can upregulate the production of other pro-inflammatory cytokines, prostaglandins, and other toxic mediators like ROS, by starting a vicious cycle of biochemical pathways, and is therefore, considered as the "master regulator of inflammation" [23,24]. However, the molecular signaling underlying IL-1βinduced inflammation during microglial activation is not fully understood.
Heat shock proteins (HSPs), represent a collection of highly conserved proteins constitutively expressed in most cells under cellular stress conditions like, nutrient deprivation or mechanical damage and are considered as endogenous danger signals to the immune system [25,26]. One of the important mitochondrial molecular chaperones is HSP60 which contributes to the proper folding of the proteins and restoration of the tertiary structure of the misfolded or denatured proteins [27]. Interestingly, HSP60 has been reported to play immunomodulatory role in case of various infections [28][29][30]. In addition, several studies suggest that HSP60 serves as an endogenous signal of injury in the CNS by activating microglia after its release from injured neurons and by binding to toll-like receptor 4 (TLR4) in a myeloid differentiation factor 88 (Myd88) dependent pathway [31,32]. Intrathecal HSP60 mediates neurodegeneration and demyelination through a TLR4-Myd88 dependent pathway [33]. Despite its chaperone activities, HSP60 can also appear in extracellular milieu where it elicits a potent pro-inflammatory response in the peripheral immune system [34]. Besides its chaperone and immunomodulatory roles, the function of HSP60 in response to Il-1β-induced inflammation in microglial cells is unknown.
As understanding the mechanism of IL-1β-induced inflammation in microglia is of considerable importance in neuroinflammation biology, hence we set out to investigate molecular circuitry underlying IL-1β-induced inflammation in microglia and how HSP60 modulates this circuitry. Herein, we demonstrate that HSP60 aggravates IL-1β-induced inflammation in microglia via TLR4 receptors and MAPK signaling pathway. Our results further suggest that p38 MAPK is the major player in HSP60-induced inflammation which acts following the activation of MEK3/6.
Animal experiments
P10 (postnatal day 10) BALB/c mice of either sex were intraperitoneally (i.p.) injected with 50 μl of 10 ng/g body weight of IL-1β dissolved in 1× phosphate-buffered saline (PBS) every 24 h for different durations (1, 3, and 5 days) as described elsewhere [35], while controltreatment group received the same volume of the carrier (1× PBS). Groups of three mice were sacrificed at each time point either for protein or mRNA isolation. P0-P2 (postnatal days 0-2) BALB/c mice of either sex were procured for primary microglial culture. Animals were handled in strict accordance with good animal practice as defined by the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA) and the Ministry of Environment and Forestry, Government of India. The Institutional Animal Ethics Committee (IAEC) of the National Brain Research Centre approved the study protocol (NBRC/IAEC/2013/77 and NBRC/IAEC/2012/70).
Cell culture
Primary microglial cells were isolated from BALB/c mouse pups (postnatal days 0-2) as reported previously [36]. Briefly, the whole brain cortex was dissected from the mouse brain, and the meninges were peeled off under a dissecting microscope. Tissue was digested using trypsin-DNase I solution at 37°C, with a brief mechanical dissociation to obtain a cell suspension. The cell suspension was passed through 130-μm cell strainers, and the supernatant was centrifuged at 800 rpm for 10 min to obtain a cell pellet. Cells were seeded in 75-cm 2 tissue culture flasks at a density of 2 × 10 5 viable cells/cm 2 in complete MEM (supplemented with 10 % fetal bovine serum, 100 units/ml penicillin, 100 μg/ml streptomycin, 0.6 % glucose, and 2 mM glutamine). The exhausted media was changed every 2 days with fresh complete MEM, until the mixed glial culture became confluent. On day 12, the flasks were shaken on an Excella E25 orbital shaker (New Brunswick Scientific, NJ, USA) at 250 rpm for 90 min at 37°C to dislodge microglial cells. The non-adherent cells thus obtained were plated in bacteriological petridishes for 90 min to allow microglial cells to adhere. The adherent cells were then scraped, centrifuged, and plated in chamber slides at 8 × 10 4 viable cells/cm 2 and incubated at 37°C for further experiments.
Mouse microglial cell line N9 was a kind gift from Prof. Maria Pedroso de Lima, Center for Neuroscience and Cell Biology, University of Coimbra, Portugal. The cell lines were grown at 37°C in RPMI-1640 supplemented with 10 % fetal bovine serum, 100 units/ml penicillin, and 100 μg/ml streptomycin. IL-1β treatment was given to N9 cells at a dose of 5 ng/ml at different time points (3,6, and 12 h) in vitro. All the reagents related to cell culture were obtained from Sigma-Aldrich, St. Louis, USA, unless otherwise stated.
Knockdown and overexpression studies
Knockdown studies were performed using endonucleaseprepared short interfering RNA (esiRNA) against mouse HSP60 (EMU151751) and scrambled esiRNA (enhanced green fluorescent protein (eGFP)) (sense, 5′-GTG AGC AAG GGC GAGGAG CTG TTC ACC GGG GTG GTG CCC ATC CTG GTC GAG CTG GA-3′) and were purchased from Sigma-Aldrich. A total of 6 pM HSP60 or 8 pM MEK3/6 esiRNA were used for transfection using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. After 24 h of transfection, cells were further treated with IL-1β for 3 h and processed for immunoblotting and cytokine bead array. Overexpression of HSP60 in N9 cells was achieved by transfection of mouse HSP60 plasmid clone (MC206740, OriGene) in 60 mm 2 plates using lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. The media was changed after 6 h of transfection, and cells were further kept for 24 h to allow overexpression of the cloned HSP60 gene. The control cells were transfected with pCMV6 empty plasmid vector.
Proteomic profiling
Sample preparation and two-dimensional gel electrophoresis (2-DE) 2-DE was performed as described earlier [37]. Untreated control and treated N9 cells were lysed in buffer containing 8 M urea, 2 % (w/v) CHAPS, 0.2 % sodium orthovanadate, and protease inhibitor cocktail (Sigma-Aldrich, USA). Samples were sonicated and centrifuged at 20,000g for 30 min at 4°C to remove debris. The proteins were further precipitated using trichloroacetic acid (TCA) at 4°C overnight followed by centrifugation at 20,000g at 4°C.
Protein visualization and image analysis
Protein spots were visualized by staining with Coomassie Brilliant Blue G-250, and the gel images were captured by LI-COR odyssey infra-red imager (LI-COR Biosciences, USA). Four biological replicates each with two analytical replicate (n = 8) images per dataset (untreated control versus different time points of IL-1βtreated N9 cells) were used for automatic spot detection using PD Quest 2D Analysis Software (Hercules, CA, USA). Spot intensities were normalized by total valid spot intensities and mean of values from duplicate analytical gels from four biological replicates were subjected to paired t test analysis using GraphPad Prism software. Protein spots showing altered expression between control and experimental groups (|ratio| ≥ 1.5, p ≤ 0.05) were marked and excised by use of thinwalled PCR tubes (200 μl) and appropriately cut at the bottom with a fresh surgical scalpel blade. Care was taken not to contaminate the spots with adjoining proteins or with skin keratin.
Mass spectrometry analysis and database searching
Proteins were identified by mass spectrometry (MS) using an AB Sciex MALDI TOF/TOF 5800 (AB Sciex, CA, USA) at Institute of Life Sciences, Bhubaneswar, after washing and in-gel trypsin digestion of gel spots. All MS and MS/MS spectra were simultaneously submitted to ProteinPilot software version 3.0 (Applied Biosystems) for database searching using Mascot search engine against UniprotKB-Swissprot database containing 544996 sequences with the taxonomy group of Mus musculus. Search parameters were as follows: trypsin digestion with one missed cleavage, variable modifications (oxidation of methionine and carbamidomethylation of cysteine), and the peptide mass tolerance of 100 ppm for precursor ion and mass tolerance of ±0.8 Da for fragment ion with +1 charge state. Results obtained from database search were further analyzed. Proteins from M. musculus species with significant Mowse scores and more than one unique peptide were identified and used for further study as shown in Table S1 in the Additional file 1).
Functional analysis using GeneCodis and String Software
The list of differentially expressed genes/proteins obtained after the proteomic analysis of IL-1β-treated N9 cells were also imported into the GeneCodis software. In our analysis, we used the default settings of GeneCodis, which employs hypergeometric test for calculating P values and false-discovery rate for P values correction [38].
We studied interactomes of differentially expressed genes/proteins using Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. For this, we first generated first order protein-protein interaction network of the identified proteins with the help of STRING database [39], at low confidence value (0.150), to identify highest possible connections and applied highest degree of Markov Cluster Algorithm (MCL) clustering to determine different clusters.
For western blotting of the proteins secreted in the media, the proteins present in the used culture media were precipitated overnight by using 1/4th volume of TCA at 4°C and centrifuged at 20,000g. The pellet was washed with acetone and air dried and resuspended in 2 % urea-CHAPS before loading in 10 % SDS polyacrylamide gel. Western blotting was performed as previously described [24]. Following primary antibodies were used: Anti HSP60, Anti-MEK3/6 (Abcam), phospho-and total-ERK1/2, phospho-and total JNK1/2, phospho-and total p38 (Cell Signaling), Anti-TLR4 and phospho-MEK3/6 (Santa Cruz Biotechnology), and β-actin (Sigma-Aldrich). Secondary antibodies were horseradish peroxidase labeled. The blots were developed using chemiluminescence reagent (Millipore) in ChemiGenius Bioimaging System (Syngene, Cambridge, UK). The images were captured and analyzed using the GeneSnap and GeneTools software, respectively, from Syngene. The protein levels were normalized to β-actin levels. The fold change with respect to control cells was then calculated based on integrated density values (IDV). All experiments were repeated at least three times and representative blots are shown.
Quantitative real-time PCR (qRT-PCR)
Total RNA from N9 cells and mouse brains was isolated using TRI Reagent (Sigma-Aldrich), and reverse transcription was carried out using an Advantage RT-for-PCR kit (Clontech Laboratories). Real-time PCR was done using power SYBR Green PCR master mix (Applied Biosystems, Foster City, CA, USA) in i7 realtime PCR instrument (Applied Biosystems) as described previously [36]. Sequence for primers used for real-time PCR is given in Additional file 1: Table S2. GAPDH mRNA was used as endogenous control for normalization. Relative quantitation of gene expression was carried out using the Pfaffl method [40].
Cytokine bead array (CBA)
Fifty microgram protein from the cell and brain lysate was used for the quantification of the levels of cytokines in control and treated condition. CBA was performed using a mouse CBA kit (BD Biosciences, Franklin Lakes, NJ, USA) according to the manufacturer's instructions. The beads coated with interleukin 6 (IL-6), tumor necrosis factor alpha (TNF-α), and monocyte chemotactic protein 1 (MCP-1) were mixed with 50 μg cell lysates and standards, to which fluorescent dye phycoerythrine (PE) was added. The experiment was performed in triplicates as described [41], and data was analyzed using BD CBA software (Becton, Dickinson, San Diego, CA, USA). The concentrations of various cytokines were expressed as fold change with respect to control.
Statistical analysis
Data are represented as the mean ± standard deviation (SD) from at least three independent experiments. The data was analyzed statistically by paired two-tailed Student's t test. p < 0.05 were considered significant.
IL-1β administration induces inflammation in microglia both in vitro and in vivo
IL-1β, being the master regulator of inflammation, is well known to induce inflammation in microglia by triggering a cascade of molecular pathways leading to the activation of microglia by the production of proinflammatory molecules and cyto-chemokines [24]. We first assessed the extent of IL-1β-induced inflammation in microglia in vitro by treating N9 murine microglial cells with IL-1β and determined the levels of proinflammatory enzymes (iNOS and COX2) and proinflammatory cytokines (TNF-α, MCP-1, and IL-6). The N9 murine microglial cells were treated with 5 ng/ml IL-1β for 3, 6, and 12 h. Consistent with previous reports, a significant increase in the expression of proinflammatory markers, iNOS, and COX2 was observed at 3, 6, and 12 h of treatment in microglial cells as compared to control cells (Fig. 1a). Further, we observed significant increase in the levels of pro-inflammatory cytokines (TNF-α, MCP-1, and IL-6) in IL-1β-treated cells as revealed by cytokine bead array (CBA) (Fig. 1b), thus confirming the role of IL-1β in inducing inflammation in microglial cells.
In addition, we checked for the inflammatory effect of IL-1β in vivo also (Fig. 1c, d). For this, P10 (postnatal day 10) BALB/c mice were injected with 10 ng/g body weight of IL-1β for 1, 3, and 5 days as described elsewhere [35]. Control group received the same volume of the carrier (1× PBS). Further, we checked the expression of pro-inflammatory enzymes and cytokines to assess inflammation. We observed time dependent increase in iNOS and consistent increase in COX2 protein levels (Fig. 1c), as well as, in TNF-α, MCP-1, and IL-6 levels ( Fig. 1d) at 1, 3, and 5 days of IL-1β treatment in mouse brain. This further confirmed the inflammatory role of IL-1β in mouse brain, thus strengthening our in vitro data.
Identification of global host proteome response post IL-1β administration in N9 microglial cells The microglial proteome has not been analyzed in response to the leading cytokine IL-1β till now; therefore, we set out to identify differentially expressed proteins in response to IL-1β in microglial cells. N9 murine microglial cells were treated with IL-1β (5 ng/ml) to induce inflammation and proteomic analyses of control, and treated N9 cells was done at different time points (3, 6, and 12 h) followed by 2D-gel electrophoresis (Fig. 2a). 2D-gel images for control versus IL-1β-treated N9 cells of different time points were quantitatively analyzed using PD Quest software as shown in Fig. 2b. In total, 21 spots were found to be differentially regulated. These 21 protein spots showing differential expression of 1.5fold or greater (p < 0.05) were excised, trypsin digested, and identified by MALDI TOF/TOF MS and MS/MS analysis, which revealed seventeen different types of proteins. Among them, nine proteins were significantly upregulated while the rest were found to be downregulated. The observed MW and pI values of the protein spots on 2-DE gels were compared with the theoretical MW and pI values of the corresponding proteins (Additional file 1: Table S1), and most experimental values were found to be close to theoretical values, indicating unambiguous identification except a few. The list of all identified proteins along with their P values and the average ratio is given in Additional file 1: Table S1.
To investigate possible biological functions of differentially regulated proteins, we performed in silico analysis using GeneCodis3 software [42] which revealed eleven significant molecular functions. Out of these functions, unfolded protein binding was one of the highest rated molecular functions (Additional file 1: Figure S1). We further did interactome studies with the help of STRING database to find out the proteins playing key role in the interactome developed from the identified proteins [39]. Out of these, HSP60 (HSPD1) was found to be present in the biggest cluster of proteins and turned out to have highest numbers of interactions with other proteins of the interactome (Additional file 1: Figure S2).
IL-1β administration increases HSP60 expression both in vitro and in vivo
HSP60 is a molecular chaperone of mitochondria, which plays an important role in neuron-glia crosstalk during a Left panel shows the representative western blot images of iNOS and COX2 from N9 cell lysates at 3, 6, and 12 h after 5 ng/ml of IL-1β treatment. Right panel shows the bar diagrams which represent mean fold change in the levels of iNOS and COX2 after IL-1β treatment with respect to control. b Bar diagrams represent the mean fold change after CBA analysis of proinflammatory cytokines, i.e., TNF-α, MCP-1, and IL-6 after IL-1β treatment at 3, 6, and 12 h. c Left panel shows the representative western blot images of iNOS and COX2 from P10 BALB/c mice brain after IL-1β treatment (10 ng/g of body weight, intraperitoneally injected) for different time periods (1, 3, and 5 days). Right panel shows the bar diagrams which represent mean fold change in the level of respective proteins in comparison to control at different time points. One hundred microgram of the protein was loaded for western blot (a and c) and the levels of iNOS and COX2 were normalized with β-actin. d CBA analysis of pro-inflammatory cytokines (TNF-α, MCP-1, and IL-6) at different time points of IL-1β treatment. Data represent mean ± SD from three different sets of experiments. *p < 0.05, **p < 0.01 in comparison to untreated control condition neurodegeneration [31], and it has also been detected in our interactome studies as one of highly interacting proteins; therefore, we next focused on HSP60 and set out to investigate the role of HSP60 in IL-1β-induced inflammation. The expression of HSP60 was determined both by western blotting (Fig. 3a, b) and quantitative real-time PCR (Fig. 3c, d) at different time points of IL-1β treatment both in vitro (in N9 cells) and in vivo (in mice brain). As shown in Fig. 3a-d, the protein as well as transcript levels of HSP60 were increased significantly as compared to control in response to IL-1β treatment at different time points both in vitro (Fig. 3a, c) and in vivo (Fig. 3b, d).
Further, using double immunostaining, we observed that within 3 h of IL-1β treatment, the primary microglial cells exhibited a transformation from "resting" state, with basal levels of Iba1 expression (control, upper panel, Fig. 3e) to an "activated" state with increased Iba1 expression (3, 6, and 12-h treatment groups, lower panels, Fig. 3e). In addition, expression of HSP60 increased significantly after IL-1β treatment in the primary microglial cells (Fig. 3e) as well as N9 cells (Fig. 3f ) as compared to control cells as witnessed by co-localization of HSP60 (green) with Iba1 (red) (Fig. 3e, f ). These results justify and strengthen our proteomics analysis.
Microglial activation through IL-1β administration leads to the secretion of HSP60 in extracellular milieu
Literature suggests that HSP60 can be released by the damaged or injured CNS cells and can further activate microglia [31,43]. Therefore, we hypothesized that HSP60 could also be secreted by the activated microglia to further aggravate the immune response in CNS. To test the hypothesis, we next assessed HSP60 levels in secretome of microglial cells after IL-1β treatment. The proteins present in the media of control and IL-1β- Table S1. b Bar diagrams represent relative fold changes in differentially expressed proteins in IL-1β-treated N9 microglial cells with respect to control. Total 21 spots were taken. Spot intensities were normalized by total valid spot intensities and mean of values from duplicate analytical gels from four biological replicates and were subjected to paired t test analysis. Protein spots showing altered expression between control and experimental groups (|ratio| > = 1.5, p ≤ 0.05) were marked and excised. *p < 0.05. Data represented are means ± SD of four independent experiments treated N9 cells were precipitated by adding 1/4th volume of trichloroacetic acid (TCA) and were separated by western blotting. Surprisingly, HSP60 levels were increased significantly in the secreted media of IL-1βtreated cells at all time points (3, 6, and 12 h) with respect to control (Fig. 4a), suggesting that IL-1β not only increases the expression of intracellular HSP60 in microglia, but also induces the secretion of HSP60 by microglia in the surroundings.
Interaction among HSP60 and toll-like receptor 4 (TLR4) and the role of TLR4 in IL-1β-induced inflammation Reports further suggest that secreted HSP60 serves as a signal of CNS injury by activating microglia through , respectively. Right panel represents the bar diagrams which depict mean fold change in the levels of HSP60 with respect to control treated group. Thirty microgram protein was loaded for western blot, and β-actin served as a loading control. c, d Quantitative real-time PCR analysis of the transcript level of HSP60 after treatment with IL-1β at different time points in N9 murine microglial cells (c) and in BALB/c mice brain (d). GAPDH was used for the normalization. e, f Immunostaining of HSP60 in primary microglia (e) and N9 murine microglial cells (f) using specific antibodies as described in methods. Nuclei were counterstained with the DNA-binding dye DAPI. Images were captured using Zeiss apotome fluorescence microscope (Scale bar-20 μm; magnification-×40). Representative of three independent experiments is shown here (a, b, e, f) (n = 3). *p < 0.05, **p < 0.01 in comparison to control values. Data represented are mean ± SD of three independent experiments TLR4-MyD88 dependent pathway [31]. To check whether HSP60 secreted by microglia in response to IL-1β treatment binds with TLR4, we determined the interaction between HSP60 and TLR4 using coimmunoprecipitation technique. Five hundred microgram of N9 microglial cellular extract was precipitated with HSP60 antibody, and the blots were probed for TLR4 as well as for HSP60. We found the expression of TLR4 in the immunoprecipitate that was pulled using HSP60 antibody (Fig. 4b). Further, increase in the levels of HSP60 was accompanied with the increase in TLR4 in treated N9 murine microglial cells indicated a possible interaction between HSP60 and TLR4 (Fig. 4c).
To investigate the role of TLR4 in IL-1β-induced inflammation, we inhibited TLR4 signaling by using specific TLR4 signaling inhibitor (CLI-095, InvivoGen) in N9 murine microglial cells as described in methods. The levels of iNOS and COX2 were checked by western blot and the pro-inflammatory cytokines (MCP-1, TNFα, and IL-6) were assessed by CBA. As shown in Fig. 5a, b, the levels of iNOS, COX2, and pro-inflammatory cytokines decreased significantly in presence of 10 μM dose of TLR4 inhibitor in N9 murine microglial cells (Fig. 5a, b). TLR4 inhibitor also reduces the levels of inflammatory molecules induced by IL-1β (Fig. 5a, b). These results suggest that TLR4, in addition to IL-1R1 (specific receptor of IL-1β), plays an important role in IL-1β-mediated signaling in microglia.
Effect of knockdown and overexpression of HSP60 on inflammation
To assess the effect of HSP60 on inflammation, various inflammatory molecules were studied after the Fig. 4 HSP60 is secreted by microglia in the surrounding medium and interacts with TLR4 during inflammation. a N9 murine microglial cells were treated with IL-1β for different time periods (3, 6, and 12 h), and the proteins in the used medium were precipitated with trichloroacetic acid (TCA). Western blotting was performed to determine the levels of HSP60 in secretome. Normalization was performed with Ponceau-stained bands. Right panel shows the bar diagram representing mean fold changes in the level of HSP60 with respect to control N9 cells. Twenty microgram of the secreted protein was loaded for western blot of HSP60. b Co-immunoprecipitation analysis of the interaction between HSP60 and TLR4 in cells treated with IL-1β for 3 h. Whole-cell extracts (500 μg) of untreated and treated N9 microglial cells were immunoprecipitated with anti-HSP60 and anti-IgG antibodies and analyzed by western blot analysis with anti-TLR4 antibody (left panel). Right panel (c) shows the western blots with 100 μg of lysates for the detection of total HSP60, TLR4, and β-actin in the IL-1β-treated cells as compared to control cells. Lower panel (b, c) represents bar diagrams which depict mean fold change in the expression of HSP60 and TLR4 in comparison to control in immunoprecipitate (b) and lysate (c), respectively. Representative blots of the three independent experiments are shown here. *p < 0.05, **p < 0.01 in comparison to control values. Data represented are mean ± SD of three independent experiments knockdown as well as overexpression of HSP60 in N9 microglial cells in vitro. For knockdown studies, N9 microglial cells were transfected with 6pM HSP60 eSiRNA and scrambled eGFP eSiRNA and the knockdown of HSP60 was confirmed by western blotting (Fig. 6a). As shown in Fig. 6, the levels of iNOS, COX2 (Fig. 6a), and pro-inflammatory cytokines (MCP-1, TNF-α, and IL-6) (Fig. 6c) decreased significantly in N9 microglial cells in presence of HSP60 eSiRNA, as compared to scrambled eGFP eSiRNA-transfected cells and this reduction was persistent even after the addition of IL-1β.
In contrast, we did overexpression of HSP60 in N9 cells using mouse HSP60 cDNA clone at different concentrations (4, 8, and 10 μg), and the over expression was confirmed by western blot (Fig. 6b). The levels of inflammatory molecules including iNOS, COX2 (Fig. 6b), and pro-inflammatory cytokines (MCP-1, TNF-α, and IL-6) (Fig. 6d) increased significantly after overexpression of HSP60 alone without IL-1β treatment. These results suggest that HSP60 plays a modulatory role in IL-1β-induced inflammation in microglia.
TLR4 plays a pivotal role in HSP60-induced inflammation
As we found HSP60 to be secreted out in the extracellular milieu and interact with TLR4 to perform downstream signaling, we wanted to confirm whether inhibition of TLR4 signaling affects HSP60-induced inflammation in microglia. For this, we inhibited TLR4 signaling in microglial cells overexpressing HSP60 with specific TLR4 inhibitor (CLI-095, InvivoGen, 10 μM) and checked the levels of iNOS, COX2 (Fig. 7a), and pro-inflammatory cytokines (MCP-1, TNF-α, and IL-6) (Fig. 7b). As shown in Fig. 7a, b, levels of all these proinflammatory markers decreased significantly in presence of TLR4 inhibitor. Inhibition of TLR4 also reduces showing effect of TLR4 inhibitor on IL-1β-induced inflammation in microglia, right panel represents the bar diagrams which reflect mean fold change in expression as compared to control. One hundred microgram protein was loaded for western blots of iNOS and COX2, and β-actin was used as a loading control. The blots are representative of three independent experiments. b Effect of inhibition of TLR4 on pro-inflammatory cytokines (TNFα, MCP-1, and IL-6) was assessed by CBA. Bar diagrams are representative of three independent experiments with similar results. Data represented are mean ± SD of three independent experiments. *p < 0.05, **p < 0.01 in comparison to control values and # p < 0.01 in comparison to IL-1β treatment the levels of different pro-inflammatory molecules induced by HSP60 (Fig. 7a, b), thus further strengthening our hypothesis.
Effect of HSP60 on mitogen-activated protein kinase (MAPK) phosphorylation
It has been well reported that IL-1β induces inflammation by activation of MAP kinase (MAPK) pathway (Additional file 1: Figure S3) in addition to phosphorylation of NF-κB [44,45]. Additionally, according to some previous reports [46,47], HSP60 acts as an antigenic protein and induces inflammation by inducing phosphorylation of MAPK proteins which lead to the execution of kinase pathway signaling mediated inflammatory response. Hence, we next investigated the effect of HSP60 expressed by microglia on the phosphorylation of MAPK proteins. For this, we knocked down HSP60 with specific eSiRNA and surprisingly, found significant decrease in the levels of phosphorylated forms of all three MAPK (ERK1/2, JNK, and p38) in HSP60 eSiRNAtreated cells as compared to cells transfected with nonspecific scrambled eGFP eSiRNA (Fig. 8a). IL-1β treatment also only partially rescued the effect of HSP60 eSiRNA on phosphorylation of ERK and JNK but not in p38 MAPK (Fig. 8a). It seems that p38 is the specific target of HSP60. Further, we overexpressed HSP60 protein in N9 cells using mouse HSP60 cDNA clone at different doses, and immunoblot analysis revealed a significant increase in the phosphorylation of all three MAPK proteins in cells overexpressing HSP60 (Fig. 8b). (Fig. 6a, c) and mouse HSP60 cDNA clone (Fig. 6b, d), respectively, to check subsequent effects on pro-inflammatory factors. a Left upper panel shows representative western blot image of HSP60, iNOS and COX2 in the presence of HSP60 esiRNA (6pM) or scrambled esiRNA and/or IL-1β in N9 microglial cells. Left lower panel shows the bar diagram which represent mean fold change in the levels of HSP60, iNOS, and COX2 with respect to control. One hundred microgram protein was loaded for western blots of iNOS and COX2, and β-actin was used as a loading control. b Right upper panel shows the effect of overexpression of HSP60 on iNOS and COX2 by western blotting. Lower panel shows the bar diagram which represent fold change in the levels of HSP60, iNOS and COX2 with respect to control. One hundred microgram protein was loaded for western blots of iNOS and COX2 and 20 μg for HSP60. β-actin was used as a loading control. The blots are representative of three independent experiments. c, d CBA analysis of pro-inflammatory cytokines TNF-α, MCP-1, and IL-6 in presence of HSP60 esiRNA (c) and mouse HSP60 cDNA clone (d). Data represented are mean ± SD of three independent experiments. *p < 0.05; **p < 0.01 in comparison to control values and # p < 0.01 in comparison to IL-1β treatment The above results indicate that HSP60 regulates IL-1βinduced inflammation via activation of MAPK proteins.
HSP60 induces inflammation in microglia via p38 MAPK activation
To reveal the specific MAPK effector molecule which plays a crucial role in HSP60-modulated inflammation, we used specific inhibitors for these kinases. We treated N9 cells with specific MAPK inhibitors U0126 (10 μM), SP600125 (10 μM), and SB203580 (10 μM) for blocking phosphorylation of ERK pathway, JNK pathway, and p38 pathway, respectively, in addition to HSP60 cDNA clone and assessed the expression of pro-inflammatory enzymes (iNOS and COX2) and pro-inflammatory cytokines (TNF-α, MCP-1, and IL-6). To our surprise, blocking of ERK and JNK pathway in presence of HSP60 did not show marked decrease in the levels of iNOS, COX2, TNF-α, IL-6, and MCP-1 (Fig. 9a, b and d).
These results suggest that ERK and JNK pathway do not show significant effect on HSP60-induced inflammation in microglial cells. In contrast, inhibition of p38 pathway showed marked decrease in inflammatory response of cells overexpressing HSP60 (Fig. 9c, d). This is reflected by the decrease of iNOS, COX2 and pro-inflammatory cytokines in the presence of p38 inhibitor which were induced by overexpression of HSP60 (TNF-α, MCP-1, and IL-6) (Fig. 9c, d). These results confirm that the downstream modulator which plays important role in HSP60-mediated inflammation is p38 MAP kinase which further aggravates the inflammatory process.
MEK3/6: an important player in HSP60-induced inflammatory response in microglia
To further confirm the active involvement of p38 MAPK pathway in HSP60-mediated inflammation in microglia, we knocked down upstream molecule of p38 MAPK pathway, i.e., mitogen/extracellular signal-regulated kinase 3/6 (MEK3/6), which is responsible for causing phosphorylation of p38. Knockdown of MEK3/6 using specific siRNA specifically inhibits phosphorylation of p38 and overexpression of HSP60 only partially rescued the effect of MEK3/6 eSiRNA (Fig. 10a). Surprisingly, we observed a decrease in the pro-inflammatory enzymes (iNOS and COX2) as shown by western blotting (Fig. 10a) as well as pro-inflammatory cytokines (TNF-α, MCP-1, and IL-6) as shown by CBA (Fig. 10b). These results further streamline the signaling and confirm that HSP60 mediates inflammatory process in microglia by Fig. 7 TLR4 plays a pivotal role in HSP60-induced inflammation in microglia. N9 cells were cultured in the presence or absence of 10 μM TLR4 inhibitor (CLI-095) 2 h prior to transfection of mouse HSP60 plasmid clone. a Left panel shows western blots illustrating effect of TLR4 inhibitor on iNOS and COX2 in the cells transfected with 4 μg mouse HSP60 plasmid clone or control pCMV6 plasmid, right panel represents the bar diagrams which reflect mean fold change in expression as compared to control. One hundred microgram protein was loaded for western blots of iNOS and COX2, and β-actin was used as a loading control. The blots are representative of three independent experiments. b CBA analysis of proinflammatory cytokines (TNF-α, MCP-1, and IL-6) also suggests a major involvement of TLR4 in HSP60-induced inflammation in microglia. Bar diagrams are mean fold change of three independent experiments with similar results. Data represented are mean ± SD of three independent experiments. *p < 0.05, **p < 0.01 in comparison to control values and # p < 0.01 in comparison to IL-1β treatment modulating MEK3/6 which phosphorylates p38 MAPK in a downstream pathway leading to inflammatory response.
Discussion
Microglia, the resident immune cells of the central nervous system, receives signals from various stimuli ranging from pathogenic invasions, stress, toxins, and autoimmune diseases to neurodegeneration, and these signals act as the first warning that indicate disruption of normal cellular function in the organism and lead to the activation of microglia. Activated microglia further release endogenous inflammatory factors to activate other cells in nearby vicinity and the feedback cycle, thus proceeds to evoke acute or chronic inflammation. Microglial activation-which is marked by extensive proliferation, chemotaxis, and altered morphology-is the hallmark of neuroinflammation in several neurodegenerative diseases and pathological conditions of CNS [24].
Literature suggests that IL-1β, the master regulator of inflammation, induces microglial activation and plays a crucial role in the progression of chronic neurodegenerative diseases such as AD and PD as well as acute neuroinflammatory conditions including stroke, ischemia, and brain injury [18][19][20]23]. However, the underlying molecular circuitry in IL-1β-induced microglial activation is still unexplored. In this study, we show that IL-1β causes activation of microglial cells by regulating the downstream signaling mediated via HSP60 to TLR4 to p38 MAPK. Our proteomics data revealed HSP60, the mitochondrial chaperone, as an important differentially regulated as well as highly interacted protein in IL-1βstimulated N9 murine microglial cells, hence, we further stressed upon the role played by HSP60 in regulating IL-1β-induced inflammatory processes in microglia. We show that HSP60 secreted by microglia after IL-1β treatment also interacts with TLR4 receptor on microglia Overexpression of HSP60 cDNA clone in microglial cells leads to increase in phosphorylation of all the three MAPKs at different doses of HSP60 cDNA clone. One hundred microgram protein was loaded for western blots, and β-actin was used as a loading control. Representative of three independent experiments is shown here. Graphs in lower panel represent mean fold change in the level of phosphorylation of MAPKs. The levels of phosphorylated proteins were normalized to their total proteins, respectively. Data represented are mean ± SD of three independent experiments. *p < 0.05, **p < 0.01 in comparison to control values and # p < 0.01 with respect to IL-1β-treated values membrane. Using overexpression and knockdown experiments, we further reveal that HSP60 triggers microglia activation via TLR4-MEK3/6-p38 MAPK axis.
Several reports support that IL-1β secreted from activated microglia can activate other cells in the extracellular environment by activating different signaling pathways. Kim et al. reported that activated microglia secretes IL-1β which induces iNOS/NO in astrocytoma cells through p38 MAPK and NF-κB pathways [48]. Besides this, IL-1β induces the elevation of intracellular Ca +2 levels via the dual pathways of Ca +2 entry and Ca +2 mobilization [49]. Further, IL-1β has Fig. 9 Effect of MAPK inhibitors on HSP60-induced inflammation in N9 microglial cells. N9 cells were cultured in the presence or absence of MAPK inhibitors; U0126 (10 μM), SP600125 (10 μM), and SB203580 (10 μM) 60 min prior to transfection of 4 μg of HSP60 cDNA clone and then incubated for 24 h. a-c The effect of ERK inhibitor U0126, JNK inhibitor SP600125 (SP), and p38 inhibitor SB0193 (SB) on phospho-and total ERK1/2 (a), on phospho-and total JNK (b), on phospho-and total p38 (c), respectively, and on pro-inflammatory molecules iNOS and COX2 (a-c). Right panel (a-c) shows bar graphs which represent mean fold change in iNOS and COX2 with respect to control, in different treatment conditions. The blots are representative of three independent experiments. One hundred microgram protein was loaded for western blots, and β-actin was used as a loading control. d Effect of specific inhibition of MAPKs on pro-inflammatory cytokines. CBA analysis of TNF-α, MCP-1, and IL-6 after treatment of N9 microglial cells with U0126, SP600125, and SB203580 inhibitors (i-iii). Data represented are mean ± SD of three independent experiments. *p < 0.05, **p < 0.01 in comparison to control values been reported to induce HSP60 expression in cultured human adult astrocytes [50]. This leads to the framework of our hypothesis that IL-1β-induced microglia inflammation may involve heat shock protein as an endogenous signal that can further relay inflammation via MAPKs inside the microglia.
Based on our current findings, we hereby propose a model (feed-forward loop) of the signaling pathway leading to IL-1β-induced inflammation via HSP60 in microglial cells (Fig. 11). Stimulation of microglia by IL-1β induces binding of IL-1β ligand to its cognate receptor IL-1R1, and this increases the expression of HSP60 in the cytoplasm of cells. HSP60 is secreted out by the cells to give signals to possibly other cells in nearby vicinity to produce pro-inflammatory cytokines to combat the stressed situation; thus once induced, HSP60 regulates its own production in an autocrine and paracrine manner. This is in harmony with other reports where intracellular HSP60 has been shown to be secreted out of the cells [51]. Extracellular HSP60 then binds TLR4 receptor [31] which in turn is a part of the innate immune system and therefore secreted HSP60 expression positively correlates with the triggering of innate immune response by the production of pro-inflammatory molecules. Secreted Fig. 10 Role of MEK3/6 in HSP60-induced inflammation. N9 cells were transfected with 4 μg of HSP60 cDNA clone and/or, MEK3/6 specific eSiRNA (6pM) for 24 h, and the effect on inflammation was assessed by western blot (a) and cytokine bead array (b). a Western blot analysis of phospho-MEK3/6 and total MEK3/6, phospho-and total p38, iNOS and COX2 after inhibition of MEK3/6 and overexpression of HSP60. Blots are representative of three different experiments with similar results. One hundred microgram protein was loaded for western blots, and β-actin was used as a loading control. Graphs represent the mean fold change in the phosphorylation of ERK1/2, JNK, and p38 with respect to their respective total proteins and represents mean fold change in the expression of iNOS and COX2 with respect to control. b Effect of knockdown of MEK3/6 on pro-inflammatory cytokines. CBA analysis of TNF-α, MCP-1, and IL-6 after transfection of N9 microglial cells with HSP60 cDNA clone and/or, MEK3/6 eSiRNA (i-iii) Data represented are mean ± SD of three independent experiments. *p < 0.05; **p < 0.01 in comparison to control values HSP60 binds to TLR4 and upregulates the expression of TLR4 which further activates myeloid differentiation factor 88 (MyD88). MyD88 in turn leads to the phosphorylation of MEK3/6, a specific upstream modulator of p38 MAPK [52,53]. Phosphorylation of MEK3/6 then specifically phosphorylates p38 MAPK which in turn increases the production of pro-inflammatory cytokines viz. TNFα, MCP-1, and IL-6 and pro-inflammatory enzymes, i.e., COX2 and iNOS. In contrast to our study, Kilmartin et al. reported that treatment of monocytes with human HSP60 led to suppression of TNF-α production [54]. This difference can be attributed to different cell types and different cellular environments. A mitochondrial chaperone, i.e., HSP60 thus plays an important role in increasing the intensity of inflammation with its continuous production by forming a feed-forward loop of inflammation.
HSP60, in addition to an important molecular chaperone, has also been reported to have critical immunomodulatory roles. It has been found to be accumulated in the cytoplasm during apoptotic activation [55]. In contrast, HSP60 levels were reported to be significantly higher in cytoplasm of neuroepithelial tumors [56]. This chaperone has also been considered as a potential antitumor target [57]. Further, several evidences suggest the role of heat shock proteins in regulation of intracellular signaling [58][59][60]; however, the role of HSP60 in intracellular signaling leading to inflammation Fig. 11 Schema of signaling pathway involved in IL-1β-induced inflammation in microglia. IL-1β induces inflammation by binding to its specific receptor IL-1R1 present on the cell surface and leads to the enhanced expression of HSP60 in microglia. HSP60 is secreted by the microglia into the surrounding and binds to TLR4 and further induces the inflammatory process by the activation of MEK3/6 which leads to increased phosphorylation of p38 MAP kinase pathway resulting in increased production of pro-inflammatory factors including TNF-α, MCP-1, and IL-6. HSP60, thus plays an important role in further increasing the intensity of inflammation by forming a feed-forward loop of inflammation in microglia is sparsely explored. In the present study, HSP60 likely modulates intracellular signaling of IL-1β-induced inflammation. However, neuroinflammation is a complex process and, considering that several pathways are upregulated upon cytokine stimulation, therefore the role of other transcription factors and co-activators cannot be ruled out in IL-1β-induced inflammation.
IL-1β has previously been reported to orchestrate its function via its specific receptor, IL-1 receptor 1 (IL-1R1). However, our results clearly suggest that TLR4 is indeed playing a key role in IL-1β and HSP60-induced inflammation in microglia. Our results also propose that IL-1β may bind to TLR4, in addition to its cognate receptor IL-1R1, to exert its inflammatory effects in microglia, which is a novel finding and needs to be further explored. These findings are also in harmony with the two other recently published reports which claim that inhibition of TLR4 reduces vascular inflammation during hypertension [61,62].
Literature suggests that p38 may act via several ways to induce the production of inflammatory cytokines. p38 may either act through MK2 to release TNF-α mRNA from translational arrest imposed by the ARE [63]. Another potential target of p38 is the redox-sensitive transcription factor NF-κB which is also one of the main transcription factors involved in TNF-α gene transcription. Since, we found increase in TNF-α and increased phosphorylation of p38 after HSP60 overexpression, hence, p38 MAPK might promote the release of inflammatory cytokines via a NF-κB dependent mechanism [64]. IL-1β has also been found to increase the expression of NF-κB in several studies [65]. However, p38 MAPK can also directly cause the production of pro-inflammatory cytokines (IL-1β, TNF-α, and IL-6) [66] and induction of enzymes such as COX2 [67] as well as p38 also modulates the expression of intracellular enzymes such as iNOS [68]. For defining these discrete functions and relationships of p38 to other molecules during IL-1β-induced inflammation, further investigation is warranted.
In this report, we firmly establish a molecular mechanism by which IL-1β leads to release of HSP60, which in turn activates microglia, the innate immune cells of CNS in a TLR4-MEK3/6-p38 MAPK-dependent manner. We thus speculate a model, in which neuroinflammation activates innate immunity through the release of HSP60 and activation of TLR4, leading to increased inflammatory response of microglia. Recently, intense research has been focused on immunomodulatory properties of heat shock proteins (HSP), including their role as adjuvant for vaccines in addition to their primary function [54]. Our results reveal a new potential mitochondria immunomodulatory chaperone, i.e., HSP60 that can be further evaluated as a therapeutic target for the management of inflammatory conditions of CNS as it induces inflammation by orchestration of inflammatory genes in response to IL-1β. Unlocking the signaling pathway underlying IL-1β-induced inflammation via HSP60-TLR4-p38 MAPK axis in microglia has for sure future implications for therapeutic management of neuroinflammatory disorders. Our study thus fills the gaps in current understanding of molecular circuitry of neuroinflammation and also provides a novel target as HSP60 for the treatment of various neuroinflammatory diseases. Future studies in this direction may provide conclusive answers.
Conclusions
Observations from our present study suggest that IL-1β induces inflammation in microglia and alters the expression of various proteins, one of which is HSP60, a mitochondrial chaperone that plays a regulatory role in aggravating IL-1β-induced inflammation in microglia. IL-1β treatment not only increases the expression of HSP60 in microglia but also leads to increased secretion of HSP60 from microglia into the extracellular milieu. HSP60 then binds to TLR4 and induces inflammation in microglia by activating p38 MAPK via MEK3/6. In this study, we provide the first evidence of HSP60 as a new component of the IL-1β-induced inflammatory network in microglial cells, which further augments inflammation via the TLR4-p38 MAPK axis.
Additional file
Additional file 1: 3 Supplementary figures and 2 Supplementary tables. Table S1. List of proteins showing differential expression after IL-1β treatment in N9 microglia cells, identified by MS/MS analysis. Table S2. List of primers used for quantitative real time PCR (qRT-PCR) analysis. Figure S1. Pie chart showing the molecular functions of differentially expressed proteins in IL-1β treated N9 microglial cells. Figure S2. Protein-protein interaction network of the identified proteins. Figure S3. Effect of IL-1β on phosphorylation of MAPK effector proteins in vitro and in vivo. (DOCX 2526 kb) | 2016-05-12T22:15:10.714Z | 2016-02-02T00:00:00.000 | {
"year": 2016,
"sha1": "69904ed0169ec82ee6b95cb04a0365122ca77396",
"oa_license": "CCBY",
"oa_url": "https://jneuroinflammation.biomedcentral.com/track/pdf/10.1186/s12974-016-0486-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69904ed0169ec82ee6b95cb04a0365122ca77396",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253362496 | pes2o/s2orc | v3-fos-license | UroVysionTM Fluorescence In Situ Hybridization in Urological Cancers: A Narrative Review and Future Perspectives
Simple Summary A positive UroVysionTM fluorescence in situ hybridization (U-FISH) result is generally considered to indicate urothelial carcinoma (UC). However, in our clinical practice, we found that U-FISH can also be positive in non-urothelial carcinomas and even metastatic carcinomas. A review is needed to increase awareness and avoid misdiagnosis. This review summarizes the research status of U-FISH in UC, non-urothelial carcinoma and metastatic tumors, so as to give urologists a more comprehensive understanding of the application value of U-FISH and support accurate diagnosis of urological cancers. Abstract UroVysionTM is a fluorescence in situ hybridization assay that was developed for the detection of bladder cancer (of which UC accounts for 90%) in urine specimens. It consists of fluorescently labeled DNA probes to the pericentromeric regions of chromosomes 3, 7 and 17 and to the 9p21 band containing the P16 tumor suppressor gene, and was approved by the Food and Drug Administration (FDA) in 2001 and 2005, respectively, for urine testing in patients with suspected bladder cancer and for postoperative recurrence monitoring. Furthermore, recent studies have demonstrated that U-FISH is useful for assessing superficial bladder cancer patients' response to Bacillus Calmette–Guérin therapy and for detecting upper tract urothelial carcinoma. Positive U-FISH is therefore well known to urologists as a molecular cytogenetic indicator of UC. However, with the continuous enrichment of clinical studies at home and abroad, U-FISH has shown a broader application space in the detection of various primary urinary tumors and even metastatic tumors. This review focuses on summarizing the research status of U-FISH in UC, non-urothelial carcinoma and metastatic tumors, so as to give urologists a more comprehensive understanding of the application value of U-FISH and to support accurate diagnosis and treatment of urological cancers.
Introduction
Fluorescence in situ hybridization (FISH) technology is a molecular cytogenetic technique that originated in the late 1960s [1]. FISH detects chromosomal or genetic abnormalities in cell and tissue samples by visualizing fluorescence signals under fluorescence microscopy after hybridization between a probe and the sample DNA through the complementarity of DNA base pairs, and it offers rapid detection, good repeatability and accurate spatial positioning [2][3][4][5]. Probes can be divided into five types: whole chromosome painting probes, telomere probes, chromosome arm probes, centromere probes and site-specific probes. The samples that can be examined by FISH are diverse, including: (1) amniotic fluid and villi, used for prenatal diagnosis, investigation of the cause of miscarriage and other related genetic diseases; (2) cervical cells, used for the diagnosis of cervical precancerous lesions; (3) peripheral blood, used for postnatal genetic diseases and blood tumor detection;
Application of U-FISH in UC
UC is one of the common urological cancers, originating from malignant transformation of the renal pelvis mucosal epithelium, ureteral mucosal epithelium, bladder mucosal epithelium or urethral mucosal epithelium; it is divided into upper tract urothelial carcinoma (UTUC; renal pelvis carcinoma and ureteral urothelial carcinoma) and lower tract urothelial carcinoma (bladder cancer and urethral cancer). Worldwide, the incidence of bladder cancer ranks 9th among malignant tumors, ranking 7th among men and 10th among women [8]. UC has an insidious onset, high morbidity and malignancy, and is prone to recurrence [8,[10][11][12][13]. Therefore, its early diagnosis and prognostic monitoring are particularly important.
Application of U-FISH in Bladder Cancer
A series of studies [7,[14][15][16][17][18][19][20][21] have shown that U-FISH has high sensitivity and specificity in the diagnosis and follow-up monitoring of bladder cancer, although the results vary, with sensitivity ranging from 80% to 100%. U-FISH positivity was directly correlated with the number of chromosomally aberrant cells in urine, corresponding to high-grade, high-burden bladder cancer (see Table 1) [7,18,19]. U-FISH has the advantage of high sensitivity and specificity in cytologically equivocal and negative urine samples [18]. Another series of studies [14][15][16][17]21] found that patients with positive U-FISH and normal cystoscopy developed UC within 15-22 months. The residual tumor rate after the first transurethral resection of bladder tumor (TURBT) is 4%-78%, and is related to tumor stage, tumor number and surgical experience [22,23]. Ding et al. [24] found that, before the initial TURBT, there was no significant difference in the FISH-positive rate between patients without residual tumors and those with residual tumors. After the initial TURBT, the FISH-positive rate in the residual tumor group was significantly higher than that in the group without residual tumor (42.2% vs. 17.6%, p = 0.003). Therefore, after the initial TURBT, FISH-positive patients should undergo a second resection after 2-6 weeks, or receive intensified adjuvant intravesical therapy. In addition, because of operator skill limitations or inadequate surgical specimens, patients with pathologically suspected muscle-layer invasion should undergo postoperative FISH testing to determine whether the tumor has been completely removed, so as to further assist treatment decisions such as a second resection or radical resection [22,25].
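As a quick check on how a comparison like Ding et al.'s residual-tumor result is computed, the sketch below runs a standard chi-square test on a 2×2 table. The counts are hypothetical, chosen only so that the two positive rates match the reported 42.2% and 17.6%, since the underlying group sizes are not quoted here.

```python
# Hypothetical 2x2 table: rows = (residual tumor, no residual tumor),
# columns = (FISH positive, FISH negative). The counts are invented so that
# the positive rates equal the reported 42.2% (19/45) and 17.6% (18/102).
from scipy.stats import chi2_contingency

table = [[19, 26],
         [18, 84]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected by default
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p lands near the reported 0.003
```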
The gold standard for postoperative follow-up after TURBT is regular cystoscopy and urine cytology [26][27][28], but cystoscopy relies mainly on subjectively observable changes and can only detect recurrence once it has occurred, making it poorly suited to early prediction of intravesical treatment failure (including Bacillus Calmette-Guérin (BCG) and other drugs). Cystoscopy is also an invasive procedure and cannot be performed under certain conditions, such as acute spreading inflammation, low bladder volume, bone or joint malformations, urethral malformations or strictures, and intolerance in elderly patients. Studies have shown that urine cytology has excellent specificity (i.e., a low false positive rate) but suboptimal sensitivity (i.e., a fairly high false negative rate). The sensitivity of cytology is fairly high for high-grade tumors, but even for these tumors the false negative rate is suboptimal, and it is difficult to distinguish inflammatory responses from tumor recurrence, especially in patients treated with intravesical BCG. U-FISH is more sensitive than urine cytology for the detection of UC cells in urine or bladder washes and is not affected by hematuria, urinary tract infection or BCG-induced inflammatory responses [7,19,20,29]. A series of studies [29][30][31][32][33][34][35] found that positive U-FISH before the first BCG instillation after TURBT was not associated with a higher risk of recurrence, but at 6 weeks or 3 or 6 months after BCG treatment, positive U-FISH was significantly correlated with the risk of tumor recurrence and progression (p < 0.001): recurrence was 3-5 times more likely and disease progression 5-13 times more likely than in the negative group, and positive U-FISH after BCG instillation was an independent risk factor for recurrence. Liem et al. [36] also reported that the median recurrence-free time of FISH-positive patients after BCG treatment was 6 (3-28) months. Therefore, FISH-positive patients after BCG instillation should be followed closely, with an appropriately shortened follow-up interval, while the interval can be appropriately extended for FISH-negative patients.
Application of U-FISH in UTUC
UTUC, namely renal pelvis and ureteral cancer, accounts for only 5-10% of UC in European and American studies [37], but the proportion is higher in the Chinese population. Results from a 2018 survey of patients hospitalized at 32 large medical centers nationwide showed that UTUC accounted for 9.3-29.9% of UC, with a mean of 17.9%, and 7-17% of patients had concurrent bladder cancer [38,39]. The current diagnostic methods are mainly cytology, imaging techniques and endoscopy. Cytology is the most convenient and widely used method, but it is much less sensitive and specific in detecting low-grade UTUC; more importantly, cytology is subjective and controversial in conditions such as infection and inflammation. Imaging techniques such as computed tomography, urography and intravenous pyelography fail to detect small tumors or carcinoma in situ. Theoretically, ureteroscopy is one of the standard methods for diagnosing UTUC, but it is invasive and costly, with the risk of complications such as infection, perforation and bleeding [37,38]. In addition, anatomic abnormalities and a history of urinary tract reconstruction can make ureteroscopy more difficult and dangerous. U-FISH is based on genetic aberrations, which can reduce such complications. Gene mutations can be identified in the early stages of cancer development and become important indicators for clinical detection during further malignant transformation [40,41]. Over the past decade, U-FISH has demonstrated high sensitivity and specificity in detecting UTUC, with a sensitivity of 87.8% and a specificity of 85.7% [42]. The gathered evidence suggests that U-FISH is not only well suited for diagnosing UTUC but is also significantly superior to cytology in terms of sensitivity, with no significant difference in specificity [41][42][43][44][45]. Studies have found that preoperative FISH-positive patients had a later tumor stage and higher tumor grade than FISH-negative patients. Polyploidy of CSP7/CSP17 was significantly negatively correlated with survival, while CSP3/GLP p16 showed no significant association with survival [44,46]. Chromosomal aberrations were most common in high-grade tumors, and an increase in the percentage of hyperdiploidy on each chromosome was significantly associated with high-grade tumor differentiation, while there was no statistically significant association between the percentage of hyperdiploidy on any chromosome and tumor stage [44]. Another study found that patients with positive U-FISH before radical nephroureterectomy with bladder cuff excision were more likely to have bladder recurrence [47]. Therefore, intravesical instillation therapy and follow-up monitoring should be strengthened for UTUC patients with preoperative positive U-FISH.
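The sensitivity and specificity quoted above follow directly from a 2×2 confusion matrix of U-FISH results against the confirmed diagnosis. The sketch below shows the arithmetic; the counts are hypothetical and serve only to illustrate how figures such as 87.8% and 85.7% arise.

```python
# Hypothetical confusion-matrix counts for U-FISH vs. confirmed UTUC.
tp, fn = 72, 10  # UTUC cases that test positive / negative -> 72/82 = 87.8%
tn, fp = 54, 9   # non-UTUC cases that test negative / positive -> 54/63 = 85.7%

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```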
Advantages and Disadvantages of FISH Technology
Compared with other non-invasive techniques, such as hematuria dipstick tests, NMP22, NMP22 BladderChek, BTA stat, BTA TRAK and ImmunoCyt, FISH also has some advantages (see Table 2) [48,49]. However, FISH also has certain potential deficiencies. The sensitivity of FISH for detecting low-grade tumors is low. A possible explanation is that low-grade tumors are usually diploid or nearly diploid, without obvious genetic abnormalities, and thus resemble normal cells [7]. Secondly, the 9p21 locus-specific probe gives the smallest signal, yet 9p21 is also the most common site of genetic abnormality, so it is not easy to observe [50].
Summary
FISH has great application value for the occurrence, development, diagnosis and prognosis of UC, with high sensitivity and specificity. However, it cannot completely replace cystoscopy and should be carried out in parallel with cystoscopy and cytology.
Application of Cytology and Histological U-FISH in Non-Urothelial Carcinoma
Chromosomal aberrations are a hallmark of human malignancies, and most solid tumors exhibit complex alterations in genetic material [51]. There are few studies and reviews on the use of U-FISH in non-urothelial carcinoma. Reid-Nicholson et al. [52] performed histological U-FISH on paraffin sections from 31 patients with non-urothelial carcinoma (15 cases of primary squamous cell carcinoma, 2 cases of squamous cell carcinoma with UC, 4 cases of primary adenocarcinoma, 5 cases of colorectal adenocarcinoma, 4 cases of prostate cancer, and 1 case of cervical adenocarcinoma). Positive U-FISH findings were common in primary and secondary adenocarcinoma and rare in squamous cell carcinoma. Similarly, Kipp et al. [53] performed histological U-FISH on paraffin sections and found that the chromosomal abnormalities detected in urothelial carcinoma were also common in rare bladder cancer histological types (adenocarcinoma in 4 cases, adenocarcinoma in 5 cases, small cell carcinoma in 6 cases, and squamous cell carcinoma in 7 cases). Moreover, Yang et al. [54] found that preoperative urinary U-FISH in patients with bladder paraganglioma was positive, showing polyploidy of chromosomes 3 and 17. Urinary U-FISH performed again after surgery was negative.
Mutual Validation of Cytology and Histology U-FISH
In the above studies, U-FISH results were not cross-validated between paraffin sections and urine cytology, leaving those studies unable to establish the relationship between the two specimen types. Hu et al. [55] confirmed the consistency of histological and cytological U-FISH results in patients with urachal carcinoma. Histological and cytological U-FISH results are therefore consistent, but if insufficient tumor cells are shed into the urine, histological U-FISH results may disagree with urine cytology results.
Analysis of Reasons for Positive FISH Findings in Urine and Tissue Specimens of Non-Urothelial Carcinoma
The commonly used UroVysion TM probe set is composed of centromeric probes (CSP3/CSP7/CSP17) and a gene locus-specific probe for 9p21 (GLP p16). If the tumor cells carry aberrations of chromosomes 3, 7 or 17 and/or a deletion at the 9p21 locus, and the diseased cells are shed in sufficient quantities into the urine, both histological and cytological FISH may be positive. Among adenocarcinomas (prostate cancer, urachal carcinoma), prostate cancer shares some chromosomal abnormalities with UC: it also shows abnormalities of chromosomes 7, 8, 10, 16, 17, 18 and X, as well as amplification or deletion of genes such as C-MYC, HER-2/NEU, AR, MCM7, EZH2 and Ki-67, resulting in positive FISH results [56]. Chromosome 7 amplification is most common in locally advanced and/or metastatic prostate cancer, where tumor cells are rarely exfoliated into urine, and these tumors usually have a Gleason score of 8 or higher [57]. In a genomic sequencing study of 70 cases of urachal carcinoma, sequence variation was observed in TP53, KRAS, BRAF, PIK3CA, FGFR1, MET, NRAS and PDGFRA, and gene amplification was observed in EGFR, ERBB2 and MET. These genes lie at loci such as 17p13, 3p21, 7p12 and 17p21, so they can lead to positive FISH results [58]. Urachal carcinoma is similar to colorectal cancer in histology and genomics; according to a study in European Journal of Urology, histological FISH testing showed the highest positive rate for colorectal adenocarcinoma, followed by prostate cancer and primary bladder adenocarcinoma [59].
There are relatively few molecular genetic studies on small cell carcinoma of the bladder. Atkin et al. [60] first reported changes in the genetic material of bladder small cell carcinoma, finding hypertriploidy and hypertetraploidy closely associated with extensive rearrangement of chromosomes 1-3, 5-7, 9, 11 and 18. Leonard et al. [61] also reported monosomy 9, homozygous deletion of the p16 gene and trisomy 7 in small cell carcinoma of the bladder. Chromosomal imbalance in bladder paraganglioma has emerged as a new parameter for predicting the malignant potential of paraganglioma. As summarized by Schaefer et al. [62], gains or losses of chromosomes 1, 3, 6, 7, 8, 9, 11, 16, 17, 19, 20, 21 and 22 have been reported in paraganglioma. In addition, amplification of 17p was associated with an increased likelihood of malignant progression. All of the tumors in the above studies therefore harbor genetic changes that can make FISH positive.
Application of Other Types of Probe Combinations in Non-Urothelial Carcinoma
In addition to the UroVysion TM probe combination, other probes can be designed in clinical practice to distinguish tumor types, judge benignity, malignancy and prognosis, and diagnose genetic diseases. The characteristic chromosomal abnormality of clear cell renal cell carcinoma, detectable by FISH on tissue sections or exfoliated cells, is deletion of 3p25. Deletions of 9p21 and 14q22 predict poor prognosis, while 5q amplification and 14q22 deletion predict large tumor size and local invasion. The characteristic chromosomal abnormality of papillary renal cell carcinoma is amplification of chromosomes 7 and 17, while additional amplification of chromosomes 12, 16 and 20 points more definitively to papillary carcinoma. Trisomy 7 is helpful in distinguishing chromophobe cell carcinoma from oncocytoma (eosinophilic tumor) [63][64][65]. Nephroblastoma is associated with inactivation of the WT1 gene [66], which can be confirmed by FISH. HER-2/NEU gene amplification is present in 60% of prostate cancer patients and indicates shorter survival [56,57], so FISH assays are feasible for predicting patient prognosis. FISH can also assist in diagnosing Von Hippel-Lindau syndrome, an autosomal dominant multi-tumor genetic disorder caused by alterations of the VHL gene on chromosome 3. VHL gene mutations account for 75% of familial VHL syndromes [67].
Summary
Consequently, it is important to remember that a positive UroVysion TM result is not specific to UC. Other primary tumors of the bladder, prostate cancer invading the urethra, and tumors metastatic to the bladder are occasionally the cause of a positive urine FISH result. History and imaging information should be considered when interpreting FISH results. Misclassification of tumors can lead to delayed diagnosis and unnecessary or inappropriate surgery or chemotherapy.
Analysis of the Characteristics of Urinary FISH-Positive Cases in Urinary Tract Metastases
The diagnosis of urinary tract metastases has always been a difficult point in clinical work. The most common metastases are from gastrointestinal tumors, gynecological tumors, lung cancer, esophageal cancer, lymphoma, etc. There are few studies on the application of FISH in urinary tract metastases. Hu et al. [55] reported two patients with secondary renal tumors from esophageal cancer and retroperitoneal lymphoma. Before treatment, urinary FISH indicated gains of chromosomes 3, 7 and 17. After eight cycles of R-CHOP treatment for the patient with renal metastatic lymphoma, combined with comprehensive treatments such as kinase inhibitors, the mass shrank markedly or even disappeared, renal function was significantly restored, and the FISH test became negative again. Studies [68][69][70][71] have shown that tumor cells of esophageal squamous cell carcinoma and non-Hodgkin's lymphoma may carry aberrations of chromosomes 3, 7 and 17 and/or deletion of the 9p21 locus. Urinary FISH may be positive if tumor cells metastasize to the kidney, invade the renal parenchyma and collecting system, and are shed in sufficient quantity into the urine. Korski et al. [72] analyzed a pathological specimen of primary mixed testicular germ cell carcinoma with bladder and stomach metastases using FISH and found an isochromosome 12p, suggesting that the oncogene is located on 12p and thus providing a genetic basis for mixed testicular carcinoma. FISH is a sensitive and specific method for the diagnosis of UC; however, in the absence of the patient's relevant medical history, positive urinary FISH may interfere with the diagnosis to a certain extent and easily lead to preoperative misdiagnosis and incorrect treatment plans.
Future Perspectives on U-FISH
Recent advances in the search for genetic mutations have led to a paradigm shift in the treatment of cancers. Currently, there are many biomarkers for urinary tumors, such as urine DNA methylation, exosomes, minichromosome maintenance 5 (MCM5) urine expression (ADXBLADDER), the Bladder EpiCheck test, an mRNA-based urine test (Xpert Bladder Cancer Monitor), NMP22, NMP22 BladderChek, BTA stat, BTA TRAK and ImmunoCyt [73]. Various urine-based examinations have been reported for decades but have not been found to be superior to UroVysion TM in detecting UC. Combining new types of examinations with UroVysion TM, or tailoring testing with various urine-based biomarkers, can be envisioned. In future work, the specific changes in the genetic material of urinary tract tumors can be studied, so as to design specific probes for the diagnosis, treatment and prognosis of these diseases.
Existing Problems
FISH testing places high technical demands on laboratory personnel. The criteria for determining positive results are not completely uniform, and the test is expensive, so it cannot be carried out in many local hospitals. The positive rate of FISH in urinary tract non-urothelial carcinoma is relatively high, but the amount of relevant research data is small, without support from multi-center big data. In addition, U-FISH cannot differentiate UC from adenocarcinoma, squamous cell carcinoma and metastasis, which poses some difficulties for precision diagnosis and treatment.
Conclusions
FISH is a powerful clinical tool in the field of urinary tumors, with proven or potential application value in tumorigenesis, diagnosis, treatment, prognosis, postoperative follow-up and other aspects related to chromosomal aberrations. Urologists should develop a more comprehensive understanding of the application value of FISH to better accomplish the precise diagnosis and treatment of urinary tract tumors.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
| 2022-11-06T16:20:45.808Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "416cf17bdd950f8d9b93269faf47c4b9f983a722",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/21/5423/pdf?version=1667475088",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b01fc7dbf8fd76a4f80002ea31ed8260ca0a2b1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
75106 | pes2o/s2orc | v3-fos-license | Adverse Effects of Amoxicillin for Acute Lower Respiratory Tract Infection in Primary Care: Secondary and Subgroup Analysis of a Randomised Clinical Trial
A European placebo-controlled trial of antibiotic treatment for lower respiratory tract infection (LRTI), conducted in 16 primary care practice networks, recruited participants between November 2007 and April 2010 and found that adverse events (AEs) occurred more often in patients prescribed amoxicillin than placebo. This secondary analysis explores the causal relationship and estimates the risk of specific AEs (diarrhoea, nausea, rash) due to amoxicillin treatment for LRTI, and whether any subgroup is at increased risk of any or a specific AE. A total of 2061 patients were randomly assigned to amoxicillin (1038) and placebo (1023); 595 (28%) were aged 60 and older. A significantly higher proportion of any AE (diarrhoea or nausea or rash) (OR = 1.31, 95% CI 1.05–1.64, number needed to harm (NNH) = 24) and of diarrhoea (OR 1.43, 95% CI 1.08–1.90, NNH = 29) was reported in the amoxicillin group during the first week after randomisation. Subgroup analysis showed that rash was reported significantly more often by males prescribed amoxicillin (interaction term 3.72, 95% CI 1.22–11.36; OR of amoxicillin in males 2.79, 95% CI 1.08–7.22). No other subgroup at higher risk was identified. Although the study was not powered for subgroup analysis, this analysis suggests that most patients are likely to be equally harmed when prescribed antibiotics.
Introduction
Lower respiratory tract infection (LRTI) is the most common reason for consulting a general practitioner (GP) [1,2]. LRTIs are often treated with antibiotics, even though this is not generally supported by guidelines and recommendations [2][3][4][5][6]. Many trials and observational studies have found little or no benefit of antibiotic treatment for an acute cough [7]. If an antibiotic is prescribed, amoxicillin is the recommended first-line treatment for LRTI [8]. Amoxicillin is the most commonly used broad-spectrum penicillin, accounting for an average of 40% of total outpatient antibiotic use in Europe [8][9][10].
All medications have known adverse events (AEs), and antibiotics are no exception [11]. Although most antibiotics are generally considered safe, and most AEs are mild to moderate, some antibiotics have been associated with life-threatening AEs [12]. AEs are generally poorly reported in trials, and their true incidence is thought to be much higher than reported [13]. In primary care, a shared decision-making consultation should include both the benefits and the potential harms of the (antibiotic) treatment prescribed [14].
The European multicentre randomised placebo-controlled trial (RCT) of amoxicillin for LRTI in adults in primary care was performed by the GRACE (Genomics to combat Resistance against Antibiotics in Community-acquired LRTI in Europe; http://www.grace-lrti.org) Network of Excellence. The GRACE trial identified significantly more AEs (diarrhoea or nausea or rash) in the amoxicillin group compared to the placebo group (AEs in week one and week two after antibiotic administration) [15]. However, it was not clear whether this applies to each specific AE, or whether particular subgroups of patients suffer more AEs than others. To better inform primary care clinicians and their patients, this secondary analysis of the GRACE trial aims to provide estimates of any and each specific AE (diarrhoea, nausea, and rash) of amoxicillin, and identify subgroups of patients that are more at risk for any or a specific AE.
Subgroup Analysis of Adverse Events
For the whole cohort, at the end of week one, a significantly higher proportion of any AE (diarrhoea or nausea or rash) was reported in the amoxicillin group than in the placebo group (OR = 1.31, 95% CI 1.05-1.64) (Table 1). The number needed to harm (NNH) was 24, i.e., on average, for every 24 patients receiving amoxicillin, one additional patient reported an AE due to the antibiotic by the end of week one. Any AE was reported significantly more often in patients who had ever smoked (OR = 1.41, 95% CI 1.04-1.90) and patients using OTC treatment before the consultation (OR = 1.44, 95% CI 1.09-1.91), but the interaction terms were not statistically significant. In patients with depression/anxiety on the day of consultation (OR = 1.49, 95% CI 0.99-2.22) and in those on any medication other than study medication (OR = 1.29, 95% CI 0.98-1.69), the odds ratios were borderline significant, but the interaction terms were not significant. This indicates that, compared to the whole cohort, no subgroup was at higher risk of any AE (diarrhoea or nausea or rash) due to amoxicillin.

Analysing each specific AE, diarrhoea was reported significantly more often among patients in the amoxicillin group than among those in the placebo group (OR 1.43, 95% CI 1.08-1.90; NNH = 29) (Table 2). Diarrhoea was reported significantly more often by patients 60 years and over (OR 1.97, 95% CI 1.09-3.54), current smokers (OR 2.07, 95% CI 1.15-3.76), ever smokers (OR 1.79, 95% CI 1.21-2.65), patients on OTC treatment before their consultation (OR 1.76, 95% CI 1.23-2.53), and those on antihypertensives/diuretics (OR 2.27, 95% CI 1.27-4.05). However, the interaction terms were not significant.

Table 2. Diarrhoea in the whole cohort and in subgroups of adult patients in the first week after presenting to primary care with a LRTI and allocation to amoxicillin or placebo.

Nausea was not associated with amoxicillin treatment in either the whole cohort or any subgroup of patients (Table 3). Rash was reported significantly more often only by males (interaction term 3.72, p = 0.021; odds ratio in males 2.79, 95% CI 1.08-7.22) (Table 4).

Table 3. Nausea in the whole cohort and in subgroups of adult patients in the first week after presenting to primary care with a LRTI and allocation to amoxicillin or placebo.
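The odds ratios and numbers needed to harm reported above reduce to standard 2×2 arithmetic, sketched below. The event counts are hypothetical, chosen only to reproduce the whole-cohort result (OR ≈ 1.31, NNH ≈ 24); the trial's tables hold the true per-arm counts.

```python
# Hypothetical per-arm counts of patients reporting any AE in week one,
# chosen to reproduce the reported OR (~1.31) and NNH (~24).
ae_amox, n_amox = 221, 1038  # amoxicillin arm
ae_plac, n_plac = 175, 1023  # placebo arm

odds_ratio = (ae_amox / (n_amox - ae_amox)) / (ae_plac / (n_plac - ae_plac))
risk_diff = ae_amox / n_amox - ae_plac / n_plac
nnh = 1 / risk_diff  # number needed to harm = 1 / absolute risk increase

print(f"OR = {odds_ratio:.2f}, NNH = {nnh:.0f}")
```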
Summary
To the authors' knowledge, this is the first subgroup analysis of any and specific AEs reported in RCTs of antibiotics for LRTI. Diarrhoea was significantly more likely to be reported in the amoxicillin group compared to the placebo group. No specific subgroups were at higher risk of any or a specific AE due to amoxicillin, apart from males in the amoxicillin group reporting rash significantly more often.
Strengths and Limitations
Our results are based on data from the largest RCT of antibiotics for acute LRTI in general practice to date [15]. Its primary objective was not to identify the incidence of AEs. RCTs such as the GRACE trial are not always prospectively powered for subgroup analysis of AEs [16]. Accordingly, subgroup analyses with multiple comparisons are often underpowered, with a greater risk of false negative results (type II error). Large sample sizes are needed for robust subgroup analysis, which may only be achievable by combining trial results in a meta-analysis [17].
Comparison with Existing Literature
Reporting guidelines for RCTs indicate that more details on medication AEs should be documented and reported to the relevant authority [18]. However, AEs occurring during a trial are often underreported, particularly when results are reported in trial publications. Underreporting may be a result of poor monitoring, missing data, or unclear case definitions [19]. An important consequence of underreporting of AEs is misinterpretation of the intervention's effects, particularly its harms [20]. Although the GRACE trial captured AEs from the study medication, the reported AEs were limited to diarrhoea, nausea, and rash. A review identified that candidiasis was significantly associated with amoxicillin use [13], and patients treated with amoxicillin were twice as likely as those on placebo to report diarrhoea [13,19]. As in the previously published paper from the GRACE trial [15], the current study showed significantly more AEs (diarrhoea or nausea or rash) in the amoxicillin group than in the placebo group. The calculation of any AEs in the previous paper covered the first two weeks after the antibiotic was prescribed, whereas this paper reports any AEs in the first week, while patients were taking antibiotics.
This study also showed a higher risk overall of diarrhoea in the amoxicillin group compared to the placebo group. Even though diarrhoea was more often reported in the treatment group for patients 60 years and older, smokers, patients taking OTC treatment before consulting a GP, patients on antihypertensive or non-steroidal anti-inflammatory drugs and those who received an influenza vaccine, no particular subgroup was at higher risk of AEs.
The presented analysis showed that males in the amoxicillin group reported rash significantly more often than males in the placebo group. Skin reactions have also been associated with amoxicillin use [21]. A borderline significant interaction term was observed for patients reporting anxiety/depression on the day of the consultation. These (borderline) significant results may be due to multiple testing; with more conservative p-values, these results would not be considered significant [22]. Sensitivity analysis did not alter our conclusions.
Study Design and Patients
The GRACE trial was performed in 16 primary care research networks in 12 European countries. Details of the study design, patient inclusion, and recruitment have been published previously [15,16]. In summary, the study was conducted between November 2007 and April 2010, and recruited adult patients with LRTI who were randomly allocated to receive either 1 g of amoxicillin or placebo three times a day for 7 days.
Data Collection
Data were collected using (a) a case record form (CRF), (b) a symptom diary, and (c) a short version of the diary. The latter was used to collect key outcome variables and AEs during a standardised phone call after 4 weeks if participants had not returned their diary. For this subgroup analysis, we used information on antibiotic treatment in the previous six months, any medication during the study period, and history of regular use of inhaled bronchodilators, steroids, antihypertensives/diuretics, benzodiazepines/antidepressants, oral non-steroidal anti-inflammatory drugs, or influenza vaccination, recorded by the responsible clinician in the CRF. The symptom diary was completed by the patient every day from day one, i.e., the day of consultation and inclusion, until resolution of symptoms, up to a maximum of 28 days. This diary has previously been validated, is sensitive to change, and is internally reliable [23]. Specific AEs, such as diarrhoea, nausea, and rash, were recorded at the end of week one and week two, and over-the-counter (OTC) treatment was recorded on day one. Anxiety- and depression-related questionnaires were completed by patients on the day of consultation (day one) and at the end of every week for four weeks. For the purpose of this study, we used anxiety and depression reported on day one. All information was collected blind to treatment allocation. All data collection forms were translated into the relevant local languages and back-translated to ensure consistency.
Outcomes in the Study
The primary outcomes were the presence of the specific AEs diarrhoea, nausea, and rash. We also created a dichotomous outcome variable for the presence of "any reported AE" for those who had reported either diarrhoea or nausea or rash at the end of week one. Amoxicillin was administered for seven days in the study, and all AEs reported during week one were included in the primary outcome. Subsequently reported AEs were excluded from the analysis.
Sample Size Calculation
As this is a secondary analysis of previously collected data, sample size calculations are no longer relevant. Considering the proportion of any adverse events caused by amoxicillin [15], a subgroup sample size of 136 patients allows detection of a 15% absolute difference in AEs (17.5% versus 2.5%), with 80% power and α = 0.05 (G*Power version 3.1.9.2). Similarly, 302 patients would allow detection of a 10% absolute difference. Analyses in smaller subgroups were considered underpowered and are reported only for completeness.
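For readers who want to reproduce this kind of calculation without G*Power, the sketch below applies the usual pooled-variance two-proportion formula. The exact test family selected in G*Power is not reported, so the output only approximates the quoted figures (continuity corrections and other test choices push the totals higher).

```python
# Sample size per group for detecting p1 vs. p2 with a two-sided two-proportion
# z-test (pooled variance, normal approximation, no continuity correction).
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

n = n_per_group(0.175, 0.025)  # the 15% absolute difference scenario
print(f"~{n:.0f} per group, ~{2 * n:.0f} in total")  # ~62 per group, ~123 total
```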
Statistical Analysis
The subgroup analyses of the AEs were not pre-specified. For any and each specific AE, we estimated the effect of amoxicillin using logistic regression analysis in Stata (version 13). Subgroup analyses were performed separately for any and each specific AE. The interaction between a particular subgroup (for example, males) and the intervention (in this case, amoxicillin) concerns the difference in AEs (of amoxicillin) among patients in that particular group (males) compared to patients who are not (females). The interaction term is the variable introduced into the statistical model to allow estimation of the size of that difference. The odds ratio within a subgroup estimates the difference in AEs between patients on amoxicillin and those on placebo. The specific subgroups were gender (male/female), age (60 years and older/less than 60 years), and yes/no groups for current and ever smoking, depression/anxiety on the day of the consultation, over-the-counter (OTC) treatment before consultation, antibiotics used in the previous six months, use of any medication other than study medication, use of oral bronchodilators, regular oral or inhaled steroids, antihypertensives/diuretics, antidepressants/benzodiazepines, non-steroidal anti-inflammatory drugs, and influenza vaccination. Sensitivity analysis was performed by recoding missing information on an AE as the absence of that AE.
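As an illustration of the interaction model described above, the sketch below fits the analogous logistic regression in Python instead of Stata. The data are simulated stand-ins (variable names and effect sizes are ours, not the trial's); the point is that exponentiating the treatment-by-subgroup coefficient yields the interaction term on the odds-ratio scale, like the 3.72 reported for rash in males.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 0/1 indicators for treatment arm, sex, and an AE.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"amoxicillin": rng.integers(0, 2, n),
                   "male": rng.integers(0, 2, n)})
logit_p = -2.5 + 0.3 * df.amoxicillin + 0.8 * df.amoxicillin * df.male
df["rash"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with a treatment-by-sex interaction term.
model = smf.logit("rash ~ amoxicillin * male", data=df).fit(disp=False)
print(np.exp(model.params))      # 'amoxicillin:male' is the interaction OR
print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale
```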
Role of Funding Source
GRACE was funded by the European Commission's Framework Programme 6 (LSHM-CT-2005-518226). The work reported in this publication has been financially supported through TRACE (Translational Research on Antimicrobial resistance and Community-acquired infections in Europe; www.esf.org/trace). The researchers are independent of all funders.
Ethical Approval
Ethical approval for the United Kingdom was granted by Southampton and South West Hampshire Local Research Ethics Committee (B) (ref. 07/H0504/104). Competent authority approval for the UK was granted by the Medicines and Healthcare Products Regulatory Agency. Ethical and competent authority approval was obtained from each local organisation at every research site outside of the UK. Patients who fulfilled the inclusion criteria were given written and verbal information, and informed consent was obtained before enrolment.
Conclusions
This subgroup analysis provides some evidence that the observed increased risk of any AE or diarrhoea due to amoxicillin was not specific to, or more pronounced in, any subgroup of patients. In other words, all adult LRTI patients prescribed antibiotics are likely to be at the same risk of any AE or diarrhoea. We can reiterate the conclusion of the GRACE trial that the results do not support the use of amoxicillin for uncomplicated LRTI in primary care, where only little benefit has been observed. Before prescribing an antibiotic, its potential benefits and harms should be discussed with patients.
"year": 2017,
"sha1": "42b40c73964cf587c67362737579026c669f722d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/6/4/36/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42b40c73964cf587c67362737579026c669f722d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55844732 | pes2o/s2orc | v3-fos-license | How Far to the East Was the Migration of White Oaks from the Iberian Refugium?
The goal of this study was to investigate the postglacial recolonization pathways of the white oaks Quercus robur and Quercus petraea in Poland, and especially to evaluate the impact of the Iberian refugium in this part of Europe. Chloroplast DNA polymorphism of 310 individuals older than 200 years was analyzed. Six haplotypes were found in total, differentiating three maternal lineages: the Balkan (haplotypes 4, 5, and 7), the Apennine (haplotypes 1 and 2), and the Iberian (haplotype 12). The most abundant were members of the Balkan (71.5% of all samples) and the Apennine lineage (23.1%), and only 5.4% of individuals were of Iberian origin. The geographic distribution of the three lineages is clearly structured. The northernmost territories of Poland are occupied by the Apennine (haplotype 1) and Iberian (haplotype 12) lineages, whereas samples in central and southern Poland represent the Balkan lineage. The population structure might be the result of competitive colonization among lineages after migration from different refugia. It is likely that colonization of the northernmost parts of Poland by the Balkan lineage was halted or at least hampered by the arrival of the Apennine populations. The most significant result of this study concerns the presence and status of the Iberian lineage in Poland, which is most likely of natural origin.
Introduction
During the last glacial period, vast areas of Europe were covered by ice for approximately 100 000 years. The distribution of living organisms, including oak species, was restricted to southern areas where favorable climatic conditions allowed their survival. Glacial refugia were first defined by analyzing beetle exoskeletons and fossil pollen profiles in the layers of bogs, moors, and cave floors. On the basis of paleobotanical and genetic studies, three main Vistulian refugia for Quercus and other species in Europe have been proposed (Taberlet et al. 1998, Petit et al. 2002). They were located on three Mediterranean peninsulas: the Iberian, the Apennine, and the Balkan. Recent investigations have also revealed additional secondary refugia established in the Younger Dryas (Brewer et al. 2002).
Recently, the study of the genetic and evolutionary consequences of glaciation for plant species has become a highly important subject (Konnert and Bergman 1995, Demesure et al. 1996, Taberlet et al. 1998, Newton et al. 1999, Hewitt 1999, 2000, Cottrell et al. 2002, Palme and Vendramin 2002). Population genetic structure is the result of both present processes and past events (Taberlet et al. 1998, Comes and Kadereit 1998, Hewitt 1999). Pleistocene ice ages have shaped the patterns of genetic diversity seen in contemporary plant and forest tree populations (Jaramillo-Correa et al. 2004, Acheré et al. 2005, Ran et al. 2006, Pyhäjärvi et al. 2007). However, population structure may be seriously influenced by the intensification of forestry and the transfer of seeds and plants over long distances. Thus, artificial reforestations, which might have disturbed the genetic structure of natural stands, should be considered in reconstructions of species' postglacial history. At present, molecular techniques are useful tools for analyzing the impact of ice ages on plant distributions and genetics. Several investigations have verified the value of molecular markers for tracing the recolonization routes of different species. These techniques have been successfully used to test hypotheses regarding the existence of isolated glacial populations and refugia during prolonged glacial periods (Sinclair et al. 1999, Palme and Vendramin 2002, Petit et al. 2002, Godbout et al. 2005). Recently, analysis of the geographic distribution of cpDNA or mtDNA haplotypes has provided insight into the postglacial history of tree species (Soranzo et al. 2000, Liepelt et al. 2002, Petit et al. 2002, Palme et al. 2003, Gömory et al. 2004).
In this study we aim to investigate in more detail the postglacial history of white oaks in Poland. A previous survey by Csaikl et al. (2002) in Central and Eastern Europe did not clarify the impact of the Iberian lineage or the possibility that it is not autochthonous in that region. Our specific questions were: (1) what is the status of the Iberian lineage in Poland? (2) is the lineage native to Poland? or (3) is there any evidence that the lineage was transferred with seed material from western Europe? This study attempted to describe the recolonization routes in Poland based on the most reliable and comprehensive material available. Hence, only individuals more than 200 years old were used, which most likely reflect past native stands.
Sampling
A total of 297 trees were sampled from 65 oak stands throughout Poland. Two to twelve individuals were chosen from each stand, selected based on the age and distribution of the oak species. Only individuals older than 200 years were studied. As the forests in the northern and northeastern regions of Poland have been managed by foresters during the past 200 years, the age criterion was used to help ensure an autochthonous population. Detailed age assessments of the trees were done using dendrochronological analysis. Tree ring samples were taken from at least two individuals in each stand. The samples were taken at breast height with a Pressler drill. Ages were calculated by counting the number of growth rings and adding 10 years, the period of time required to reach breast height. Very often, the wood core did not include the pith, so the distance to the pith had to be estimated. In the case of protected trees, age determination by coring was not possible, so only individuals measuring more than 300 cm in perimeter at breast height were included in the genetic analysis. The ages of the protected trees were estimated based on the perimeter and average annual radial growth rates. In open areas the growth rate is about 2.5 mm per year, and in forested areas it is about 1.8 mm per year (K. Ufnalski, unpublished study). In addition, 13 single trees were included in the study that are designated as natural monuments and thus may represent the oldest oaks in Poland. The single trees were located in cities, along roads, on private land, and also in forests. Information about the stands and plant material used is summarized in Table 1.
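The two age-estimation rules just described, ring count plus ten years and perimeter divided by an average radial growth rate, reduce to simple arithmetic. The sketch below encodes both; the function names are ours, while the growth rates are the values quoted in the text. For the 1015 cm oak mentioned later in the Results, the two rates bracket the ~700-year estimate reported.

```python
import math

YEARS_TO_BREAST_HEIGHT = 10  # assumed time for a seedling to reach breast height

def age_from_rings(ring_count):
    """Age from a Pressler core: ring count plus years to reach breast height."""
    return ring_count + YEARS_TO_BREAST_HEIGHT

def age_from_perimeter(perimeter_cm, growth_mm_per_year):
    """Rough age of an uncored (protected) tree from its breast-height perimeter.

    growth_mm_per_year: ~2.5 for open-grown trees, ~1.8 in forested areas.
    """
    radius_mm = perimeter_cm * 10 / (2 * math.pi)
    return radius_mm / growth_mm_per_year

print(round(age_from_perimeter(1015, 2.5)))  # ~646 years (open-grown assumption)
print(round(age_from_perimeter(1015, 1.8)))  # ~897 years (forest assumption)
```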
PCR-RFLP Methods
About 1 cm² of frozen leaf material was used for total DNA extraction following the method described by Dumolin et al. (1995). Total DNA was used as a template in PCR reactions using three primer pairs for amplification of the trnD/trnT, trnC/trnD, and psaA/trnS cpDNA regions (Demesure et al. 1995). These were used to distinguish polymorphisms as described by Dumolin-Lapegue et al. (1997). In most cases, polymorphisms in the DT region were sufficient to assign the haplotypes found in Poland. Additional regions (CD and AS) were necessary for haplotype identification in only a few instances.
PCR amplification was carried out in a total volume of 25 µl containing about 20 ng of template DNA, 2.5 mM MgCl2, 0.5 mg of BSA, 100 µM of each dNTP, 0.2 µM of each primer and 0.25 U Taq polymerase, with the corresponding 1× PCR buffer (Taq polymerase and 10× PCR buffer were provided by Novazym, Poland), and it followed the cycle profile and primers described by Dumolin-Lapegue et al. (1997).
Amplified fragments of the DT and CD regions were digested with TaqI at 65ºC overnight. Digestion was conducted in a total volume of 20 µl with 3 U of restriction enzyme and 15 µl of PCR product. AS fragments were digested in 20 µl reactions with 3 U of HinfI at 37ºC for 5 h (restriction endonucleases supplied by EurX, Poland).
Variation in the restriction patterns was interpreted as haplotypes, using the haplotype nomenclature described in Petit et al. (2002). In the study by Petit et al. (2002), a total of 32 haplotypes pooled into six maternal lineages were described and mapped across Europe. For TaqI/DT/CD, the three largest fragments were considered. Four fragments were scored for HinfI/AS.
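Scoring haplotypes from PCR-RFLP gels amounts to matching each sample's fragment pattern against a reference table, which in code is just a dictionary lookup, as sketched below. The pattern codes here are purely hypothetical placeholders; the real diagnostic fragment sizes follow Dumolin-Lapegue et al. (1997) and Petit et al. (2002).

```python
# Hypothetical reference table mapping scored restriction patterns to haplotype
# numbers. DT patterns are scored first; CD or AS patterns are added only when
# the DT pattern alone is ambiguous, mirroring the workflow described above.
REFERENCE = {
    ("DT-a",): 1,
    ("DT-b",): 2,
    ("DT-c", "CD-a"): 4,
    ("DT-c", "CD-b"): 5,
    ("DT-d", "AS-a"): 7,
    ("DT-d", "AS-b"): 12,
}

def haplotype(*patterns):
    """Return the haplotype number for a scored pattern, or None if unmatched."""
    return REFERENCE.get(tuple(patterns))

print(haplotype("DT-c", "CD-a"))  # -> 4
```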
Maternal Lineages
Six haplotypes representing three maternal lineages were identified in this study: haplotypes 1 and 2 (Apennine lineage), 4, 5, and 7 (Balkan lineage), and 12 (Iberian lineage). The geographic distribution of these lineages is shown in Fig. 1. The composition of the maternal lineages and the frequencies of haplotypes are presented in Tables 1 and 2. A clear geographical structuring of maternal lineages in Poland can be seen (Fig. 1). The Balkan maternal lineage is dominant and occurs mainly in the southern and central regions of the country. The northernmost regions contain lineages B (Iberian) and C (Apennine), with some individuals from the Balkan lineage.
Balkan Lineage (A)
The Balkan maternal lineage was the most common, represented by 71.4% of samples (Fig. 1). Haplotypes 4 and 7 were the most frequent, with equal frequencies of 28.1%, and haplotype 5 was found at 15.2%. The majority of Balkan stands were fixed for one haplotype (84%). In four stands, Balkan haplotypes were found together with Apennine haplotypes (haplotype 1 or 2). This lineage included the oldest trees in this study. Of the Balkan individuals, over 35% were up to 300 years old, 56% were up to 400 years old, and 8.9% (5 individuals) were over 400 years old. The oldest individual was 748 years old and is one of the oldest oak trees in Poland.
Iberian Lineage (B)
Out of the six Iberian haplotypes described by Petit et al. (2002), only haplotype 12 was detected in this study (Fig. 1). It was noted in 17 individuals (5.4% of the total sample set) and was confined to the northernmost part of Poland. All stands were monotypic. The youngest trees were 250 and 273 years old. Ten individuals were more than 300 years old. The age of the largest individual, estimated from its perimeter (1015 cm at breast height), was 706 years. It was the second oldest tree in the entire sample set.
Apennine Lineage (C)
The Apennine (C) lineage represents 23.2% of the total sample set and consists of haplotypes 1 and 2 (Fig. 1). The dominant one was haplotype 1, detected in 19% of the trees. The two haplotypes have mutually exclusive geographic distributions. Individuals carrying haplotype 1 were present in the north of Poland, whereas haplotype 2 was found in the south. The age of the oldest individual, which carried haplotype 1, was estimated at 613 years. Forty-seven percent of the trees were up to 300 years old, 30.5% were between 300 and 400 years old, and 22.2% were over 400 years old. Both pedunculate oak and sessile oak were of Apennine origin.
Discussion
Our results indicate that haplotypes of the Balkan lineage (A) are the most common in Poland, which is in agreement with the findings of Csaikl et al. (2002). This lineage probably appeared in Poland earlier than the other two lineages.
Isopollen maps show that at 10 000 BP, oak pollen values in southern Poland were slightly greater than 0.5%. This is suggestive of a forthcoming oak migration front from southern Europe. By 9 000 BP, three distinct routes of oak colonization can be recognized: one along the Baltic Sea, a southern one through the Moravian Gate, and one from the southeast. The maximum distribution of oak took place at 4500-4000 BP. From 3500 BP until the present, oak has consistently declined in pollen profiles (Milecka et al. 2004). The Apennine lineage, the second most frequently found group (23.2% of individuals), is represented by two haplotypes. Haplotype 1 occurred frequently in the north of Poland. This haplotype is commonly found in Central and Eastern Europe and even reaches Scandinavia, where it is the most frequent haplotype (König et al. 2002, Jensen et al. 2002). Dendrochronological age assessments showed that individuals with haplotype 1 are 200 to 613 years old, which strongly suggests that this lineage is native to Poland. Haplotype 2 was observed only in southern Poland and was detected in four populations. The movement of haplotype 2 in Europe proceeded from Italy northward through eastern Austria, Hungary, and Slovakia to Poland, and then further east. Isolated populations in Lithuania also carry this haplotype, as does a group of five populations in Slovakia (Tutkova-van Loo and Burg 2004). Based on the results presented here, the migration of haplotype 2 into Poland seems to have been marginal.
Unlike the Balkan and Apennine lineages, the Iberian lineage has not previously been considered autochthonous in Poland (Csaikl et al. 2002). Iberian haplotypes were found in a restricted area, mainly along the Polish coast of the Baltic Sea. Furthermore, in the earlier data only 2% of Iberian individuals were noted in monotypic stands, lending further support to the assertion that the Iberian lineage is not autochthonous. In the data presented here, members of the Iberian lineage were found only in monotypic stands. In regions where different recolonization paths intersect, fixation of one haplotype, or of a few haplotypes from one lineage, within a population suggests its autochthonous origin (Petit et al. 2002). Thus, stands fixed for the Iberian haplotype in Poland can be regarded as native. Moreover, the 200-year age criterion used for sampling renders these results especially reliable. The oldest individuals of Iberian origin were estimated to be approximately 700 years old. Hence, natural migration of oak from the Iberian Peninsula to Poland is likely. An Iberian origin has also been documented for Scots pine (Soranzo et al. 2000). This supports our finding that the Iberian refugium could indeed have contributed to the postglacial colonization process in this part of Europe.
Although the Iberian populations are distributed only in the northwestern part of Poland, along the coast of the Baltic Sea, they appear to connect with the general recolonization pattern. The colonization of Europe by oaks from the Iberian refugium took place along the coastal areas of Europe (Petit et al. 2002). None of the Iberian haplotypes was found in the eastern regions of Poland or in the other Baltic countries (Csaikl et al. 2002). The data indicate that the Iberian lineage migration route reached into Poland, extending east beyond the Vistula River. Contrary to previous reports, the eastern limit of this lineage therefore does not lie in Germany. The question is why more Iberian populations have not been detected in the northwestern territories of Poland. There are two reasonable explanations. The Iberian populations may either have become extinct during the large deforestation events noted especially in northern and central Poland since the 17th century, or they may have been missed during the sampling procedure due to their low frequency.
In a previous study, three Iberian haplotypes were detected in Poland: 10, 11 and the most frequent, haplotype 12 (Csaikl et al. 2002). We noted only haplotype 12. The discrepancy can be related to the sampling procedure. In this study, old oak trees and stands were selected, as they would resemble past autochthonous populations. Csaikl et al. (2002) studied the most valuable stands, which are not necessarily the oldest and native ones. In their data, Iberian haplotypes were found together with haplotypes from other lineages, which led the authors to the conclusion that the Iberian lineage in Poland is the result of seed transport. In the 19th century, forests in northern Poland were managed by German or German-educated foresters, and intensive seed transport might have occurred. Haplotype 10, found by Csaikl et al. (2002) but undetected in this study, was indeed reported in western Poland, but in artificial oak stands younger than 200 years (Kedzierska 2004). Haplotype 10 was described by König et al. (2002) as the most abundant of the haplotypes found in Germany (10, 11, and 12). Hence, it seems likely that oak seed transport from Germany to Poland occurred in the past.
Due to its geographic location, Poland is a point of intersection for the three maternal lineages. The geographic distribution of the maternal lineages is clearly structured, which may result from successive colonization and subsequent competition among populations originating from different refugial areas. The most significant result of this study is the natural presence of the Iberian lineage in Poland. However, due to its low frequency, the influence of this lineage is difficult to evaluate. A practical aspect of this work is that the results could be used to identify autochthonous oak stands, which is crucial for delimiting zones for the conservation of genetic resources.
Table 1. Age of studied oak individuals and detected cpDNA haplotypes.
Table 2. Frequency of cpDNA haplotypes and maternal lineages detected in white oaks in Poland. | 2018-12-12T10:22:16.962Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "43f139dfe9efb6074bbf382639aa92585b752244",
"oa_license": "CCBYSA",
"oa_url": "https://silvafennica.fi/pdf/240",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "43f139dfe9efb6074bbf382639aa92585b752244",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
262270001 | pes2o/s2orc | v3-fos-license | Quality and Quantity of Published Studies Evaluating Lumbar Fusion during the Past 10 Years: A Systematic Review
Study Design Systematic review. Clinical Questions (1) Has the proportion and number of randomized controlled trials (RCTs) as an indicator of quality of evidence regarding lumbar fusion increased over the past 10 years? (2) Is there a difference in the proportion of RCTs among the four primary fusion diagnoses (degenerative disk disease, spondylolisthesis, deformity, and adjacent segment disease) over the past 10 years? (3) Is there a difference in the type and quality of clinical outcomes measures reported among RCTs over time? (4) Is there a difference in the type and quality of adverse events measures reported among RCTs over time? (5) Are there changes in fusion surgical approach and techniques over time by diagnosis over the past 10 years? Methods Electronic databases and reference lists of key articles were searched from January 1, 2004, through December 31, 2013, to identify lumbar fusion RCTs. Fusion studies designed specifically to evaluate recombinant human bone morphogenetic protein-2 or other bone substitutes, revision surgery studies, nonrandomized comparison studies, case reports, case series, and cost-effectiveness studies were excluded. Results Forty-two RCTs between January 1, 2004, and December 31, 2013, met the inclusion criteria and form the basis for this report. There were 35 RCTs identified evaluating patients diagnosed with degenerative disk disease, 4 RCTs evaluating patients diagnosed with degenerative spondylolisthesis, and 3 RCTs evaluating patients with a combination of degenerative disk disease and degenerative spondylolisthesis. No RCTs were identified evaluating patients with deformity or adjacent segment disease. Conclusions This structured review demonstrates that there has been an increase in the available clinical database of RCTs using patient-reported outcomes evaluating the benefit of lumbar spinal fusion for the diagnoses of degenerative disk disease and degenerative spondylolisthesis. Gaps remain in the standardization of reportage of adverse events in such trials, as well as uniformity of surgical approaches used. Finally, continued efforts to develop higher-quality data for other surgical indications for lumbar fusion, most notably in the presence of adult spinal deformity and revision of prior surgical fusions, appear warranted.
Study Rationale and Context
Evidence-based medicine (EBM) emphasizes the prioritization of information from well-designed trials in health care decision making. This term now describes the use of the best clinical evidence as the basis for guidelines for the medical and surgical management of problems on a population level. Well-designed randomized controlled trials (RCTs) are considered the highest-level quality of evidence (level 1) regarding a treatment method. As such, clinicians and payers typically refer to them as justification for performance and coverage of specific treatments.
Lumbar fusion surgery is performed for a variety of spinal pathologies. In addition, lumbar fusion can be achieved via a variety of approaches, including isolated posterior fusion, as well as interbody fusion from posterior, lateral, or anterior approaches. 47 More recently, minimally invasive methods of fusion utilizing all of these approaches have also been devised. 7,31 Despite these improvements in surgical technique, some indications for lumbar fusion surgery, such as in the treatment of axial back pain from degenerative disk disease (DDD), remain controversial. 14,16 Other conditions such as instability, tumor, trauma, or spinal deformity are considered better-proven indications, although there remains significant variability of fusion utilization and technique performed nationally and internationally. 1,14 Given a relative lack of RCT-quality data, other analyses of billing databases have questioned the indication and benefit of lumbar fusion. However, in many cases these evaluations fail to define the surgical indication and often resort to a relatively nonspecific diagnosis such as "back pain," which leads to increased confusion for health care economists and hospital administrators, many of whom may lack a clinical understanding of surgical diagnoses. 15 Although many surgical patients' complaints may include back pain, a large number are not undergoing surgical fusion exclusively for that symptom but instead are due to associated features such as spinal instability, deformity, or neurologic compression. Thus, large database analyses are not an adequate substitute for higher-quality RCT data.
With the introduction of the Affordable Care Act and increased emphasis on comparative effectiveness research, more attention has been focused on the costs associated with spine care in the United States. 39 Concomitantly, there have been significant technological advances in spinal surgery, increasing the associated costs. Among other issues, questions about the benefits of bone morphogenetic protein and incomplete reportage of its complication profile have emerged. 10 It has also recently been shown that reporting of adverse events in cervical total disk trials was inconsistent. 1 All of these features argue for an increase in the quality of clinical research of spine surgical outcomes, both with respect to study design as well as clinical outcome and adverse events recording and reporting.
In this analysis, we set out to determine if there is a difference in the number and proportion of RCTs in the past 10 years among the four most common indications for lumbar spine fusion: DDD, spondylolisthesis, spinal deformity, and adjacent segment disease. We also sought to ascertain whether there has been an improvement in the consistency of clinical outcomes measured among RCTs over time, as well as in the quality of recording and reporting of adverse events. Finally, we also evaluated whether there were consistent changes in fusion surgical approaches reported over the same period.
Fig. 2. Proportion of randomized controlled trials (RCTs) as a surrogate for quality of evidence regarding lumbar fusion increasing over the past 10 years (y-axis: proportion of RCTs (%) by diagnosis; x-axis: year).
Retrieved for full-text review (n = 75).
• Over the course of the 10-year period, anterior, posterior, circumferential, transforaminal, and a combination of these approaches have been used.
• A posterior approach was used in 33.3%; circumferential in 21.4%; anterior in 19%; transforaminal in 11.9%; a combination of one or more approaches in 9.5%; and one study did not report a specific approach (2.4%).
• There were no discernible changes in treatment approaches over time or by diagnosis in the past 10 years.
Discussion
This structured review was performed in an effort to assess whether the quality of clinical research on lumbar fusion has shown consistent improvement over the past decade. In the end, we are unable to make clear statements regarding trends over this period. On the other hand, there are some positive features to be noted from our results. Although there has not been an apparent shift toward a greater percentage of RCT design among published studies, there has been a steady increase in the number of RCT studies published with a focus on DDD and on DS. As these are the two most common surgical indications for fusion, this is an encouraging finding. Although it is beyond the scope of this article to derive treatment guidelines, the numbers available suggest that a relatively high level of evidence has likely emerged on which to base such recommendations.
We are also encouraged by the relatively high percentage (88.1%) of RCTs using validated, patient-centered outcomes over the past decade. The most widely used questionnaire was the Oswestry Disability Index, which was used in 78.6% of reviewed RCTs. Although debate regarding which outcomes instruments are the best designed or the most responsive for patients receiving lumbar fusion is perhaps unsettled, the importance of using validated, patient-reported outcomes as opposed to clinician-reported outcomes is well accepted. This approach appears to be fairly consistently used by authors of the highest level of medical evidence in the field of lumbar fusion.
Unfortunately, the same cannot be said regarding the reportage of adverse events in these same studies. Although 81% of RCTs did include some discussion of adverse events, only 11.9% utilized some classification or scale of complications, which may in part reflect the lack of availability or development of clinical research tools with a valid weighting of adverse events following lumbar fusion surgery. We hope that this review may serve as an illustration of the need for such an effort.
The lack of a consistent approach to surgical fusion remains a barrier to development of a reliable body of high-quality clinical data on which to base treatment recommendations. Although the variety of approaches available does reflect a significant effort and investment in surgical innovation, it is unlikely that all of the approaches currently in use are equally safe or effective. Although undoubtedly some clinical decision making regarding approach is tailored to the needs of an individual patient, it is also likely driven at least in part by the training and experience of the surgeon performing the procedure. 27 This review highlights the need for higher-level comparisons of specific surgical approaches and techniques.

The lack of high-level data to assess fusion for patients with adult spinal deformity or adjacent segment disease remains an area of concern. The lack of published RCTs in these areas may reflect the even greater variations of clinical presentation and surgical approach among such patients. The comparatively smaller number of such patients also presents difficulty in obtaining patient cohorts of sufficient size to allow meaningful statistical comparisons. Despite such obstacles, however, patients and surgeons would undoubtedly benefit from efforts at improving the clinical data guiding treatment recommendations.

This review ultimately does not prove that the quality of the reported data is truly improved. A more detailed analysis of the actual content of the published studies would be required to gain a better understanding of their true level of quality. Nonetheless, this study does provide at least a partial assessment of the current landscape of lumbar spine clinical research. Our results do show that there appears to be an increasing adoption of an EBM-supported approach within the discipline of lumbar spine surgery over the past decade.
Conclusion
This structured review demonstrates that there has been an increase in the available clinical database of RCTs using patient-reported outcomes evaluating the benefit of lumbar spinal fusion for the diagnoses of DDD and DS. Gaps remain in the standardization of reportage of adverse events in such trials, as well as uniformity of surgical approaches used. Finally, continued efforts to develop higher-quality data for other surgical indications for lumbar fusion, most notably in the presence of adult spinal deformity and revision of prior surgical fusions, appear warranted. | 2018-04-03T01:38:31.906Z | 2015-06-01T00:00:00.000 | {
"year": 2015,
"sha1": "0c06bf16212533506e4d6232fa9268503631b5c3",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1055/s-0035-1552984",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8843ba7aa741aec9ac7be260419e4f4a9816b417",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1740108 | pes2o/s2orc | v3-fos-license | APPLICABILITY OF FETAL THORACIC AORTIC DIAMETER MEASUREMENT IN THE PREDICTION OF BIRTH WEIGHT IN HOLSTEIN-FRIESIAN COWS – SHORT COMMUNICATION
Transabdominal ultrasonography has been shown to be a useful and reliable method for assessing fetal well-being in horses and cattle. To test the applicability of fetal aortic diameter measurement in cattle, 44 late-term pregnant cows and heifers were examined 21 to 0 days prior to calving. Mean fetal aortic diameter was 2.07 ± 0.14 cm and mean fetal heart rate (FHR) was 109 ± 17 bpm. Three dead calves were dissected and their aortic diameter was measured in a water bath. The mean birth weight (n = 44) was 39.9 ± 5.8 kg. There was a significant negative correlation between FHR and fetal aortic diameter. However, although some studies have shown that fetal aortic diameter strongly correlates with birth weight in near-term horses and cattle, in this study there was no correlation between fetal aortic diameter and birth weight in Holstein-Friesian cows and heifers irrespective of whether the fetus was born alive or dead.
Despite the availability of improved techniques in the dairy industry, perinatal mortality and morbidity is still high in cattle, and the need for techniques suitable for monitoring fetal well-being is still growing (Kornmatitsuk et al., 2002). To investigate and decrease perinatal losses, the first step is to develop techniques suitable for evaluating fetal well-being under farm conditions. In human obstetrics, non-invasive transabdominal ultrasonography has been used for assessing fetal health status for almost forty years. From the 1980s, there were also attempts to develop antepartum assessment methods in veterinary medicine (Adams-Brendemuehl and Pipers, 1987; Reef et al., 1995, 1996). Reef's methodology has been used as a gold standard for late-term ultrasonographic examinations in horses and other species of domestic animals (Reef et al., 1995, 1996).
In cattle, there are only a few publications on the assessment of fetal well-being, although it has been shown that evaluation of the near-term fetus is potentially a reliable diagnostic tool to detect fetal abnormalities (Buczinski, 2009; Buczinski et al., 2011; Baska-Vincze et al., 2014). Buczinski et al. (2011) examined normal, high-risk and cloned groups of late-term pregnant cows under clinical conditions and found that the measurement of fetal thoracic aortic diameter strongly correlates with birth weight, as was shown in horses (Reef et al., 1995). They suggested fetal aortic diameter measurement as a possible tool for the prediction of intrauterine growth retardation (IUGR) or of an abnormally large fetus. In humans, horses, sheep and cattle, fetal heart rate (FHR) is the most commonly reported parameter related to fetal well-being.
The aim of this study was to evaluate the relationship of birth weight with prepartum sonographic measurements of fetal aortic diameter and fetal heart rate. Another objective was to test the applicability of ultrasonography in late-term pregnant cattle under farm conditions.
Late-term pregnant Holstein-Friesian cows and heifers (n = 44) were examined by transabdominal ultrasonography on a Hungarian dairy farm. All cows and heifers were in the last three weeks of gestation with a normal-course pregnancy. Examinations were made by the same veterinarian in the farm's building designated for veterinary examinations. During sonographic assessment, the animals were kept in a stock. The examination lasted 5-15 min and was made on the right side of the animal's abdomen as described by Buczinski (2009). Portable ultrasound equipment (Mindray M5 Vet®, Mindray Medical International Limited, China) was used with a 2.5-5 MHz macroconvex transducer. For lubrication, only propanol diluted in water (at a 1:3 ratio) was used on the skin surface, and clipping of the area was not necessary; however, the skin was cleaned before starting the examination. Initially, the fetal thorax was imaged in B-mode with the heart centred on the screen, and fetal heart rate was measured in M-mode and saved to the device (Fig. 1). Then the fetal aorta was captured in B-mode (Fig. 2) and the aortic diameter was measured three times. To reach the highest reproducibility, the measurement was standardised as follows: firstly, the measurements were made by the same operator; secondly, the inner-to-inner edges were captured at the same aortic segment; thirdly, all measurements were made during diastole (European Society of Cardiology Guidelines, 2014). From the three measured values of aortic diameter and heart rate, a mean was calculated and used for statistical analysis. Birth weight was measured using a commercially available scale (AEG PW 4923 scale, Electrolux AB, Sweden). The repeatability of the measurements (intraobserver variability) was assessed and the coefficient of variation (CV%) was calculated using the formula CV% = standard deviation/mean × 100 (Sokal and Rohlf, 1973).
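As a worked illustration of the repeatability calculation above, a minimal Python sketch applying the stated formula to hypothetical triplicate readings is shown below; the text does not state whether the sample or population standard deviation was used, so the sketch assumes the sample form, and the function name and values are illustrative only.

```python
import statistics

def cv_percent(values):
    """Coefficient of variation as defined in the text: CV% = SD / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical triplicate aortic-diameter readings (cm) for one fetus;
# the per-animal mean is the value that enters the correlation analysis.
diameters = [2.05, 2.12, 1.98]
print(round(statistics.mean(diameters), 2), "cm, CV% =", round(cv_percent(diameters), 1))
```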
After the initial ultrasound examinations, another three calves were weighed and dissected, and their aortic diameter was measured with the water bath method.
The aim was to obtain further data on the association between aortic diameter and birth weight. These three calves had died on the same dairy farm because of traumatic injuries sustained on their day of birth (< 24 hours). The calves were then kept frozen at −20 °C until the examination. After having been thawed to room temperature (Dissection Hall, Department and Clinic for Production Animals, Üllő, University of Veterinary Medicine, Budapest), the calves were measured and dissected. The thoracic and abdominal parts of the aorta, together with the heart and lungs, were placed into a bucket filled with tap water, and the aortic diameters were measured with the same ultrasound device. The sonographic images were captured using the freeze mode as described by Jeyakumar et al. (2013).

In 36 out of the 44 cases, fetal aortic diameter, FHR and birth weight were successfully measured. Fetal aortic diameter was 2.07 ± 0.14 cm, FHR was 109 ± 17 bpm, and birth weight was 39.8 ± 5.9 kg (mean ± SD). There was no significant correlation between fetal aortic diameter and birth weight (R = −0.023, P = 0.893). A significant negative correlation was detected between fetal aortic diameter and fetal heart rate (R = −0.41, P = 0.012). Statistical calculations (correlation) were done as described by StatSoft Inc. (2011). Repeated measurements of the fetal aortic diameter showed good reproducibility, with a CV% of 9.2. The results of the water bath measurements were as follows: Calf 1 weighed 26 kg and had a 2.00 cm aortic diameter; Calf 2 weighed 23 kg and had a 1.90 cm aortic diameter; Calf 3 weighed 40 kg and had a 1.78 cm aortic diameter. Although two of the dead calves were smaller than the average of the in vivo group, the results of the post-mortem group highlighted the absence of an association between fetal aortic diameter and birth weight.

Perinatal mortality in cattle is still high, although veterinarians increasingly apply advanced diagnostic tools in everyday practice. Therefore, there is a growing interest in studying and monitoring fetal well-being in the bovine species. This is the first study in which transabdominal ultrasonographic examinations were performed on late-term cows and heifers under farm conditions to measure fetal aortic diameter. It is suggested that such examinations can be performed both under farm conditions and in the clinical setting. From a practical point of view, clipping or shaving the hairs is not always necessary before the examination, but removing any dirt possibly present on the skin surface is essential. The mean fetal heart rate in these animals was similar to that described by Breukelman et al. (2006), who reported 114 and 109 bpm, and by Buczinski (2009), who reported 112 bpm in the last three weeks. Although some studies have shown that fetal aortic diameter strongly correlates with birth weight in horses (n = 30) and cattle (n = 13) in late-term pregnancy (Reef et al., 1995, 1996; Buczinski, 2009; Buczinski et al., 2011), we did not find a significant correlation between fetal aortic diameter and birth weight in Holstein-Friesian cows and heifers (R = −0.023, P = 0.893). However, there was a significant (P = 0.01) negative correlation (R = −0.41) between FHR and fetal aortic diameter: the bigger the fetal aortic diameter, the lower the FHR. This is similar to the phenomenon observed in humans (n = 19,200) when examining resting heart rates and infrarenal aortic diameters (Wei et al., 2015). Neither fetal sex and maternal parity nor difficulty of labour influenced the results in the present study.
In conclusion, fetal aortic diameter measurements performed three weeks before parturition cannot be used to predict birth weight in Holstein-Friesian cattle. Although a limited number of animals were examined, there was no association between aortic diameter measured in the last three weeks of pregnancy and birth weight. Regarding the dissected calves, there was also no correlation between the weights and aortic diameters measured post mortem. The authors think that there might be a non-linear pattern or tendency in the growth of bovine fetuses in the last weeks of gestation, which could explain the results obtained in the present study and should be evaluated in the future. At the same time, a negative correlation between FHR and fetal aortic diameter was found; however, the available data are limited even in human medicine. The associations of pathological conditions (cardiovascular diseases) with resting heart rate and aortic diameter in human beings demonstrate the importance of this research field, which is, however, a relatively new subject of interest in veterinary medicine. Measurements of aortic diameters are not always straightforward; therefore, some limitations are present in all examination techniques (European Society of Cardiology Guidelines, 2014). Reproducible, reliable and more sensitive methods should be introduced to monitor fetal and neonatal well-being in bovine medicine to reduce perinatal mortality.
Fig. 1. Measurement of bovine fetal heart rate by M-mode ultrasonography (obtained with a 3.5 MHz macroconvex transducer at 272 days of gestation in a Holstein-Friesian cow) with a built-in program of the ultrasound. LV = left ventricle
Fig. 2. Measurement of fetal lateral aortic diameter by B-mode ultrasonography (obtained with a 3.5 MHz macroconvex transducer at 275 days of gestation in a Holstein-Friesian cow). Note that the measurement is made between the inner edges of the aortic wall with the callipers of the ultrasound. | 2018-04-03T03:17:40.949Z | 2017-02-28T00:00:00.000 | {
"year": 2017,
"sha1": "51c14faafc0d09da577e916fbe3cb3958a4e75ae",
"oa_license": "CCBY",
"oa_url": "http://akademiai.com/doi/pdf/10.1556/004.2017.006",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "74c4aff853012b12ac58776922ed4d6f96683902",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
26894244 | pes2o/s2orc | v3-fos-license | Stereo Image Retargeting with Shift-Map ∗
SUMMARY We propose a new stereo image retargeting method based on the framework of shift-map image editing. Retargeting is the process of changing the image size according to the target display while preserving as much of the richness of the image as possible, and is often applied to monocular images and videos. Retargeting stereo images poses a new challenge because pixel correspondences between the stereo pair should be preserved to keep the scene’s structure. The main contribution of this paper is integrating a stereo correspondence constraint into the retargeting process. Among several retargeting methods, we adopt shift-map image editing because this framework can be extended naturally to stereo images, as we show in this paper. We confirmed the effectiveness of our method through experiments.
Introduction
3D image technology plays an important role in virtual reality and augmented reality systems [1], [2]. Stereoscopic vision, as the simplest form of 3D visions, is widely used because it can provide a higher sense of reality than 2D images by just presenting a pair of stereo images corresponding to left and right eyesights. We focus on the problem of stereo image retargeting, which is essential in postproduction of stereo images.
Retargeting is the process of fitting the image size to various display devices with different resolutions. Recent retargeting methods aim to do more than simply scaling or cropping images. These methods add/remove non-salient regions to/from the target image while preserving as much of the salient regions as possible [3]- [8]. However, most studies have focused on monocular images and videos. Only a few studies have been conducted on stereo images. Retargeting stereo images imposes a new challenge because pixel correspondences between the stereo pair should be preserved to maintain consistency. The key idea of this paper is to integrate a stereo correspondence constraint into the retargeting process to keep the underlying scene structure unchanged.
Among several retargeting methods, we adopt shift-map image editing [7]. This method can be used to perform various editing operations, such as retargeting, inpainting, reshuffling (object rearrangement), and image composition, in a unified manner, with a shift-map, which represents the correspondence between the input and output pixels. The resulting image is obtained by global optimization of an energy function which is defined over the shift-map and encodes pixel saliencies, user-defined constraints, image smoothness, and so on. This framework can be extended naturally to stereo images with our stereo correspondence constraint.
The most similar work to ours is that of Utsugi et al. [9]. They extended seam-carving [3] to stereo images by adding/removing corresponding seams on both images simultaneously. Their method often falls into local optima because seam-carving is based on a greedy search. Our method is more stable due to the nature of global optimization of the shift-map editing framework.
The rest of this paper is organized as follows. In Sect. 2, we describe the algorithm of shift-map image editing [7] as preparation. In Sect. 3, we propose our stereo image retargeting method by extending the shift-map framework to stereo images with the stereo correspondence constraint. Experimental validations are presented in Sect. 4. Section 5 summarizes this paper.
Shift-Map Image Editing
A shift-map M represents the correspondence between an input image I' and an output image I. When the shift of a pixel p is M(p), the output pixel I(p) is derived from I'(p'), where p' = p + M(p). An illustration is shown in Fig. 1 (a).
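To make this mapping concrete, the following minimal Python sketch (not from the paper; the function name and array layout are illustrative) renders the output image from a given shift-map, assuming every shifted coordinate p + M(p) falls inside the input image.

```python
import numpy as np

def render_from_shift_map(I_in, M):
    """Compose the output image from a shift-map: I(p) = I'(p + M(p)).

    I_in : input image I', shape (H', W') or (H', W', C)
    M    : integer shift-map, shape (H, W, 2), holding (dy, dx) per output pixel
    """
    H, W = M.shape[:2]
    out = np.empty((H, W) + I_in.shape[2:], dtype=I_in.dtype)
    for y in range(H):
        for x in range(W):
            dy, dx = M[y, x]
            out[y, x] = I_in[y + dy, x + dx]  # assumes in-range shifts
    return out
```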
The optimal shift-map minimizes the following energy:

$$E(M) = \sum_{p} E_{\text{data}}(M(p)) + \alpha \sum_{(p,q) \in N} E_{\text{smooth}}(M(p), M(q)) \qquad (1)$$
The data term E_data encodes user-defined constraints and pixel saliencies. The smoothness term E_smooth penalizes artifacts in the output image and is defined over pairs of neighboring pixels N.
The design of the data term E_data is key to implementing various image editing tasks. E_data takes complicated forms for image reshuffling applications, but becomes relatively simple for image retargeting. The minimum constraint for E_data is that the rightmost and leftmost columns of the output image must be derived from the rightmost and leftmost columns of the input image, respectively. For the leftmost column,

$$E_{\text{data}}(M(p)) = \infty \quad \text{if } p.x = 0 \text{ and } M(p) \neq (0, 0), \qquad (2)$$

and for the rightmost column,

$$E_{\text{data}}(M(p)) = \infty \quad \text{if } p.x = I_w - 1 \text{ and } M(p) \neq (I'_w - I_w, 0), \qquad (3)$$

where I_w and I'_w are the widths of I and I', respectively. Optionally, pixel saliencies S can be specified for the remaining pixels.
To enforce that a pixel in the input image I'(p') should disappear in the output image, the algorithm sets a larger penalty for selecting I'(p') by setting a small value to S(p'). Meanwhile, to keep I'(p'), it sets a large value to S(p'). The smoothness term E_smooth penalizes discontinuities in the output image. E_smooth is defined over pairs of neighboring pixels (p, q) ∈ N, where ∇ denotes the gradient; a larger penalty is imposed for larger changes in the color channels at discontinuities of the shift-map (M(p) ≠ M(q)). Note that image variations without shift-map discontinuities are not penalized.
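Because the display form of Eq. (5) is not preserved here, the following Python sketch shows one plausible per-edge smoothness cost consistent with the description above: zero when neighboring pixels share a shift, and otherwise a color mismatch plus a β-weighted gradient mismatch (β = 2 is the value reported in Sect. 4). The helper names and the exact functional form are assumptions, not the published equation.

```python
import numpy as np

def _color(I, p, s):
    """Color of the input pixel p shifted by s = (dy, dx); indices assumed in range."""
    return np.atleast_1d(I[p[0] + s[0], p[1] + s[1]]).astype(float)

def _grad(I, p, s):
    """Forward-difference gradient at the shifted location."""
    y, x = p[0] + s[0], p[1] + s[1]
    gy = np.atleast_1d(I[y + 1, x]).astype(float) - np.atleast_1d(I[y, x]).astype(float)
    gx = np.atleast_1d(I[y, x + 1]).astype(float) - np.atleast_1d(I[y, x]).astype(float)
    return np.concatenate([gy, gx])

def smoothness_cost(I_in, p, q, Mp, Mq, beta=2.0):
    """Per-edge smoothness penalty: zero for equal shifts, otherwise the
    color and gradient discontinuity the seam introduces at p and q."""
    if Mp == Mq:
        return 0.0  # image variation without a shift discontinuity is not penalized
    color = np.sum((_color(I_in, p, Mq) - _color(I_in, p, Mp)) ** 2) + \
            np.sum((_color(I_in, q, Mp) - _color(I_in, q, Mq)) ** 2)
    grad = np.sum((_grad(I_in, p, Mq) - _grad(I_in, p, Mp)) ** 2) + \
           np.sum((_grad(I_in, q, Mp) - _grad(I_in, q, Mq)) ** 2)
    return color + beta * grad
```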
The objective function E is minimized using multi-label graph cuts [10]-[12]. A hierarchical optimization is used to reduce the computational cost: first, all possible shifts are examined at the lowest resolution; at a higher resolution, the shift-map obtained from the lower resolution is used as the initial guess. Limiting the range of possible shifts M(p) also decreases the computational cost. For retargeting applications, shifts in {0, . . . , I'_w − I_w} are allowed in the horizontal direction, but vertical shifts are limited to within a few pixels (the label set is enumerated in the sketch below).
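A minimal Python sketch of this candidate label set follows; the function name and the example image widths are illustrative, and the hierarchical coarse-to-fine pass is not shown.

```python
def candidate_shifts(in_width, out_width, max_dy=4):
    """All (dy, dx) shift labels for retargeting: horizontal shifts in
    0 .. in_width - out_width, vertical shifts within +/- max_dy pixels."""
    assert in_width >= out_width, "retargeting to a narrower output"
    return [(dy, dx)
            for dy in range(-max_dy, max_dy + 1)
            for dx in range(in_width - out_width + 1)]

# e.g., shrinking a 358 px wide image to ~75% width (268 px), +/-4 px vertically
labels = candidate_shifts(358, 268)
print(len(labels))  # 9 * 91 = 819 candidate shifts per pixel
```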
Proposed Method
We propose a stereo image retargeting method based on the framework of shift-map image editing. We integrate a stereo correspondence constraint into the framework to preserve pixel correspondences between a pair of stereo images.
Shift-Maps for Stereo Images
The proposed method optimizes shift-maps for the left and right images simultaneously to maintain stereo correspondences. Let M_L and M_R be shift-maps for the left and right images, respectively, as shown in Fig. 1 (b). The optimal shift-maps minimize the following energy:

$$E = E_{\text{intra}}(M_L) + E_{\text{intra}}(M_R) + E_{\text{inter}}(M_L, M_R) \qquad (8)$$

where E_intra is the intra-image energy of a shift-map and E_inter is the inter-image energy measuring consistency between the stereo images. We use the energies E_data and E_smooth in Eq. (1) as E_intra. E_inter is a new term that represents the stereo correspondence constraint we propose, which we describe next.
Stereo Correspondence Constraint
I_L and I_R denote the output left and right images, and I'_L and I'_R the input left and right images. Let p'_L ∈ I'_L and p'_R ∈ I'_R be the pixels shifted from p_L ∈ I_L and p_R ∈ I_R. Based on the definition of shift-maps M_L and M_R, we obtain

$$p'_L = p_L + M_L(p_L), \qquad (9)$$
$$p'_R = p_R + M_R(p_R). \qquad (10)$$

These relations are illustrated in Fig. 1 (b). We assume the input stereo images are rectified in advance, and that pixelwise disparities between them are available. Let D'_LR(p'_L) be the left-to-right horizontal disparity of the pixel p'_L. The right-to-left disparity D'_RL(p'_R) is defined similarly. We write p'_L ∼ p'_R if the pixels p'_L ∈ I'_L and p'_R ∈ I'_R represent the same point of the scene, i.e.,

$$p'_R.x = p'_L.x - D'_{LR}(p'_L) \quad \text{and} \quad p'_R.y = p'_L.y, \qquad (11)$$
where *.x and *.y denote the x- and y-coordinates of the pixel *, respectively. This state is referred to as the stereo correspondence between p'_L and p'_R. Without occlusions,

$$D'_{LR}(p'_L) = D'_{RL}(p'_R) \qquad (12)$$

is satisfied by definition. According to the definition of the shift-maps, it is naturally required that the output disparities, D_LR(p_L) and D_RL(p_R), are derived from those of the input images using the shift-maps M_L(p_L) and M_R(p_R) (Eqs. (13) and (14)). Using Eqs. (13) and (14), we define the stereo correspondence between the output pixels p_L ∈ I_L and p_R ∈ I_R (Eq. (15)). Note that D_LR(p_L) = D_RL(p_R) is not always satisfied because p'_L and p'_R are given by independent shift-maps, as shown in Eqs. (9) and (10). Equations (11) and (15) give the stereo correspondence constraint, which can be used to define the inter-image energy E_inter in Eq. (8) as

$$E_{\text{inter}}(M_L, M_R) = K \sum_{p_L, p_R} \big[ (p_L \sim p_R) \neq (p'_L \sim p'_R) \big] \qquad (16)$$
where [·] is a function that returns 1 if the condition in the bracket is satisfied, or 0 otherwise. This equation counts the number of pixel pairs with stereo inconsistency before and after retargeting. Stereo inconsistency arises if the correspondence in the input images is broken in the output images or vice versa. K is a large positive value to penalize stereo inconsistencies. Minimizing this term enforces stereo consistency between the left and right output images.
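A minimal Python sketch of this counting follows; `corresponds_in` and `corresponds_out` stand for assumed helper predicates implementing the input (Eq. (11)) and output (Eq. (15)) correspondence tests for a pixel pair, and K = 1000 is the value used in Sect. 4.

```python
def inter_image_energy(pixel_pairs, corresponds_in, corresponds_out, K=1000.0):
    """E_inter in the spirit of Eq. (16): K times the number of pixel pairs
    whose stereo correspondence differs between the input and output images."""
    mismatches = sum(
        1 for p_left, p_right in pixel_pairs
        if corresponds_in(p_left, p_right) != corresponds_out(p_left, p_right)
    )
    return K * mismatches
```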
Experiment
We implemented our method with α = 1 in Eq. (1), β = 2 in Eq. (5), and K = 1000 in Eq. (16). The pixel saliencies S(p'_L) and S(p'_R) in Eq. (4) were set to zero. A 3-level hierarchical representation was used for optimization. The vertical range of the shifts was set to ±4 pixels.
We tested our method with the "Tsukuba" sequence from the Middlebury stereo dataset [13], which contains five horizontal viewpoints. We used the leftmost and central images as the left and right input images, which are shown in Fig. 2 (a). The image sizes were 358 × 252 pixels. The ground-truth disparity contained in the dataset was used to define the stereo correspondences in the input images.
We compared several methods for resizing the image pair to a 75% width without changing the height. We applied the stereo matching method [14], which is available in OpenCV 2.1, to the resulting image pair to obtain a disparity map. This disparity map is expected to be similar to the one obtained from the input image pair because the underlying scene structure should be unchanged before and after retargeting. Figures 2 (b)-(e) show the results of our experiment. (b) is the result of linear scaling, where the entire scene was uniformly shrunk in the horizontal direction. The disparity values were also shrunk to 75% of the input disparities. (c) is the case where the left and right images were independently resized by shift-map editing without a stereo correspondence constraint. Each of the resulting images looks natural without visible distortions, but the scene structure was entirely destroyed, as can be seen from the disparity map between the resulting images. (d) is the result of our method, where the resulting images were natural, similar to those in (c). Furthermore, the resulting disparity map indicates that the entire scene structure and the disparity values remained unchanged after resizing. This result proves the effectiveness of our stereo correspondence constraint. (e) shows the result of stereo seam carving [9], in which stereo correspondences are also handled, albeit in a different manner. However, due to the nature of seam carving, visible distortions (i.e., some straight lines are curved) were found in the resulting images.
Conclusion
We proposed a new stereo image retargeting method based on the framework of shift-map image editing. We integrated a stereo correspondence constraint into the shift-map framework. Our experimental results show that this constraint is effective in preserving the underlying scene structure and producing natural-looking results.
Our future work is to extend other image editing operations to stereo images. Various image editing operations such as reshuffling, inpainting, and image composition can be handled with the shift-map editing framework, and our stereo correspondence constraint would also be applicable to these operations. We also plan to extend our method to handle user-specified constraints such as depth control of each object. | 2022-05-31T19:53:36.471Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "5d804327576e5d5ab7bacc22c960d33adda633e1",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/transinf/E94.D/6/E94.D_6_1345/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5d804327576e5d5ab7bacc22c960d33adda633e1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
1621884 | pes2o/s2orc | v3-fos-license | Robot-assisted pancreatoduodenectomy with preservation of the vascular supply for autologous islet cell isolation and transplantation: a case report
Introduction For patients with chronic pancreatitis presenting with medically intractable abdominal pain, surgical intervention may be the only treatment option. However, extensive pancreatic resections are typically performed open and are associated with a substantial amount of postoperative pain, wound complications and long recovery time. Minimally invasive surgery offers an avenue to improve results; however, current limitations of laparoscopic surgery render its application in the setting of chronic pancreatitis technically demanding. Additionally, pancreatic resections are associated with a high incidence of diabetes. Transplantation of islets isolated from the resected pancreas portion offers a way to prevent post-surgical diabetes; however, preservation of the vascular supply during pancreatic resection, which determines islet cell viability, is technically difficult using current laparoscopic approaches. With recent advances in the surgical field, robotic surgery now provides a means to overcome these obstacles to achieve the end goals of pain relief and preserved endocrine function. We present the first report of a novel, minimally invasive robotic approach for resection of the pancreatic head that preserves vascular supply and enables the isolation of a high yield of viable islets for transplantation. Case presentation A 35-year old Caucasian woman presented with intractable chronic abdominal pain secondary to chronic pancreatitis, with a stricture of her main pancreatic duct at the level of the ampulla of Vater and distal dilatation. She was offered a robotic-assisted pylorus-preserving pancreatoduodenectomy and subsequent islet transplantation, to both provide pain relief and preserve insulin-secretory reserves. Conclusion We present a novel, minimally invasive robotic approach for resection of the pancreatic head with complete preservation of the vascular supply, minimal warm ischemia time (less than three minutes) and excellent islet recovery (134,727 islet equivalent). Our patient is currently pain-free with normal glycemic control. Robot-assisted pylorus-preserving pancreatoduodenectomy and autologous islet transplantation can be safely performed and has the potential to minimize operative traumas as well as to partially preserve endocrine function. Results from this case report suggest that this dual procedure should be considered as a treatment option for patients with chronic pancreatitis at earlier stages of the disease, before irreversible islet loss occurs.
Introduction
Chronic pancreatitis (CP) is an inflammatory process of the pancreas characterized by irreversible morphological changes that can lead to impaired endocrine and exocrine function. The treatment of patients with CP remains controversial; conservative treatment is typically preferred over surgery. Surgical intervention is often considered late in the course of the disease and remains the last resort for patients presenting with medically intractable abdominal pain [1]. In these cases, treatment needs to be adapted to the individual situation, as CP is a heterogeneous group of diseases with different etiologies and presentations. Several aspects must be considered, such as the surgical approach, metabolic consequences and surgical technique. For instance, decompression of the pancreatic duct may be the most appropriate treatment if the pancreatic duct is dilated, whereas partial or total pancreatic resection may be the only option if the pancreatic duct is normal in size or narrow. In patients whose symptoms are not resolved after drainage procedures or partial pancreatic resections, or those who have diffuse involvement of the gland, a total pancreatectomy may be the only treatment option [2].
While total pancreatectomy classically provides pain relief in about 70% of patients [2], it is not an innocuous procedure. Extensive pancreatic resections are typically performed open and are associated with a substantial amount of postoperative pain, wound complications and long recovery time. While this has motivated several attempts to adopt a minimally invasive approach, laparoscopic pancreatic surgery persists as one of the most challenging applications of minimally invasive surgery. Since the first report of laparoscopic pylorus-preserving pancreatoduodenectomy in 1994, further attempts have failed to report significant benefits over an open approach [3,4]. A recent literature review of 146 procedures since 1994 found that a laparoscopic approach is not universally accepted due to the technical difficulty for the surgeon, length of operating time and the absence of a reduced length of hospital stay for the patient [5].
Recently, surgeons have begun to overcome the limitations of traditional laparoscopic surgery through robotic surgery. The robotic system improves the performance of minimally invasive surgery by restoring three-dimensional vision, enhancing surgeon dexterity and eliminating tremor. Recently, we published the largest series of robotic pancreatic surgeries (n = 124) to date, demonstrating both feasibility and safety [6]. Additional attempts have demonstrated the success of robotassisted pancreatic resections and reconstructions [7].
Although the main goal of the operation is to improve the quality of life in patients with CP by providing pain relief, metabolic consequences should not be ignored. Post-pancreatectomy diabetes is typically brittle, as a result of concomitant deficiency in insulin secretion and counter regulatory hormones, and difficult to manage. Continued alcohol abuse or poor nutrition, combined with maldigestion, further complicate this management. It has been found that Whipple procedures (the standard pancreatoduodenectomy) result in a 20% increase in the incidence of diabetes [1]. In patients seeking surgical treatment for CP, the cost of long-term morbidity from diabetes and endocrine deficiency must be assessed.
Autologous islet transplantation (AIT), first reported to preserve islet function following a near-total pancreatectomy in 1977, has been consistently demonstrated as a safe procedure that can prevent diabetes long-term in patients with CP [8][9][10]. The success rate largely depends on the amount of islets infused and their viability, which is directly linked to the extent of ischemia they are exposed to during the pancreatectomy [9]. In order to avoid extended warm ischemia time, the vascular supply of the pancreas needs to be preserved until the last moment of the resection. Preservation of the vascular supply is difficult during a standard Whipple procedure. Additionally, the insulin-secretory reserves of the pancreatic remnant are typically sufficient to prevent postsurgical diabetes. Therefore, most surgeons would not consider isolating islets from solely the pancreatic head. However, the fate of the pancreatic remnant is unknown. Progressive fibrosis may destroy the remaining islets over time or lead to pain recurrence, necessitating a complete pancreatectomy. Furthermore, the probability of insulin independence after AIT progressively declines with increasing fibrosis [8]. Thus, it would be beneficial to preserve islets during the initial pancreatic head resection.
In addition to providing a platform to perform a minimally invasive pancreatectomy in the technically demanding setting of CP, robot assistance offers the possibility to preserve the vascular supply of the pancreatic head during surgery for islet isolation. In consideration of the high rate of diabetes (approximately 50% five years after onset) observed in the natural history of CP [11,12], surgical intervention that successfully provides pain relief and preserves the endocrine function earlier in the course of CP would be of significant benefit to the patient. Herein we present a case report documenting our experience with robot-assisted pyloruspreserving pancreatoduodenectomy (RA-PPPD) with minimal warm ischemia time and excellent islet recovery for AIT in a patient with CP.
Case presentation
Our patient was a 35-year old Caucasian woman with CP of unknown etiology. Her main complaint at the time of the consultation was severe chronic abdominal pain refractory to narcotic pain medication. Additionally, she was experiencing nausea and vomiting. Her pancreatic function was normal, with a hemoglobin A1C level of 5.4% and basal C-peptide level of 1.3 ng/mL. The history of her present illness was significant for several episodes of recurrent pancreatitis over the past six years after undergoing an open cholecystectomy with subsequent removal of a retained common bile duct stone by endoscopic retrograde cholangiopancreatography (ERCP). Her past medical history was also significant for psoriasis and heavy alcohol consumption, which likely contributed to the subsequent development of CP, although she reported abstinence for the past year.
At the time of her admission, an ERCP was performed and demonstrated the presence of a previous sphincterotomy and a dilated common bile duct of approximately 11 mm without filling defects. A stricture of her main pancreatic duct was observed at the level of the ampulla of Vater; dilatation of the remaining portion of the pancreatic duct was present. An endoscopic ultrasound was performed and revealed sonographic changes consistent with mild CP (2.5 mm pancreatic duct, heterogeneous parenchyma) according to the Cambridge classification [13]. Magnetic resonance cholangiopancreatography and computed tomography of her abdomen were also performed and demonstrated findings consistent with those observed in the ERCP. Given the young age of our patient, we gave priority to surgical therapy to avoid additional endoscopic procedures. After discussing the various therapeutic options with our patient, she chose an RA-PPPD, for better drainage of the distal pancreas and pain relief. Additionally, in consideration of her young age and the future unknown fate of the pancreatic remnant, our patient was offered AIT to preserve endocrine function.
After induction of general anesthesia, our patient was placed in the lithotomy position, with slight reverse Trendelenburg, and her abdomen was prepared and draped in the usual sterile fashion. Trocars were placed as indicated in Figure 1. The Da Vinci robotic surgical system (Intuitive Surgical, Inc. Sunnyvale, CA, USA) was docked into position, with a viewpoint from our patient's head. We mobilized the right colonic flexure, exposed the second portion of her duodenum and completed mobilization of the pancreatic head. The pancreatic head was enlarged and fibrotic. Next, her hepatic hilum was dissected, her common hepatic artery was exposed and her right gastric artery was ligated. The origin of her gastroduodenal artery was prepared with a vessel loop but not divided (Figure 2A) to preserve the blood supply to the head of the pancreas. The dissection of her gastrocolic ligament was completed, exposing the inferior border of her pancreas and the pancreatic neck. The neck of her pancreas was prepared and her superior mesenteric vein was widely exposed. Next, the pylorus was prepared; her right gastroepiploic artery and vein were divided. The first portion of her duodenum was divided 2 cm distal to the pylorus using a stapling device. The first loop of her jejunum was transected using a stapler device as well. Her duodenum was retracted, exposing the uncinate process. Subsequently, her common bile duct was transected and the dissection was conducted in the neck of her pancreas, dividing the pancreatic neck with the Harmonic scalpel. Her pancreatic duct was enlarged, measuring approximately 3 mm to 4 mm. The dissection proceeded cautiously until the entire head of her pancreas was connected only to her gastroduodenal artery and superior pancreaticoduodenal vein. Immediately before the transection, a small Pfannenstiel incision was made; a hand access device (Lap Disc, Ethicon, Cincinnati, OH, USA) was inserted with the aim of preserving the pneumoperitoneum. At this point, her gastroduodenal artery and superior pancreaticoduodenal vein were clipped and divided ( Figure 2B, C). The specimen was placed in an endobag and extracted immediately through the mini laparotomy previously performed. Her pancreas was flushed with University of Wisconsin solution on the back-bench and brought to the islet isolation facility in a sterile bag on ice for processing.
In the interim, the reconstruction phase of the operation was initiated. A pancreaticogastrostomy was performed. Next, a retrocolic end-to-side hepaticojejunostomy was created with the first loop of her jejunum. The last anastomosis was an end-to-side two-layer pylorojejunostomy, 40 cm distal to the hepaticojejunal anastomosis. Once the reconstruction was completed, her inferior mesenteric vein was dissected and cannulated with a 16-gauge cannula in preparation for the islet cell infusion.
The islet isolation procedure was conducted as previously described [14]. In brief, the pancreatic duct was cannulated and injected with a purified collagenase solution (Serva, Heidelberg, Germany). The pancreas was cut into small pieces and placed into a modified Ricordi digestion system. Under microscopic control of repeat samples, the digestion was stopped by dilution and cooling as soon as 50% of the islets were free from the exocrine tissue. The digest was collected and washed by repeated centrifugation. After a quality assessment, the islet preparation was placed into several syringes and brought into the operating room. The islets were successfully infused into the portal stream and her inferior mesenteric vein was subsequently ligated (Figure 3).
Our patient tolerated the procedure well. The weight of the resected pancreas portion was 47 g, after trimming of non-pancreatic tissue. Fifteen milliliters of tissue containing 134,727 islet equivalent (IEQ; 2867 IEQ per gram of pancreas or 2449 IEQ per kilogram of the recipient body weight) were collected. The viability of the tissue was 97%, as measured by propidium iodide and cytogreen fluorescent staining. The operation lasted six hours and thirty minutes, with an estimated blood loss of 200 mL. Our patient was maintained on an intravenous insulin drip during the first two days after the operation. She was subsequently transitioned to 4U to 6U of long-acting insulin daily. Our patient's recovery was uneventful; her C-peptide level on the eighth postoperative day was 2.2 ng/mL. She was discharged home on the ninth postoperative day with pain improvement and was maintained on low dose insulin. During her last follow-up 45 days after the surgery, she was normoglycemic without any insulin injection and reported complete resolution of her pain.
Discussion
In this case, we were able to preserve the vascular supply to the pancreatic head until the conclusion of the procedure, ligating the gastroduodenal artery immediately before transecting the head of the pancreas. This was facilitated by a view and approach, provided by the position of the robotic arms and three-dimensional camera, that is difficult and only rarely possible in open surgery. The warm ischemia time was under three minutes and we were able to isolate 2449 IEQ/kg, which is a remarkable islet yield considering that only the pancreatic head was resected. The chances of remaining insulin independent are significantly increased if the islet yield is above 2500 IEQ/kg [8][9][10].
Conclusion
We present a novel, minimally invasive robotic approach for resection of the pancreatic head with complete preservation of the vascular supply, minimal warm ischemia time and excellent islet recovery. RA-PPPD and AIT can be safely performed, with the potential to minimize operative trauma and partially preserve endocrine function. Transplantation of a high yield of viable islets from the resected pancreatic portion in our patient reduces the risk of her developing diabetes in the future. Pancreatic resection for CP has good long-term efficacy for pain control, but is typically performed late in the course of the disease. With the availability of RA-PPPD and AIT, surgical intervention should be considered and offered earlier, to preserve endocrine function before irreversible islet loss. Figure 2 Resection of the pancreatic head. (A) The gastroduodenal artery (GDA) was prepared and a vessel loop was placed. Once the pancreatic head was completely mobilized, the GDA was (B) ligated and (C) transected, with immediate removal of the specimen for cold flush with preservation solution on the back-bench.
Figure 3
Ligation of her inferior mesenteric vein. Her inferior mesenteric vein was identified, dissected and distally ligated. A cannula was inserted for infusion of the pancreatic digest for autologous islet transplantation. | 2017-06-24T17:34:30.669Z | 2012-03-02T00:00:00.000 | {
"year": 2012,
"sha1": "bbb3a70b312b30aed80e0ec44d52caa45f95c1b8",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-6-74",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f92785865d6cac2e253f3442633e1e269cd67f99",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238743414 | pes2o/s2orc | v3-fos-license | The Legacy of the Idrija Mine Twenty-Five Years after Closing: Is Mercury in the Water Column of the Gulf of Trieste Still an Environmental Issue?
Mercury (Hg) contamination in the Gulf of Trieste (northern Adriatic Sea) due to mining activity in Idrija (Slovenia) still represents an issue of environmental concern. The Isonzo/Soča River’s freshwater inputs have been identified as the main source of Hg into the Gulf, especially following periods of medium-high discharge. This research aims to evaluate the occurrence and distribution of dissolved (DHg) and particulate (PHg) Hg along the water column in the northernmost sector of the Gulf, a shallow and sheltered embayment suitable for the accumulation of fine sediments. Sediment and water samples were collected under unperturbed and perturbed environmental conditions induced by natural and anthropogenic factors. Mercury in the sediments (0.77–6.39 µg g−1) and its relationship to grain size were found to be consistent with previous research focused on the entire Gulf, testifying to the common origin of the sediment. Results showed a notable variability of DHg (<LOD–149 ng L−1) and PHg (0.39–12.5 ng L−1) depending on the interaction between riverine and marine hydrological conditions. Mercury was found to be mainly partitioned in the suspended particles, especially following periods of high discharge, thus confirming the crucial role of the river inputs in regulating PHg distribution in the Gulf.
Introduction
Among potentially toxic trace elements (PTEs) found in the environment, mercury (Hg) is a focus of global concern and was included among the World Health Organization's top ten "chemicals of concern" in 2017 [1].
Mining activity and related mineral processing, as well as coal combustion and industrial activities (e.g., chlor-alkali plants), are generally considered among the major anthropogenic sources of Hg [2] and other PTEs in the environment. Atmospheric deposition, erosion and riverine inputs of suspended particulate matter (SPM) contribute to conveying Hg to estuaries and marine-coastal areas, where the element accumulates in the bottom sediments [3,4]. Indeed, the sediment compartment may act both as a sink and as a secondary source of contamination due to resuspension events and remobilisation processes, with the subsequent release of both dissolved and particulate Hg species into the water column [5,6].
Moreover, the top few centimetres of sediment often represent the primary site for the production of methylmercury (MeHg) [7][8][9][10], the organic form of Hg of main concern due to its high toxicity and bioaccumulation potential in the aquatic food chain [8,[11][12][13].
In the offshore marine sediments of the Mediterranean Sea, Hg reaches concentrations (avg. 0.10–0.20 µg g−1, [14]) which testify to an enrichment with respect to the worldwide natural background (0.03 µg g−1) [15] due to both natural and anthropogenic sources. In the Gulf of Trieste, the occurrence and behaviour of Hg have been the main topics of several studies focused on coastal sediment contamination [19,30], transport and distribution of Hg associated with the SPM at the Isonzo River mouth [22,23], as well as Hg cycling at the sediment-water interface [31,32]. However, little information is currently available on the occurrence of dissolved (DHg) and particulate (PHg) Hg in the northernmost sector of the Gulf of Trieste (Bay of Panzano), where the main access channel approaching the Port of Monfalcone is located. This embayment represents a suitable environment for the accumulation of suspended particles enriched in Hg.
Moreover, there is growing interest on the part of national and local authorities regarding the potential impact of Hg in the water column related to the resuspension of sediments due to future dredging operations needed to allow the navigation of ships approaching the port area.
In this context, the primary aim of this research is to evaluate the occurrence of Hg in the surface sediments as well as its partitioning behaviour between solid and dissolved phases along the water column under both unperturbed and perturbed environmental conditions, the latter induced by both natural and anthropogenic factors. This study provides a snapshot of the present situation and a baseline for Hg in the water column, useful for future evaluations of the impact of Hg in this coastal environment.
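The text itself does not give a formula for this partitioning; two standard descriptors, shown in the Python sketch below with purely illustrative input values, are the particulate fraction of total water-column Hg and the solid/dissolved distribution coefficient (log Kd), the latter requiring an SPM concentration that is not reported here.

```python
import math

def particulate_fraction(phg_ng_l, dhg_ng_l):
    """Fraction of water-column Hg carried by suspended particles."""
    return phg_ng_l / (phg_ng_l + dhg_ng_l)

def log_kd(phg_ng_l, dhg_ng_l, spm_mg_l):
    """log10 of the solid/dissolved distribution coefficient Kd (L/kg):
    (Hg per kg of suspended particles) / (dissolved Hg per L of water)."""
    hg_per_kg_particles = phg_ng_l / (spm_mg_l * 1e-6)  # mg/L -> kg/L
    return math.log10(hg_per_kg_particles / dhg_ng_l)

# Illustrative values only (PHg and DHg in ng/L, SPM in mg/L)
print(round(particulate_fraction(12.5, 2.0), 2))  # 0.86
print(round(log_kd(12.5, 2.0, 10.0), 1))          # ~5.8
```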
Environmental Setting
The Gulf of Trieste is a semi-closed shallow-water basin located in the northern Adriatic Sea with a maximum water depth of 25 m in its central sector. Water salinity in the Gulf typically ranges between 25 and 38 PSU (using the Practical Salinity Scale), whereas the seawater temperature ranges between 5 and 26 • C following the seasons [33]. The water circulation in the Gulf is mainly dominated by the anticlockwise circulation pattern of the Adriatic Sea and is controlled by tides, seasonal variations in the freshwater inflow, and winds (Bora E-NE, Libeccio SW and Scirocco SE) [34], which significantly influence the vertical water circulation [35].
In the Gulf of Trieste, the Isonzo/Soča River is the main input of both freshwater (average discharge of 83 m 3 ·s −1 , [33]) and SPM, whose distribution is regulated by the interaction between meteo-marine and riverine hydrological conditions.
The Isonzo River is known as the primary source of Hg into the Gulf of Trieste and the element at the river mouth was found to be almost completely partitioned in the SPM [36]. In this context, the river freshwater inputs play a crucial role in the occurrence and distribution of PHg both under periods of medium-high river discharge [22] and during extreme river plume events when the influx of PHg into the Gulf ranged between 37.0 and 112 ng·L −1 [23].
Regarding the surface sediment of the Gulf of Trieste, the highest concentrations of Hg were found at the Isonzo River mouth (23.3 µg·g −1 ) due to the prevalence of cinnabar particles in the coarser sandy-silty fraction of the sediment [19,37]. Although the amount of Hg in the sediment decreases with increasing distance from the river mouth [19], extensive Hg contamination may also be present in the nearshore areas of the northernmost sector of the Gulf of Trieste (Bay of Panzano), a shallow and sheltered embayment promoting the accumulation of fine sediments and contaminants ( Figure 1). The area is affected by several anthropogenic activities including agricultural and industrial settlements in the hinterland and tourist and mussel farming areas along the coast. An additional source of potential contamination is represented by the city of Monfalcone, which is home to a thermoelectric plant, several coal, petroleum and other cargo handling equipment and an extended port area which can be reached through a main channel located between the Isonzo River mouth and a mussel farm ( Figure 1). Industrial activity in the port area was thought to be a potential source of organic contaminants (PAH and PCB) in the sediments of the Bay of Panzano [38] and residues from antifouling paints used on boats have been identified as a source of PTEs (e.g., Cu and Zn) [38,39].
Sampling Strategy
Sampling operations for the collection of sediment and water samples were performed at six sites (P1-P6) located in the vicinity of the main access channel to the Port of Monfalcone ( Figure 1). With the exception of site P6 (located in the offshore marine area of the Bay of Panzano), all the sampling stations are representative of different targets: mussel farming (P1, P2 and P3), marine phanerogam meadows (P4) and tourist attractions along the beach (P5). Moreover, site P4 is located in the marine coastal sector adjacent to a confined disposal site for the storage of dredged sediments (Figure 1). These targets could be affected by both the Isonzo River plume events and the resuspension of fine Hg enriched particles induced by natural and anthropogenic factors (e.g., dredging).
Daily average discharge from the Isonzo River at the time of sampling was recorded from the gauging station of Pieris (Gorizia) located approximately 15 km upstream from the river mouth (Table 1). Vertical profiles of salinity (PSU), temperature ( • C) and turbidity (NTU) were recorded by means of a CTD multiprobe (Hydrolab H20 Multiprobe, OTT HydroMet, Loveland, CO, USA) with a 0.10 dbar pressure step and a sampling rate of 1 s) before sampling. Two water samples were collected using a Niskin bottle (Hydro-Bios Apparatebau GmbH, Altenholz, Germany) from the surface (0-0.5 m depth) and bottom (0.50 m from the bottom sediment) water layers, respectively. Sampling operations were performed during five sampling campaigns carried out under different environmental conditions including (i) unperturbed conditions characterised by low river flow, the absence of wind and good weather (sampling campaigns 1 and 4); (ii) perturbed conditions induced by natural factors such as periods of moderate-high river discharge (sampling campaigns 2 and 5) and conditions of windy sea (sampling campaign 3); (iii) perturbed conditions induced by anthropogenic activities (the movement of ships, sampling campaign 6) ( Table 1). Although sampling campaigns 2 and 5 were both performed following a period of moderate-high river discharge, it should be pointed out that the river discharge was notably low during the sampling campaign 5 (87.3 m 3 s −1 ) compared to sampling campaign 2 (328 m 3 s −1 ), which was performed following a period of particularly heavy river flow ( Figure 1; Table 1). Water samples for the analytical determination of DHg were filtered (Millipore Millex HA, 0.45 µm pore size, Millipore, Burlington, MA, USA) in the field, collected into preconditioned borosilicate glass containers and immediately oxidised by adding bromine chloride (BrCl, Hg-free from Brooks Rand Instruments, Seattle, WA, USA, 0.5 % v/v, until the sample turned the colour yellow) according to the EPA Method 1631e [40]. Additional 2 L water samples were taken to the laboratory where the SPM was separated from the dissolved fraction by vacuum filtration.
During sampling campaign 1, surface sediments were also collected at each site (P1-P6) using a stainless steel Van Veen grab (1.7 L, Hydro-Bios Apparatebau GmbH, Altenholz, Germany)). Three distinct aliquots of sediment were collected and a stainless steel spoon was employed to rapidly scrape off the first 2 cm of the sediment surface which was then homogenised in situ to get a composite sample, stored in appropriate containers and transported to the laboratory.
In addition, three multiprobes were placed at sites P2 (approximately 2 m depth and at the bottom, Aanderaa RCM9, Aanderaa Data Instruments AS, Bergen, Norway) and P3 (approximately 2 m depth, Hydrolab DS5 OTT HydroMet, Loveland, CO, USA) in order to achieve in situ continuous measurements of temperature ( • C), salinity (PSU, Practical Salinity Unit) and turbidity (NTU, Nephelometric Turbidity Unit) along the water column.
Sampling campaign 6 was performed at different sites (A-E and P3) towards the main access channel to the Port of Monfalcone in order to compare unperturbed and perturbed conditions which occurred before and after a large draught ship (8 m) had entered and subsequently left the area ( Figure 1; Table 1). To achieve this objective, the area (site A) located between the mussel farm (site P3) and the navigation channel (site B) was selected as the most representative ( Figure 1). There (sites A, B and P3), as well as along the main channel to the port area (sites C-E), turbidity vertical profiles were recorded before and after a ship had entered and left the area.
In detail, the unperturbed condition was evaluated by means of turbidity vertical profiles recorded at sites P3 and B before the ship had entered the selected area. After the ship had passed by, turbidity profiles were recorded approximately every 10 min at site A in order to evaluate variations in the turbidity values along the water column over time. In addition, two water samples for the analytical determination of DHg and PHg were collected at site A, one at the bottom and one at approximately 7 m depth, where the maximum turbidity zone was observed. Subsequently, turbidity vertical profiles were also recorded following the ship at sites C, D and E, towards the main channel to the port area and once again at site P3 where additional water samples were collected at the bottom and at approximately 7 m depth, respectively.
Surface Sediments: Grain Size Analysis and Total Hg Content
For grain size analysis, 15-20 g of fresh sediment sample were processed using hydrogen peroxide (H 2 O 2 , 10%) for 24 h to eliminate most of the organic matter, and then wet-sieved using a 2 mm sieve. The resulting < 2 mm fraction was analysed by means of a laser granulometer (Malvern Mastersizer 2000, Malvern Panalytical Ltd., Malvern, UK).
A subsample of the sediment was frozen and freeze-dried (CoolSafe 55-4 SCANVAC, Scientific Laboratory Supplies Ltd., Nottingham, UK), homogenised and ground for Hg determination. Total Hg was determined by means of a Direct Mercury Analyser (DMA-80, Milestone, Sorisole, Italy) according to the EPA Method 7473 [41]. Three replicates were analysed for each sediment sample and the quality of the analysis was evaluated by means of certified reference material (PACS-3 Marine Sediment CRM, NRCC, Whitehorse, YT, Canada), obtaining acceptable recoveries ranging between 88 and 101%. The limit of detection (LOD) was approximately 0.005 ng and the precision of the analysis expressed as RSD% was <2%.
Analytical Determination of Particulate and Dissolved Hg
The SPM concentrations were determined by vacuum filtration on pre-conditioned and pre-weighed (Mettler, precision 0.00001 g) Millipore HA membrane filters (ø 47 mm, 0.45 µm pore size). Filters were dried at room temperature to avoid Hg 0 volatilisation due to heat sources and then stored in air-tight containers over silica gel for 4-5 days, thereby protecting them from humidity in the air. Filters were acid-digested in a closed microwave system (Multiwave PRO, Anton Paar GmbH, Graz, Austria) using aqua regia (suprapure HCl ≥ 37% VWR and HNO 3 ≥ 69% VWR, 3:1) following the modified EPA Method 3052 [42]. The obtained solutions were diluted up to a volume of 25 mL by adding Milli-Q water and appropriately stored before analysis.
The analytical determination of Hg in the dissolved (DHg) and in the SPM (PHg) fractions was performed by means of Cold Vapor Atomic Fluorescence Spectrometry coupled with a gold trap preconcentration system (CV-AFS Mercur, Analytic Jena GmbH, Jena, Germany). Water samples were analysed following the EPA Method 1631e [40] which requires a pre-reduction using NH 2 OH-HCl (250 µL/100 mL sample) until the yellow colour disappeared, followed by a reduction with SnCl 2 (Sigma-Aldrich 2% in HCl 2%). The instrument was calibrated using standard solutions obtained via dilution from NIST 3133 certified solution and acidified with BrCl (0.5%, v/v). Certified reference material (ORMS-5 CRM, Brantford, ON, Canada) was analysed in the same batch as the water samples for quality control and an acceptable recovery was obtained (105%). The limit of detection was 0.60 ng L −1 and the precision of the analysis expressed as RSD% was < 3%.
Exploratory Multivariate Data Analysis
Principal component analysis (PCA) was used as an unsupervised exploratory chemometric tool to evaluate the relationships within samples (PC scores and score plot), within variables (PC loadings and loading plot) and between samples and variables (biplot) [43]. In detail, PCA was performed on physico-chemical parameters (salinity, temperature, SPM concentration and river discharge), PHg and DHg observed at the six investigated sites (P1-P6) under different environmental conditions (sampling campaigns 1-5). Column autoscaling was applied to data matrices to minimise systematic differences between variables [44] and multivariate data processing was performed using the CAT (Chemometric Agile Tool) package, based on the R platform (The R Foundation for Statistical Computing, Vienna, Austria) and freely distributed by Gruppo Italiano di Chemiometria (Italy) [45].
Physico-Chemical Parameters of the Water Column
Riverine inputs of suspended particles play a major role in the transport of Hg and other PTEs in estuarine and marine-coastal environments [46][47][48]. In these ecosystems, the composition of the SPM may be affected by several factors including hydrodynamic conditions, interactions between freshwater and saltwater, adsorption/desorption processes, sedimentation and resuspension of bottom sediments [46]. In this context, the physicochemical boundary conditions along the water column (e.g., temperature, salinity, turbidity, pH, redox potential, dissolved oxygen) may affect Hg partitioning behaviour between solid and dissolved phases as well as its speciation, mobility and bioavailability [47].
A summary of the basic physico-chemical parameters (salinity, temperature and turbidity) measured along the water column at the six investigated sites (P1-P6) under different environmental conditions (sampling campaigns 1-5) is reported in Figure S1. Two distinct water masses were observed under unperturbed conditions (sampling campaigns 1 and 4) as a result of the interaction between river freshwater and seawater. Although slightly higher salinity values were recorded in the surface water in April (sampling campaign 4) at sites P2, P3 and P4 (31-33 PSU), brackish salinity values were generally observed at the other sites (22-28 PSU) increasing with depth and reaching typical marine salinity values at the bottom (36-37 PSU).
The river freshwater input was especially evident in March during sampling campaign 3 at site P6 and sampling campaign 2, which was performed following a period of intense discharge from the Isonzo River. Indeed, brackish water down to a depth of 1 m (ranging overall between 14 at site P1 and 26 at site P6) along with a sharp deeper halocline was observed at all the investigated sites ( Figure S1).
Conversely, brackish water (18 PSU) was observed only at sites P1 and P4 in May during sampling campaign 5, most likely due to a generally lower river discharge (87.3 m 3 s −1 at the time of sampling) if compared to that seen in March (328 m 3 s −1 , sampling campaign 2) (Table 1; Figure 1).
Temperature showed slight variations along the water column and among different sampling campaigns ( Figure S1). The lowest values were recorded in the surface water in February and March (10.9 ± 0.9 and 10.6 ± 0.5 • C during sampling campaigns 1 and 2, respectively) and comparable values were measured at the bottom (9.43 ± 0.23 and 10.2 ± 0.1 • C during sampling campaigns 1 and 2, respectively). Conversely, higher values of temperature were observed in April and May, both in the surface water (15.8 ± 0.5 and 15.8 ± 0.4 • C during sampling campaigns 4 and 5, respectively) and at the bottom (15.0 ± 0.5 and 14.6 ± 0.3 • C during sampling campaigns 4 and 5, respectively).
Turbidity showed relatively low values in February (sampling campaign 1) ranging between 1.10 and 21.0 NTU in the surface water (at sites P1 and P6, respectively) and generally decreased with increasing depth reaching values <10 NTU most likely due to mixing and dilution processes between different water masses. Surprisingly, relatively low values of turbidity were also observed during the sampling campaigns performed following periods of high and moderate discharge from the Isonzo River ( Figure S1) with the only exception being site P2 in May (38.9 and 20.0 NTU in the surface and bottom water, respectively). The maximum turbidity values were observed in April (sampling campaign 4) in the surface water at sites P1 (67.1 NTU) and P2 (58.2 NTU), decreasing with increasing depth at each sampling site ( Figure S1). In this case, the relatively elevated turbidity values may be related to enhanced biological activity during late spring [49], in particular at the mussel farm. The only exception was the vertical profile recorded at sites P3 and P5 where almost constant values of turbidity were observed along the water column (approximately 15 and 25 NTU).
Moreover, turbidity vertical profiles recorded before and after a large draught ship had passed by (sampling campaign 6, Figures 1 and 2) showed that before the ship had approached, turbidity was found to be extremely low (<5 NTU) at sites P3 and B, testifying to unperturbed conditions. A clear increment of the turbidity values was evident immediately after the ship had passed site A and the maximum values (20)(21)(22)(23)(24)(25) were recorded at approximately 7 m depth about 30 min after the ship had sailed out of the area ( Figure 2). However, the perturbation induced by the movement of the ship did not reach particularly high values of turbidity and lasted only a brief period of time. Indeed, unperturbed conditions were restored in less than two hours as highlighted by the vertical profile recorded at site P3 (<5 NTU) ( Figure 2). In this context, the characteristics of both the ship (e.g., draught, speed) and the location (e.g., water depth, distance to shore, sediment grain size) may represent the two main factors governing the amount of the resuspended material [50].
Additional information was provided by the measurements of the Isonzo River discharge as well as the continuous measurements of salinity, temperature and turbidity recorded at the beginning of March at sites P2 (surface and bottom) and P3 (surface) ( Figure S2). The effects induced by the high river discharge at the beginning of March (471 and 391 m 3 s −1 ) were clearly evident at both sites P2 and P3, where a decrease in the salinity values corresponded to a decrease in temperature in the surface water, most likely due to notable freshwater input. Regarding turbidity, relatively low values were observed in the surface water reaching maximum values of 9.60 (at site P2) and 15.6 NTU (at site P3) which appeared to persist for a brief period of time. Indeed, unperturbed conditions were rapidly restored according to the results obtained from the comparison between unperturbed and perturbed conditions before the ship had approached and after it had sailed out of the area (Figure 2 and Figure S2).
Conversely, a notable increase in the turbidity values was observed at the bottom at site P2 (maximum value of 112 NTU). However, this perturbation lasted approximately 24 h, suggesting that it may have been related to technical operations at the mussel farm such as the lowering of a boat's anchor.
Surface Sediments: Grain-Size and Hg Content
The surface sediments were found to be heterogeneous in terms of grain-size composition, although those collected at the mussel farm (P1, P2 and P3) showed a very similar grain-size spectra and composition ( Figure 3). According to the classification proposed by Shepard [51], the surface sediments consisted predominantly of silt (23.3-82.8%), followed by sand (5.01-73.8%) and clay (2.87-14.5 %). The silty fraction clearly prevailed in the sediment collected at the mussel farm (sites P1, P2 and P3), followed by the offshore marine sector (site P6) and, to a lesser extent, site P4. Conversely, the surface sediment collected at site P5 showed the highest content of sand (73.8%) most likely due to its location close to the coast and the relatively shallow waters (3-4 m) and high wave energy which favour the settling of coarser particles in suspension ( Figure 3). Additional information was provided by the measurements of the Isonzo River discharge as well as the continuous measurements of salinity, temperature and turbidity recorded at the beginning of March at sites P2 (surface and bottom) and P3 (surface) ( Figure S2). The effects induced by the high river discharge at the beginning of March (471 and 391 m 3 s −1 ) were clearly evident at both sites P2 and P3, where a decrease in the salinity values corresponded to a decrease in temperature in the surface water, most likely due to notable freshwater input. Regarding turbidity, relatively low values were observed in the surface water reaching maximum values of 9.60 (at site P2) and 15.6 NTU (at site P3) which appeared to persist for a brief period of time. Indeed, unperturbed conditions were rapidly restored according to the results obtained from the comparison between unperturbed and perturbed conditions before the ship had approached and after it had sailed out of the area (Figures 2 and S2).
Conversely, a notable increase in the turbidity values was observed at the bottom at site P2 (maximum value of 112 NTU). However, this perturbation lasted approximately 24 h, suggesting that it may have been related to technical operations at the mussel farm such as the lowering of a boat's anchor. The Hg concentration in the investigated surface sediments varied between 0.77 (site P1) and 6.39 (site P6) µg g −1 and the grain-size composition was consistent with previous research focused on the main channel approaching the Port of Monfalcone [30] ( Table 2) showing that the surface sediments were dominated by silt, and Hg ranged between 0.30 and 13.5 µg g −1 , decreasing from the offshore area to the innermost sector of the access channel to the port area [30].
proposed by Shepard [51], the surface sediments consisted predominantly of silt (23.3-82.8%), followed by sand (5.01-73.8%) and clay (2.87-14.5 %). The silty fraction clearly prevailed in the sediment collected at the mussel farm (sites P1, P2 and P3), followed by the offshore marine sector (site P6) and, to a lesser extent, site P4. Conversely, the surface sediment collected at site P5 showed the highest content of sand (73.8%) most likely due to its location close to the coast and the relatively shallow waters (3-4 m) and high wave energy which favour the settling of coarser particles in suspension (Figure 3). The Hg concentration in the investigated surface sediments varied between 0.77 (site P1) and 6.39 (site P6) µg g −1 and the grain-size composition was consistent with previous research focused on the main channel approaching the Port of Monfalcone [30] (Table 2) showing that the surface sediments were dominated by silt, and Hg ranged between 0.30 and 13.5 µg g −1 , decreasing from the offshore area to the innermost sector of the access channel to the port area [30]. Table 2. Ranges of Hg concentration in surface sediments from this study compared to local areas of the Gulf of Trieste and other similar environments in the world as reported in the literature.
The concentration of Hg in the surface sediments investigated in this study (0.77-6.39 µg g −1 , Figure 3) exceeded the Italian regulatory threshold limit of 0.30 µg g −1 (Decrees of the Italian Ministry of the Environment 260/2010 and 172/2015 according to EU Directive 2000/60/EC). Although the results from this study testified to a total Hg concentration in the surface sediments which remains of concern, speciation analyses performed on sediments collected along the main access channel to the Port of Monfalcone recently demonstrated that the element appeared to be strongly associated with the less mobile chemical fractions [30]. This suggested that most of the Hg in the investigated sediments was not available for MeHg production unless under conditions of anoxia [32]. Indeed, the methylation rate does not only depend on the total amount of Hg [17,19,30] since several factors (e.g., temperature, pH, Eh, dissolved oxygen) may also have a role in MeHg production [47,64].
Mercury values of the same order of magnitude were also reported for the surface sediments of the Bay of Panzano (1.40-5.54 µg g −1 , [52]) as well as for the northern Adriatic Sea [21] (Table 2). Conversely, notably lower values were found both in the central and southern sector of the Adriatic Sea [21] as well as the northern Tyrrhenian Sea [58] and at other marine coastal areas and estuarine environments worldwide ( Table 2).
The amount of Hg in the investigated surface sediments was comparable to that observed in the offshore sector of the Gulf (ranging between 0.10 and 11.7 µg g −1 , [19]) and significantly lower with respect to the Isonzo River mouth where the highest concentrations of Hg were observed in previous research (ranging between 4.45 and 23.3 µg g −1 , [19]) (Figure 3), and primarily related to the occurrence of the detrital form of Hg (cinnabar particles) [37].
According to the linear function displaying the relationship between the concentration of Hg and the percentage of the 2-16 µm grain size fraction proposed by previous research, two groups of samples were identified [19] (Figure 3). The first included sediments collected at the Isonzo River mouth, whereas the second referred to sediments from the whole Gulf. The surface sediments investigated in this study belonged to the second group, confirming their common origin with respect to the offshore sediments of the Gulf of Trieste (Figure 3).
Suspended Particulate Matter: Distribution and Hg Concentration
No notable differences in the SPM were observed at the six investigated sites and the highest values were observed during sampling campaign 2 which was performed following a period of generally high discharge from the Isonzo River (Table S1). The surface-bottom SPM ratios were generally low and <1 both under unperturbed (0.87 ± 0.07, 0.63 ± 0.17 and 0.72 ± 0.17 during sampling campaigns 1, 3 and 4, respectively) and perturbed environmental conditions (sampling campaign 5, 0.78 ± 0.10). Conversely, high surface-bottom SPM ratios were observed following a period of high river discharge at the beginning of March (sampling campaign 2, 1.94 ± 0.94 with maximum values of 3.59 and 2.45 at sites P1 and P2, respectively) as a result of high freshwater and SPM inputs from the Isonzo River. This confirms that the SPM distribution in the investigated area depends heavily on the river discharge, as also suggested by the PCA output ( Figure 4) and the significant correlation (N = 30, r = 0.734, p < 0.01; on average N = 5, r = 0.995, p < 0.01) observed between the average SPM concentration in the surface water at the six investigated sites and the Isonzo River discharge during the 5 sampling campaigns ( Figure 5A).
The highest concentrations of PHg were observed under perturbed conditions during sampling campaign 2, both in the surface water (8.37 ± 2.11 ng L −1 ) and at the bottom (6.26 ± 1.62 ng L −1 ), especially at site P3 (12.5 and 8.64 ng L −1 in the surface water and at the bottom, respectively) (Figures 4 and 6). Moreover, a moderate correlation was observed between PHg and the Isonzo River discharge ( Figure 5B; N = 30, r = 0.644, p < 0.01; on average N = 5, r = 0.761, p < 0.5) as well as between PHg and the SPM concentration ( Figure 5C; N = 30, r = 0.634, p < 0.01; on average N = 5, r = 0.927, p < 0.1) confirming the role of the Isonzo River as the primary source of Hg which enters the Gulf, mainly in the form of SPM, as highlighted by previous research [22,36,65]. Indeed, it has been demonstrated that the dispersion of Hg from the Isonzo River mouth depends heavily on the interaction between riverine and meteo-marine hydrological conditions and occurred following four principal directions, including in the direction of the Port of Monfalcone [19]. Consequently, suspended particles enriched in Hg were trapped in the Bay of Panzano, especially when winds such as the Scirocco and Libeccio are dominant.
following a period of generally high discharge from the Isonzo River (Table S1). The surface-bottom SPM ratios were generally low and <1 both under unperturbed (0.87 ± 0.07, 0.63 ± 0.17 and 0.72 ± 0.17 during sampling campaigns 1, 3 and 4, respectively) and perturbed environmental conditions (sampling campaign 5, 0.78 ± 0.10). Conversely, high surface-bottom SPM ratios were observed following a period of high river discharge at the beginning of March (sampling campaign 2, 1.94 ± 0.94 with maximum values of 3.59 and 2.45 at sites P1 and P2, respectively) as a result of high freshwater and SPM inputs from the Isonzo River. This confirms that the SPM distribution in the investigated area depends heavily on the river discharge, as also suggested by the PCA output ( Figure 4) and the significant correlation (N = 30, r = 0.734, p < 0.01; on average N = 5, r = 0.995, p < 0.01) observed between the average SPM concentration in the surface water at the six investigated sites and the Isonzo River discharge during the 5 sampling campaigns ( Figure 5A). The highest concentrations of PHg were observed under perturbed conditions during sampling campaign 2, both in the surface water (8.37 ± 2.11 ng L −1 ) and at the bottom (6.26 ± 1.62 ng L −1 ), especially at site P3 (12.5 and 8.64 ng L −1 in the surface water and at the bottom, respectively) (Figures 4 and 6). Moreover, a moderate correlation was observed between PHg and the Isonzo River discharge ( Figure 5B; N = 30, r = 0.644, p < 0.01; on average N = 5, r = 0.761, p < 0.5) as well as between PHg and the SPM concentration ( Figure 5C; N = 30, r = 0.634, p < 0.01; on average N = 5, r = 0.927, p < 0.1) confirming the role of the Most likely due to dilution effects between riverine freshwater and saltwater, the PHg concentrations at the six investigated sites were generally found to be notably low with respect to those observed at the Isonzo River mouth by previous research [22,36], in front of the river mouth within a buoyant river plume [23] as well as along the Aussa River, flowing in the adjacent Marano Lagoon, which was affected by the discharge of Hg from a chlor-alkali plant [17,55,68] (Table 3). Conversely, PHg values were found to be of the same order of magnitude with respect to the offshore marine area of the Gulf during a river plume event [23]. Table 3. Ranges of Hg concentration in the dissolved fraction (DHg, ng L −1 ) and in the SPM (PHg, µg g −1 and ng L −1 ) from this study compared to local areas of the Gulf of Trieste and other similar environments as reported in the literature.
Location
Water Layer DHg (ng L −1 ) PHg (ng L −1 ) PHg (µg g −1 ) Bay In this study, the maximum river discharge (925 m 3 s −1 ) was reached approximately three weeks before sampling, remaining relatively elevated (131-471 m 3 s −1 ) for several days after sampling campaign 2. Accordingly, sampling campaign 5 was performed when the river discharge was moderate (87.3 m 3 s −1 ) and slightly lower values of PHg were observed (1.11 ± 0.66 and 2.42 ± 1.15 ng L −1 in the surface and bottom water layers, respectively) ( Figure 6). Moreover, the amount of PHg observed during sampling campaign 5 was found to be comparable to that of the sampling campaigns performed both under conditions of windy sea (sampling campaign 3) and under unperturbed conditions (sampling campaigns 1 and 4) (Figures 4 and 6). This suggests that relatively elevated values of PHg in the investigated area may be restricted to brief periods of particularly intense discharge from the Isonzo River, as evidenced by the PCA output ( Figure 4) and the correlation between PHg and the river discharge ( Figure 5B). Indeed, river flooding is responsible for large inputs of freshwater, SPM and particulate-associated contaminants [66,67]. In this context, it has been demonstrated that notable concentrations of PHg were discharged into the Gulf of Trieste during extreme Isonzo River flood events (maximum value of 49 µg g −1 following a river discharge of 1600 m 3 s −1 , [65]).
Most likely due to dilution effects between riverine freshwater and saltwater, the PHg concentrations at the six investigated sites were generally found to be notably low with respect to those observed at the Isonzo River mouth by previous research [22,36], in front of the river mouth within a buoyant river plume [23] as well as along the Aussa River, flowing in the adjacent Marano Lagoon, which was affected by the discharge of Hg from a chlor-alkali plant [17,55,68] (Table 3). Conversely, PHg values were found to be of the same order of magnitude with respect to the offshore marine area of the Gulf during a river plume event [23]. Table 3. Ranges of Hg concentration in the dissolved fraction (DHg, ng L −1 ) and in the SPM (PHg, µg g −1 and ng L −1 ) from this study compared to local areas of the Gulf of Trieste and other similar environments as reported in the literature.
Location
Water Layer DHg (ng L −1 ) PHg (ng L −1 ) PHg (µg g −1 ) Bay As previously mentioned, a ship with a large draught (8 m) moving through the main access channel to the port area may temporarily affect the turbidity vertical distribution along the water column, reaching the maximum values at approximately 7 m depth at site A ( Figure 2). The water sample collected at the maximum turbidity zone showed a PHg concentration of 14.0 ng L −1 , which was two orders of magnitude higher than the PHg at the bottom (0.55 ng L −1 ) ( Figure 6). Approximately 2 h after the ship had sailed out of the area, the PHg concentration at the same depth (approximately 7 m) at site P3 was notably lower (2.01 ng L −1 ), confirming that unperturbed conditions were restored after a brief period of time. However, a higher PHg concentration was observed at the bottom (11.8 ng L −1 ), most likely due to the settling of fine Hg enriched particles ( Figure 6).
Dissolved Hg
The occurrence of DHg at the six sampling sites was investigated during unperturbed and perturbed environmental conditions both in the surface water and at the bottom ( Figure 7; Table S1). The highest concentrations of DHg were detected in winter both under unperturbed (sampling campaign 1, 25.9 ± 10.2 and 35.9 ± 37.4 ng L −1 in the surface water and at the bottom, respectively; Figure 4) and perturbed conditions (sampling campaign 2, 16.5 ± 16.4 and 40.1 ± 59.5 ng L −1 in the surface water and at the bottom, respectively), reaching the maximum concentration in the bottom saltwater at sites P5 (sampling campaign 1, 112 ng L −1 ) and P2 (sampling campaign 2, 149 ng L −1 ) (Figures 4 and 7). Dissolved Hg concentrations were of the same order of magnitude with respect to previous research focused on the Isonzo River mouth [22] and notably higher than those reported for the Gulf of Trieste [69] (Table 3). Generally, DHg at the investigated sites was higher compared to other aquatic systems along the Portuguese coast [72], the Tagus estuary [71], Tinto and Odiel estuaries [61] as well as the Gulf of Cádiz in Spain [61], especially in the bottom saltwater ( Table 3).
As in the case of PHg, DHg reached a concentration of 19.0 ng L −1 at approximately 7 m depth where the maximum turbidity zone was observed at site A after a ship had left the area during sampling campaign 6, decreasing at the bottom (9.40 ng L −1 ). Notably lower values were found both at 7 m depth (7.11 ng L −1 ) and at the bottom (2.35 ng L −1 ) at site P3 approximately 2 h after a ship had passed by (Figure 7).
Mercury Partitioning between the Suspended Particulate Matter and the Dissolved Fraction: Distribution Coefficients (KD)
In aquatic systems, the partitioning behaviour of trace elements is mainly governed by adsorption/precipitation and desorption/dissolution processes. Indeed, trace elements can be preferentially associated with suspended particles (solid phase) and the dissolved fraction [77][78][79]. In this context, distribution coefficients (KD, L kg −1 ) are commonly employed to investigate trace element partitioning behaviour, although information regarding the element chemical form is not provided by this index.
According to the Equation (1), Hg distribution coefficients (KD, L kg −1 ) were calculated as the ratio between Hg concentration in the SPM (PHg, µg g −1 ) and in the dissolved fraction (DHg, ng L −1 ) and expressed on a logarithmic scale [77] (Table 4): Conversely, DHg was mainly <LOD during the sampling campaign performed at the end of March (sampling campaign 3) (Figure 7), most likely due to the intense windy conditions during sampling operations. Indeed, turbulence induced by wind [74] and the subsequent mixing of the water column [75] may promote the release of gaseous elemental Hg from the surface water to the atmosphere, although the highest Hg evasion was found to occur in summer [48,76]. Moreover, sampling campaign 3 was performed following a period of low discharge from the Isonzo River (<100 m 3 s −1 ) and relatively low PHg concentrations were observed (1.42 ± 0.97 and 2.81 ± 2.51 ng L −1 in the surface and bottom water layers, respectively). This suggests that low amounts of PHg were available to desorption and or dissolution processes with subsequent limiting of Hg release from the suspended particles to the dissolved fraction.
Dissolved Hg concentrations were of the same order of magnitude with respect to previous research focused on the Isonzo River mouth [22] and notably higher than those reported for the Gulf of Trieste [69] (Table 3). Generally, DHg at the investigated sites was higher compared to other aquatic systems along the Portuguese coast [72], the Tagus estuary [71], Tinto and Odiel estuaries [61] as well as the Gulf of Cádiz in Spain [61], especially in the bottom saltwater ( Table 3).
As in the case of PHg, DHg reached a concentration of 19.0 ng L −1 at approximately 7 m depth where the maximum turbidity zone was observed at site A after a ship had left the area during sampling campaign 6, decreasing at the bottom (9.40 ng L −1 ). Notably lower values were found both at 7 m depth (7.11 ng L −1 ) and at the bottom (2.35 ng L −1 ) at site P3 approximately 2 h after a ship had passed by (Figure 7).
Mercury Partitioning between the Suspended Particulate Matter and the Dissolved Fraction: Distribution Coefficients (K D )
In aquatic systems, the partitioning behaviour of trace elements is mainly governed by adsorption/precipitation and desorption/dissolution processes. Indeed, trace elements can be preferentially associated with suspended particles (solid phase) and the dissolved fraction [77][78][79]. In this context, distribution coefficients (K D , L kg −1 ) are commonly employed to investigate trace element partitioning behaviour, although information regarding the element chemical form is not provided by this index.
According to the Equation (1), Hg distribution coefficients (K D , L kg −1 ) were calculated as the ratio between Hg concentration in the SPM (PHg, µg g −1 ) and in the dissolved fraction (DHg, ng L −1 ) and expressed on a logarithmic scale [77] (Table 4): (1) At the six investigated sites and under different environmental conditions, logK D values ranged overall between 3.92 and 6.42 and between 3.69 and 6.68 in the surface and bottom water layers, respectively ( Table 4).
The logK D values were relatively high, thus testifying to the preferential partitioning of Hg in the suspended particles as also observed at the Isonzo River mouth [36] and other similar environments such as the New York/New Jersey Harbor, where Hg was found to be mainly associated with the SPM (98-99%) [7]. In addition, the distribution of DHg in the surface water did not appear to be simply governed by salinity, since Hg is generally high particle reactive and easily involved in removal processes through adsorption and/or precipitation [7,11,48,61]. A significant correlation was observed between logK D and DHg (N = 60, r = 0.897, p < 0.01), especially following periods of high freshwater discharge from the Isonzo River (sampling campaigns 2 and 5) (Figure 8), although the logK D values did not notably vary among different sampling conditions. At the six investigated sites and under different environmental conditions, logKD values ranged overall between 3.92 and 6.42 and between 3.69 and 6.68 in the surface and bottom water layers, respectively ( Table 4).
The logKD values were relatively high, thus testifying to the preferential partitioning of Hg in the suspended particles as also observed at the Isonzo River mouth [36] and other similar environments such as the New York/New Jersey Harbor, where Hg was found to be mainly associated with the SPM (98-99%) [7]. In addition, the distribution of DHg in the surface water did not appear to be simply governed by salinity, since Hg is generally high particle reactive and easily involved in removal processes through adsorption and/or precipitation [7,11,48,61]. A significant correlation was observed between logKD and DHg (N = 60, r = 0.897, p < 0.01), especially following periods of high freshwater discharge from the Isonzo River (sampling campaigns 2 and 5) (Figure 8), although the logKD values did not notably vary among different sampling conditions.
Conclusions
The occurrence of Hg in the coastal area of the Gulf of Trieste still remains an issue of environmental concern, although extraction activities at the Idrija Hg mine (Slovenia) ceased in 1996.
Results from this research confirmed the role of the Isonzo River as the primary source of both dissolved and particulate Hg in the northernmost sector of the Gulf of Trieste (Bay of Panzano), especially following periods of high discharge of the river.
Conclusions
The occurrence of Hg in the coastal area of the Gulf of Trieste still remains an issue of environmental concern, although extraction activities at the Idrija Hg mine (Slovenia) ceased in 1996.
Results from this research confirmed the role of the Isonzo River as the primary source of both dissolved and particulate Hg in the northernmost sector of the Gulf of Trieste (Bay of Panzano), especially following periods of high discharge of the river. However, contrary to DHg, which showed both a notable spatial and temporal variability (<LOD-149 ng L −1 ), the amount of PHg (0.39-12.5 ng L −1 ) appeared to be strongly related to the river inputs of freshwater and SPM. Indeed, the highest amounts of PHg both in the surface water and at the bottom were found to be restricted to brief periods of intense river discharge. In agreement with previous investigations, Hg in the water column was still found to be mainly partitioned in the SPM, as also confirmed by the elevated logK D values (3.69-6.68), thus testifying to its behaviour as showing a high affinity for fine sediment particles transported in suspension.
At the investigated area in the Bay of Panzano, the relatively shallow water depth allowed PHg accumulation in the surface sediments, which showed remarkable Hg concentrations (0.77-6.39 µg g −1 ). However, the amount of Hg in the sediments was found to be notably low with respect to the littoral zone surrounding the Isonzo River mouth and of the same order of magnitude if compared to sediments from the offshore sector of the Gulf of Trieste.
Resuspension events caused by natural and anthropogenic factors certainly affect the mobility of Hg from the sediment compartment to the water column, but results from this research showed that they can be limited. Indeed, perturbed conditions along the water column due to the presence of a large draught ship approaching the port area lasted only a brief period of time (approximately 2 h). The observed increase for both PHg and DHg in the water column is temporary since unperturbed conditions were promptly restored. This evidence suggests that a similar scenario would also occur for dredging activities where the effect in terms of widespread Hg in the water column should be restricted both to the operation area and time period.
Results from this research also suggested that the magnitude of a natural event, such as the increase in wave motion or extreme Isonzo River flood events, would alter DHg and PHg concentrations in the water column more significantly than a local perturbation caused by anthropogenic activities. Moreover, considering the degree of contamination reported for the Isonzo River basin [80,81], it may be expected that the metal will continue to be transported from inland to the Gulf's waters for the foreseeable future.
Since the Isonzo River's discharge conditions were identified as a crucial factor in regulating the amount of Hg in the northernmost sector of Gulf of Trieste, future research should address the effects of variations in the Isonzo River discharge on the contribution of Hg associated with the SPM as well as the evaluation of Hg and SPM fluxes in the investigated area.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/ijerph181910192/s1, Figure S1: Vertical profiles of turbidity (NTU, grey dots), salinity (PSU, blue line) and temperature ( • C, red line) recorded at the six investigated sites in the vicinity of the main access channel to the Port of Monfalcone (Bay of Panzano, Gulf of Trieste), Figure S2: Isonzo River daily discharge (m 3 s −1 ) and variations of salinity (PSU), temperature ( • C) and turbidity (NTU) at site P2 (surface and bottom water layers) and site P3 (surface water layer) between the 1 and 10 March 2016, Table S1
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2021-10-14T05:21:29.450Z | 2021-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "01d1144bb69d81945ba62b321b2ecbb449616a2b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/19/10192/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01d1144bb69d81945ba62b321b2ecbb449616a2b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233895967 | pes2o/s2orc | v3-fos-license | Orthopedic Day-case Surgery in Nigeria: A Single-center Experience
Background: The concept of day-case surgeries is relevant in orthopedic specialty in developing countries, where orthopedic elective procedures have relatively longer duration of surgical waiting time, mainly due to lack of inpatient bed space. We aimed to determine the scope, safety and outcome of orthopedic day-case surgeries in a Nigerian setting, and identify potential areas for intervention to improve the practice. Methods: This was a 12-month prospective study of 71 eligible, consenting and consecutive patients who presented at the National Orthopedic Hospital Enugu and were carefully selected and prepared for orthopedic day-case surgeries. Results: Within the period of study, 53 of 540 elective orthopedic procedures were carried out as daycase, giving a day-case surgery rate of 9.8%. Of the patients enrolled, male to female ratio was 1.2:1 and age range was 8 months to 76 years. Eighteen (25.4%) patients had their day-case procedure cancelled on the day of surgery. The commonest procedure was removal of implant. Conversion rate was 32% mainly due to operation occurring late. Complication (mainly pain) rate was 30%, and correlated with duration of procedure (p<0.006). The satisfaction rate among patients was 98%; no re-admission or mortality was observed. Conclusion: In this study, orthopedic day-case procedures were safe, though there was low use of daycase surgery in scope, complexity and number of procedures. This and the high conversion rate observed call for a dedicated day-case unit and measures to facilitate timelines of the procedures.
Introduction
Day-case surgery is an important part of elective surgery globally. It accounts for over 50% and 60% of elective surgeries in the UK and USA respectively (1). Published reports show that day-case surgery shortens hospital waiting list, facilitates efficient use of resources, and provides high-quality, safe and cost-effective surgical care in selected and well-prepared cases (2,3,4,5). It is also acceptable to patients and health workers (6). The concept of day-case surgery could not be more apt in developing countries where demand for elective surgery usually outstrips inpatient facilities and long waiting list is often the norm (7). Day-case surgery is even more relevant in orthopedic specialty in developing countries. Published reports show that compared with other surgical specialties, orthopedic elective procedures have relatively longer duration of length of surgical waiting time mainly due to lack of inpatient bed space (8). Safety and patient satisfaction/acceptability are two critical issues in day surgery (6,9). Safety can be gauged with parameters such as the direct admission, readmission, and postoperative complication and mortality rates (9). The rates of these parameters and scope of orthopedic day-case vary from and within subregions (2,9). Detailed knowledge about the scope, safety and outcome of orthopedic day-case surgery in a setting can facilitate strategies and policy response towards improving the practice. However, data is limited on orthopedic day-case surgery in developing countries; the two previous reports were retrospective studies with associated inherent limitations (1,2). This underscores the importance of prospective data to evaluate orthopedic day-case surgery in our environment. Therefore, this study aimed to determine the scope, safety and outcome of orthopedic day-case surgery in a low-resource setting, and identify potential areas for intervention to improve the practice.
Study setting and design
This was a prospective descriptive study carried out among patients for orthopedic day-case surgery at the National Orthopaedic Hospital Enugu, Enugu State, Nigeria, from April 2016 to March 2017.
Ethical approval
The approval to carry out this study was obtained from the hospital's ethical committee (IRB/IIIC No. S/3131850, Protocol No. 132). A written informed consent was obtained from the patients and/or next of kin.
Study population
The study included patients of both sexes and all age groups who presented to the hospital within the stipulated study period for orthopedic day-case surgery; and satisfied the inclusion criteria.
Inclusion criteria
ASA 1 patients, ASA11 patients with controlled comorbidities, patients with hemoglobin of at least 10 mg/dL, and cases with expected duration of surgery less than 120 minutes.
Exclusion criteria
Patient living more than 30 km or 1-hour drive from the hospital and without relatives in town to stay with / nearby health care facility, ASA III patients, patient without responsible escort, poor domestic circumstances inappropriate for postoperative care, failure to meet inclusion criteria, patient not willing to be part of the study, and patients with uncontrolled comorbidities.
Sample size
A pilot survey of the hospital operation record book showed that in 2012 and 2013, an average of 647 elective orthopedic procedures were done (population size) and 15% of these procedures were day cases. Based on the population size and average percentage of daycase surgery in the pilot survey, a sample size of 68 was calculated from the formula: Sample size=n/[1+n/ population] where n=[Z2P(1-P)/D2] (10).
Procedure
Patients underwent clinical assessment and laboratory investigations, hemoglobin, urinalysis and others if needed, then selected for day-case surgery. A proforma was opened for all eligible patients. Data entered included: age, sex, highest educational level attained by patient/caregiver, domicile address, estimated distance from hospital, mobile phone number, ASA grade, mode of anesthesia, procedure performed, status of surgeon, status of anesthetist, duration of surgery, time of commencement and time of end of surgery, tourniquet time (if used), access to hospital and family doctor, type of discharge analgesics, complication(s) and its duration, satisfaction of day-case surgery among patient/guardian along with reasons for satisfaction or dissatisfaction. Additional data were entered into the proforma for those that were converted (direct admission) and reasons for conversion, cases that were re-admitted and reasons for readmission, and for cases cancelled along with the reasons for cancellation.
ORTHOPEDIC DAY CASE SURGERY IN NIGERIA
Conversion (direct admission) rate, otherwise called unplanned overnight admission, refers to that proportion of patients initially planned for day-case procedures who were subsequently admitted immediately after operation for any reason (6,9). Re-admission rate refers to that proportion of day-cases that were operated and discharged home as planned but were admitted back within 30 days for complication developed back at home (6,9). The patients were given a detailed explanation of the objective of the study, and consent was obtained from each patient. Further explanation that patient should present in the morning of surgery with a responsible adult and would go back home after surgery was given. Patients were instructed to start fasting at midnight the night preceding surgery, using the preoperative guideline of 2 hours for clear fluid, 4 hours for breast milk and 6 hours for formula milk and solids. Patients were asked to report at theatre on the morning of surgery. At arrival in the morning and after fulfilling all administrative procedures, they were prepared for surgery by the nurse at the ward. Prophylactic antibiotic (Ceftriaxone 1 gm and metronidazole 500 mg) was administered intravenously to all patients at induction of anesthesia. Patient was observed in the theatre recovery room after surgery and then moved to the ward. Patient was discharged home accompanied, after being assessed by the first author in conjunction with a senior member of the operating unit, along with satisfying the following discharge criteria: alert and oriented in time and place, stable vital signs, pain controlled by oral analgesic, nausea or emesis controlled, able to walk without dizziness, regional anesthesia appropriately resolved, prescription given, patient accepts readiness for discharge, and a responsible adult present to accompany patient home. Postoperative pain as a complication was measured at the point of discharge from hospital using numeric rating scale (NRS) 0-10, where 0 represented no pain and 10 the severest pain intensity. Patients were followed up through phone calls at least once a day after discharge till next clinic appointment. The patient/caregiver was also given the contact number of the first author. Patients were followed up for at least four weeks and at each clinic visit they were clinically assessed by one of the authors together with the managing unit.
Data analysis
Data were analyzed using SPSS version 20 (SPSS Chicago IL, USA) for graphs, bar charts, pie charts and frequency tables, and for cross tabulation. Continuous and categorical variables were summarized using mean, frequency, standard deviation and percentages. Mean comparison of continuous variables was done using Student's t test while associations between categorical variables were done using chi square; a p-value <0.05 was considered significant.
Results
This study enrolled 71 consecutive eligible consenting patients for orthopedic day-case surgery. The male to female ratio was 1.2:1 and the age range 8 months to 76 years. The estimated distance from the patient's home to the hospital ranged from 5 to 30 km, with a mean of 16 km±7.536. Of the 71 patients, 18 (25.4%) had their day-case procedure cancelled on the day of surgery, and 53 patients underwent surgery as planned. In the study period, there were 540 elective procedures and 53 of these were done as day-case, giving a day-case surgery rate of 9.8%. The three top procedures performed as day-case were removal of implants (plates and screws), biopsy, and manipulation under anesthesia (Table 1). Most (84.6%) of the day-case surgeries were therapeutic procedures, 5 (9.6%) were diagnostic and 3 (5.8%) therapeutic/ diagnostics. All patients in this study were ASA grade 1 status. Spinal, general and local anesthesia was the mode of anesthesia given to 24 (45%), 22 (42%) and 7 (13%) of patients respectively. Most 47 (89%) patients were anaesthetized by nurse anesthetists; 6 (11%) patients by a consultant anesthetist. The senior registrar performed 42 (79%) procedures whereas a consultant orthopedic surgeon performed the remaining 11 (21%). Twelve (22.6%) patients had access to a family doctor to care for them at home, the rest did not have family doctors. Sixteen (30.2%) patients had private cars to access the hospital in an emergency, the rest of the patients (69.8%) depended on public transport to access the hospital. Seventeen patients were admitted as in-patients after surgery, giving a conversion rate of 32%. Social reasons and operation occurring too late (operation after normal working hours of 4 pm when priority and attention of theatre workforce is focused on emergency cases) was the reason in 16 (94.1%) patients that were converted to inpatient hospital admission; extensive surgical procedure was the reason in 1 (5.9%) of the converted cases. Sixteen (30%) patients who had day-case procedure had complication. These complications were surgical related: pain and hematoma. Of these 16 patients, 15 had pain and 1 had hematoma. Hematoma was observed in a patient following excision biopsy of a popliteal mass without a wound drain. None of the patients had wound infection. There was no association between the complication and the rank of anesthetist ( Table 2). The rate of complication was higher in procedures done by senior registrar than those done by the consultants but this difference was not statistically significant, p=0.330 ( Table 2). The complication rate correlated (p<0.006) with mean duration of surgery (Table 3). There was no case of re-admission into the ward for complications developed back home, and no mortality among patients. Patients were given different types of postop analgesics: 20 were given non-steroidal anti-inflammatory drugs (NSAID), 1 had opioids, 8 NSAID+opioids, 6 NSAID+paracetamol, and 18 opioids+paracetamol. There was no significant association (p=0.272) between the incidence of postoperative pain and the type of discharge analgesics. There was immediate postoperative pain in 93.8% of patients the first day postoperative; the incidence of pain reduced to 31.3% and afterward complete resolution of pain in all the patients. The three top reasons for the cancellation of day-case procedure during the period of this study were: failure to arrive to hospital, time constraint on the part of surgeon, and lack of theatre space (Table 4). 
Day-case surgery was highly recommended by 52 of the 53 patients, giving a satisfaction rate of 98%.
Discussion
The age distribution of eligible patients enrolled in this study was similar to that reported by Ajibade et al. in another orthopedic hospital setting in northern Nigeria (2). The wide age range in this and previous studies shows the availability of standard anesthesia facilities, with qualified and experienced anesthetists able to handle different age categories for day-case procedures in a low-resource setting such as ours (2,11). Most procedures in this study were carried out under spinal anesthesia (45%); this differs from the series reported by Ajibade et al. and Adewole et al., in which general and local anesthesia, respectively, were used for most procedures (2,12). In this study, spinal anesthesia was mostly used because implant removals, the commonest procedure observed, were carried out mostly on the lower limb. Spinal anesthesia is a preferred option because the residual analgesia from the block also reduces postoperative pain (4). Therapeutic procedures constituted 84.6% of all procedures, and removal of implants (59.6%) was the commonest specific procedure performed in this study, at variance with biopsy as the commonest procedure reported by Ajibade et al. (2). The reason for this variation is not evident. The scope of day-case procedures in this study was also narrower than that reported elsewhere (13,14). Procedures such as subacromial decompression with tendon transfer and tarsal coalition excision have been reported as day-case procedures (2,15). The limited development of minimally invasive orthopedic surgery, the unavailability of dedicated day-case units, and the lack of community services provided by community physicians and nurses in Nigeria perhaps explain the wide gap in the scope of procedures between this setting and other countries (2,16). The period of this study coincided with the peak of an economic recession in Nigeria, which is a plausible explanation for a day-case surgery rate below the average rate observed in the pilot survey. However, the day-case surgery rate in this study was higher than the 3.48% reported by Ajibade et al. (2). This level of utilization falls short of the rates reported in other surgical specialties in Nigeria, such as urology (61.6%) and plastic surgery (37.2%), and the reason is not evident (14,17). The conversion rate in this study was also higher than those in previous reports (2,18,19,11). The very high conversion rate in this study was due mainly to the social reason of operations occurring too late, whereas in these previous reports conversions were due to surgical and anesthetic reasons (18,19,11). In this study, only one patient had a surgical reason (an extensive operation) for conversion. If late operations were eliminated, the conversion rate of (1/53) 1.9% resulting from an extensive surgical procedure would fall within the range of 2-3% recommended by the Royal College of Surgeons (19). This implies the safety of the practice in our setting and calls for measures to reduce the incidence of late operations among patients for day-case procedures.
In this study, the mean estimated distance from the hospital was 16 km and only 30% of patients had private car access to the hospital, bearing in mind the relative lack of a good and efficient public transport system in our setting. This is an important factor in the high conversion rate: most patients could not get back home when operations were done late, considering the security situation in our setting, and were therefore admitted overnight. It is therefore important that patients are operated on at the beginning of the morning list and that every effort is made to ensure that day-cases are dealt with before mid-day to allow early, safe discharge. This also calls for a dedicated day-case unit so that day-case patients do not compete with in-patients for theatre space.
The complication rate in this study was higher than the 2.7% reported by Ajibade et al. and the rates in other series (2,18,11). There was no significant correlation between the complication rate and mode of anesthesia, rank of surgeon, rank of anesthetist, duration of tourniquet use, education level of the patient/caregiver, or type of discharge analgesic, similar to the findings reported by Cardosa et al. (18). There was a significant correlation between duration of surgery and complication rate in this study, but the reason is not evident. The resolution of pain in all patients after the first postoperative day shows the adequacy of our discharge analgesics and patient compliance with their intake, as well as the ability of the patient/caregiver to comply with instructions for preparation for surgery and postoperative care, as previously reported by Abdurrahman (21). That none of the patients had a postoperative wound infection also shows the adequacy of prophylactic antibiotics and patient/caregiver compliance with instructions. In this study, there was no readmission or mortality; a similar finding was reported by Adewole et al. (12). This shows proper patient selection and the safety of orthopedic day-case practice in our setting. The cancellation rate in this study was higher than the 15.6% and 11.06% reported by Ramyil et al. and Kolawole et al. respectively (5,22). In this study, the main reason for cancellation, failure to arrive at the hospital due to financial difficulty, was similar to the finding of Ramyil et al. but at variance with the surgeon-related factor of time constraint reported by Kolawole et al. (5,22). An organized health insurance scheme with wide coverage may help cover patients' financial bills. The surgeon-related factor in cancellation could be mitigated by making realistic theatre lists. A dedicated day-case unit, eliminating competition between day-cases and in-patients for theatre space and surgeons' time, could have prevented more than a third of the cancellations (Table 5). In this study, that over 90% of patients would recommend day-case surgery to other people is an indication of its acceptability in our environment.
Conclusion
In this study, orthopedic day-case procedures were safe, though utilization of day-case surgery was low in the scope, complexity and number of procedures. This, together with the high cancellation and conversion rates observed, calls for the provision of a dedicated day-case unit and measures to ensure the timeliness of procedures. | 2021-05-08T00:04:42.187Z | 2021-02-09T00:00:00.000 | {
"year": 2021,
"sha1": "bbc2ee51e96afe6a69dd105b656bd23473cacae4",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/aas/article/download/203756/192167",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e34658e1ca1111f297816b6d3cf796a9dfdac864",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268359487 | pes2o/s2orc | v3-fos-license | Clinical usefulness of linked color imaging in identifying Helicobacter pylori infection: A systematic review and meta-analysis
BACKGROUND Accurate diagnosis of Helicobacter pylori (H. pylori) infection status is a crucial premise for eradication therapy, as well as for evaluation of gastric cancer risk. Recent progress in imaging enhancement endoscopy (IEE) has made it possible not only to detect precancerous lesions and early gastrointestinal cancers but also to predict H. pylori infection in real time. As a novel IEE modality, linked color imaging (LCI) has shown its value in the diagnosis of gastric mucosal lesions by emphasizing minor differences in color tone. AIM To compare the efficacy of LCI for diagnosing active H. pylori infection with that of conventional white light imaging (WLI). METHODS PubMed, Embase, Web of Science and the Cochrane Library were searched up to April 11, 2022. The random-effects model was adopted to calculate the diagnostic efficacy of LCI and WLI. Sensitivity, specificity, and likelihood ratios were calculated; symmetric receiver operator characteristic (SROC) curves and the areas under the SROC curves were computed. Quality of the included studies was assessed using the quality assessment of diagnostic accuracy studies-2 tool. RESULTS Seven original studies were included in this study. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of LCI for the diagnosis of H. pylori infection of the gastric mucosa were 0.85 [95% confidence interval (CI): 0.76-0.92], 0.82 (95%CI: 0.78-0.85), 4.71 (95%CI: 3.7-5.9), and 0.18 (95%CI: 0.10-0.31) respectively, with a diagnostic odds ratio of 26 (95%CI: 13-52) and an SROC area of 0.87 (95%CI: 0.84-0.90), showing superior diagnostic efficacy compared with WLI. CONCLUSION Our results show that LCI can improve the efficacy of diagnosis of H. pylori infection and represents a useful endoscopic evaluation modality for clinical practice.
INTRODUCTION
Growing evidence supports the predominant role of Helicobacter pylori (H. pylori) infection in the development of gastric cancer, since the World Health Organization designated H. pylori a group 1 carcinogen in 1993. It has been widely accepted that H. pylori infection drives the progression from chronic atrophic gastritis through intestinal metaplasia to dysplasia [1]. Moreover, prolonged infection with H. pylori causes inflammation, abnormal cell proliferation, release of bacterial virulence factors, and nitrate reduction, all of which contribute to the development of gastric cancer [1]. Recent randomized controlled trials and meta-analyses have verified that H. pylori eradication therapy appears to reduce new-onset gastric cancer [2-5]. Therefore, from the perspective of clinical practice, it is important to accurately diagnose active H. pylori infection by endoscopic observation, given the prevalence of gastroscopy screening in the population.
The Kyoto classification of gastritis was advocated in 2013 to evaluate the gastric background mucosa by endoscopic features and, ultimately, to assess the risk of developing gastric cancer [6,7]. Some typical endoscopic findings of the gastric mucosa have been associated with active H. pylori infection, including diffuse redness, gooseflesh-like nodularity in the antrum, and enlarged folds, while a regular arrangement of collecting venules indicates H. pylori non-infection [8-10]. With advances in endoscopic techniques, it is feasible to diagnose the presence or absence of active H. pylori infection of the stomach using conventional white light imaging (WLI) and imaging enhancement endoscopy (IEE).
Linked color imaging (LCI) is a novel IEE mode recently launched by FUJIFILM Corporation (Tokyo, Japan), which uses a color tone similar to WLI while emphasizing minute differences in mucosal color [11]. In general, mucosal lesions that appear red or white under WLI appear redder or whiter under LCI, thereby making lesions more visible during screening. A growing number of studies have demonstrated that LCI endoscopy can markedly improve the visibility of diffuse redness and map-like redness, as well as atrophy and intestinal metaplasia, demonstrating the reliability of LCI in the recognition of gastritis and early gastric cancer [12-14]. Meanwhile, studies have also been conducted to evaluate the diagnostic performance of LCI endoscopy for H. pylori infection status. H. pylori-infected mucosa is redder than uninfected areas owing to post-inflammatory congestion and oedema [15]. Compared with WLI, this difference in coloration is amplified by LCI, which may allow easier identification of lesions suspicious for H. pylori infection by the endoscopist, increasing the accuracy of diagnosis of H. pylori infection. However, the difference between WLI and LCI in H. pylori diagnostic rates remains unknown. Hence, in the current study, we aimed to assess the diagnostic value of LCI for active H. pylori infection compared with WLI by performing a meta-analysis, to provide evidence for extending the clinical application of LCI endoscopy.
Literature search strategy
The English-language literature was searched using the electronic databases PubMed, Embase, the Cochrane Library and Web of Science. The search covered articles published up to April 15, 2022. The keywords used in the literature search were "linked color imaging" and "Helicobacter pylori infection", as well as their corresponding abbreviations.
Study inclusion and exclusion
Literature reviews, letters, meeting abstracts, and case reports were not included. In addition, duplicated data records were excluded. In all included studies, the diagnosis of active H. pylori infection under LCI endoscopy was ultimately confirmed by the rapid urease test, which is the most common test for the diagnosis of H. pylori infection. There were no restrictions on the age or sex of study participants.
Data extraction and quality assessment
Data extracted from each study included the following information: first author, year of publication, country, study design, study population, number of cases, endoscopic system, and test parameters (true positives, false positives, false negatives, and true negatives). The first and second authors screened the enrolled studies and extracted the relevant data. When critical data were not clearly stated, discrepancies were resolved through discussion with the corresponding author.
Risk of bias assessment
The quality assessment of diagnostic accuracy studies-2 (QUADAS-2) tool for diagnostic tests was used to evaluate the risk of bias [16]. The scale comprises an assessment of risk of bias and an assessment of applicability. The risk-of-bias assessment covers patient selection, the index test, the reference standard, and patient flow and timing. The applicability assessment covers three aspects: patient selection, the index test and the reference standard. In each domain, the risk of bias was rated as "high", "low", or "unclear".
Statistical analysis
The "midas" command of Stata 15.0 (StataCorp LLC, College Station, TX) was used to fit the two-variable mixed-effect model, and the point estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic ratio and their corresponding 95% confidence interval (CI) in each group were combined to draw the comprehensive subject working characteristics [symmetric receiver operator characteristic (SROC)], area under the curve (AUC) and its 95%CI were calculated.The Deek's funnel plot was used to determine publication bias, and Q statistics and I 2 statistics were used to determine whether there was heterogeneity between studies.Levels of 0%-25%, 26%-50%, 51%-75% and more than 75% indicate insignificant, low, moderate, and high heterogeneity respectively.P < 0.05 was considered statistically significant.
Search results and risk of bias
We initially identified 94 articles: 25 in PubMed, 16 in Embase, 19 in the Cochrane Library, and 34 in Web of Science. Careful review of titles and abstracts followed by full-text reading was performed independently by two reviewers, and the kappa value was calculated as 0.849. Finally, 7 research articles were selected [17-23] (Table 1). Of them, two studies evaluated the diagnostic performance of LCI using a computer-aided diagnosis (CAD) system or artificial intelligence (AI) rather than endoscopists. The specific literature screening process for the included studies is shown in Figure 1. The assessment of risk of bias is shown in Figure 2. Of the seven included studies, two used a case-control design, so there was some risk of bias in patient selection. In addition, in the study performed by Sun et al [21], both modalities were tested interchangeably in the same group, and the outcome data were not completely distinguished.
WLI has a moderate effect in detecting active H. pylori infection of gastric mucosa
For the overall detection of active H. pylori infection in the enrolled studies, WLI endoscopy had a moderate diagnostic performance, with high heterogeneity (I² = 97): pooled sensitivity = 0.63 (95%CI: 0.46-0.77) (Figure 3A), pooled specificity = 0.73 (95%CI: 0.66-0.78) (Figure 3B), positive likelihood ratio (PLR) = 2.32 (95%CI: 1.8-3.0) (Figure 3C, Supplementary Figure 1C), and negative likelihood ratio (NLR) = 0.51 (95%CI: 0.34-0.76) (Figure 3C, Supplementary Figure 1C). The posterior probability was calculated by plotting a Fagan diagram assuming a prior probability of 50%. When H. pylori infection was diagnosed based on WLI, the probability of confirmed H. pylori infection was 70%; in the case of a negative result, the probability of H. pylori infection was 34% (Figure 3C). In addition, the diagnostic odds ratio (DOR) was 5 (95%CI: 2-9), and the SROC area was 0.75 (95%CI: 0.71-0.78) (Figure 3D). The Deeks' funnel plot was used to evaluate publication bias; the P value was 0.12, indicating that the risk of publication bias was not significant (Supplementary Figure 1A). High heterogeneity existed among the studies, with I² = 97 (95%CI: 94-99). A bivariate box plot showed that two of the seven included studies (10 groups) fell outside the plot, suggesting that these two studies might be the main source of heterogeneity; the high heterogeneity of the enrolled publications may be caused by small sample sizes, study type and study population (Supplementary Figure 1B).
LCI shows superior efficacy in detecting active H. pylori infection of gastric mucosa
For LCI endoscopy, the pooled sensitivity was 0.85 (95%CI: 0.76-0.92), the pooled specificity 0.82 (95%CI: 0.78-0.85), the PLR 4.71 (95%CI: 3.7-5.9), and the NLR 0.18 (95%CI: 0.10-0.31), with a DOR of 26 (95%CI: 13-52) and an SROC area of 0.87 (95%CI: 0.84-0.90) (Figure 4), indicating better diagnostic performance than WLI.
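The posterior probabilities quoted for WLI (70% after a positive result, 34% after a negative one) follow from Bayes' theorem in odds form, which is what a Fagan nomogram reads off graphically: post-test odds = pre-test odds × likelihood ratio. A minimal Python sketch, not part of the original analysis, reproduces these figures and applies the same function to the pooled LCI likelihood ratios.

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form, as read off a Fagan nomogram."""
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# WLI, prior probability 50% (values quoted in the text above)
print(post_test_probability(0.50, 2.32))  # ~0.70 after a positive result
print(post_test_probability(0.50, 0.51))  # ~0.34 after a negative result

# The same calculation with the pooled LCI likelihood ratios
print(post_test_probability(0.50, 4.71))  # ~0.82 after a positive result
print(post_test_probability(0.50, 0.18))  # ~0.15 after a negative result
```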
DISCUSSION
Diagnosis of H. pylori infection status is a crucial step prior to assessing the risk of atrophy, intestinal metaplasia and H. pylori-associated gastric cancer, according to the current consensus strategy for the prevention and treatment of gastric cancer. However, in clinical practice the endoscopic diagnosis of H. pylori-associated gastritis often does not correspond with the histological findings [24]. Previous studies have shown that the accuracy of endoscopic diagnosis of H. pylori infection based on endoscopic appearance alone ranges from 64% to 71% [19,25]. This moderate accuracy suggests that endoscopy may not be a definitive method, but it can be an important part of a comprehensive diagnosis together with other invasive or noninvasive tests such as the biopsy-based rapid urease test or the urea breath test.
In the past decades, image enhancement techniques have upgraded conventional endoscopy to an indispensable test for the diagnosis of gastrointestinal diseases, including early malignancies. Emerging research has demonstrated that various types of IEE, such as blue laser imaging, narrow band imaging and LCI, can improve the accuracy of diagnosis of H. pylori infection status [20,26-28]. As the latest IEE technique, LCI endoscopy can theoretically highlight the color tone of the mucosa, thus improving the visibility of endoscopic features of active H. pylori infection, such as diffuse redness, mucosal edema, hemorrhagic spots, enlarged folds, and gooseflesh-like nodularity [29]. Correspondingly, growing evidence indicates that LCI endoscopy significantly improves the recognition of H. pylori-associated mucosal changes, helping to diagnose H. pylori infection more accurately than conventional WLI endoscopy [18-23].
The combined accuracy of LCI endoscopy in the diagnosis of active H. pylori infection found by our meta-analysis is clearly higher than that of conventional WLI endoscopy, as demonstrated by a sensitivity of 0.85 (95%CI: 0.76-0.92), a specificity of 0.82 (95%CI: 0.78-0.85), a PLR of 4.71 (95%CI: 3.7-5.9), and an NLR of 0.18 (95%CI: 0.10-0.31), with an AUC of 0.87. Although this accuracy is not high enough to be definitive, it clearly indicates the advantage of LCI endoscopy before patients with suspected H. pylori infection are subjected to invasive tests. Moreover, it has been shown that LCI endoscopy not only has good efficacy in the diagnosis of current H. pylori infection but is also superior in the diagnosis of other abnormalities of H. pylori-associated gastritis, such as gastric intestinal metaplasia and atrophy [14,30,31]. Some recent studies have further demonstrated better performance of LCI endoscopy, in comparison with WLI endoscopy or indigo carmine chromoendoscopy, in identifying characteristic mucosal appearances after successful H. pylori eradication, thus facilitating the recognition of early gastric cancer [32-35]. Therefore, LCI endoscopy shows potential as an important alternative endoscopic modality for gastrointestinal disease screening in the future, or at least as a feasible supplement to WLI endoscopy-based screening strategies. Our analysis has several limitations that may have influenced the results. Firstly, there are still few original studies on the diagnostic efficacy of LCI for H. pylori infection. The selected studies were almost all performed in single centers and enrolled relatively small patient samples, which restricted further subgroup analyses based on variables. Secondly, the enrolled studies performed different tests to confirm the diagnosis of H. pylori infection after LCI endoscopy, such as biopsy-based histological staining or the rapid urease test, the urea breath test, and serological tests. Thirdly, the two studies of Nakashima et al [17,18] reported inconsistent diagnostic accuracy of LCI for H. pylori infection when using AI or CAD instead of endoscopists: a sensitivity of 96.7%, specificity of 83.3%, and AUC of 0.95 for AI, and a sensitivity of 62.5%, specificity of 92.5%, and AUC of 0.82 for CAD. These problems may introduce heterogeneity into the analysis and contribute to instability of the results.
CONCLUSION
In summary, as a novel image enhancement endoscopy technique, LCI has been shown by growing evidence to significantly improve the accuracy of diagnosis of H. pylori infection, as well as of H. pylori-associated changes of the gastric mucosa, including atrophy and gastric intestinal metaplasia. Moreover, by emphasizing the difference in color tone between a lesion and the surrounding normal mucosa, LCI also shows promise in detecting early gastric cancer. Combined with current knowledge, it is anticipated that LCI endoscopy may be used alone for the detection of gastric diseases instead of WLI endoscopy in the future, while a screening strategy of LCI followed by magnifying IEE may theoretically have better clinical prospects for early cancer detection.
Figure 1 Flow diagram of specific literature searching process.
Figure 2 The assessment of bias risk.
Figure 3 Pooled results of the efficacy of white light imaging in Helicobacter pylori infection diagnosis. A-D: Pooled sensitivity (A), specificity (B), positive and negative likelihood ratios (C), and symmetric receiver operator characteristic curve with area under the curve (D). 95%CI: 95% confidence interval; SROC: Symmetric receiver operator characteristic; PLR: Positive likelihood rate; NLR: Negative likelihood rate.
Figure 4 Pooled results of the efficacy of linked color imaging in Helicobacter pylori infection diagnosis. A-D: Pooled sensitivity (A), specificity (B), positive and negative likelihood ratios (C), and symmetric receiver operator characteristic curve with area under the curve (D). 95%CI: 95% confidence interval; SROC: Symmetric receiver operator characteristic; PLR: Positive likelihood rate; NLR: Negative likelihood rate. | 2023-12-17T16:14:40.820Z | 2023-12-16T00:00:00.000 | {
"year": 2023,
"sha1": "8fbab74a4d45c0433f2f7b970424ae8fbf646e62",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4253/wjge.v15.i12.735",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae7156edf74894d31c415289ace4f2009ff5d703",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
38222913 | pes2o/s2orc | v3-fos-license | Occupational physical activity, energy expenditure and 11-year progression of carotid atherosclerosis
Occupational physical activity, energy expenditure and 11-year progression of carotid atherosclerosis. Scand J Work Environ Health. 2007;33(6):405–424. Objectives This study prospectively assessed the effects of occupational physical activity on atherosclerosis progression. Methods This population-based prospective study of ultrasonographically assessed carotid intima media thickness (IMT) used repeated measures of occupational physical activity at the baseline, 4-year, and 11-year examinations of 612 Finnish men 42–60 years of age at baseline. The association between five measures of energy expenditure and the 11-year change in maximum IMT was evaluated in regression models adjusting for 21 potential confounders, including biological factors, leisure-time physical activity, smoking, socioeconomic status, psychosocial job factors, and baseline health status. Results At baseline, 31% of all the men and 51% of those with ischemic heart disease (IHD) exceeded the recommended maximum levels of relative aerobic strain. All five measures of energy expenditure were significantly associated with the adjusted 11-year IMT change. Significant interactions were found between IHD and several measures of energy expenditure. Maximum relative aerobic strain was associated with a 90% increase in IMT among the men with IHD, compared with a 46% increase among those without IHD. The men with preexisting carotid stenosis also had higher rates of IMT progression than the men without this condition. Conclusions This study shows that high energy expenditure at work is associated with an accelerated progression of atherosclerosis even after control for virtually all known cardiovascular risk factors, especially among older workers and workers with preexisting IHD or carotid artery stenosis. The findings support the hemodynamic theory of atherosclerosis and have important implications for workplace surveillance and disease prevention.
In contrast to leisure-time physical activity, little is known about the cardiovascular disease (CVD) risks and benefits associated with occupational physical activity. Most epidemiologic studies to date either failed to differentiate between leisure-time and occupational physical activity or excluded occupational physical activity from their analyses altogether (1-4). While the beneficial effects of leisure-time physical activity on the circulatory system are relatively well established, the literature on the health effects of occupational physical activity remains inconsistent (5). Higher levels of occupational physical activity were associated with a reduced risk of CVD in some prospective population-based studies (6-11), showed no association in others (7,8,12-17), and were associated with an increased CVD risk in still others (18-21). A few studies showed differential effects, with leisure-time physical activity being protective and occupational physical activity having no effect (17,22), leisure-time physical activity having an effect only among persons with low levels of occupational physical activity (11), or leisure-time physical activity constituting a CVD risk (19).
Most studies used only crude categorical measures of occupational physical activity and did not assess changes in occupational physical activity during follow-up. The few studies that used a continuous measure of energy expenditure did not adjust for individual aerobic fitness, which determines the actual cardiovascular load at any given caloric job demand. In addition, only one study adjusted for psychosocial job factors (20), although job stress has been found to be an important risk factor for CVD in several studies (23-27) and could confound associations between occupational physical activity and CVD. Such limitations may be responsible for the inconsistent findings in the literature. The current study was designed to address these methodological issues by (i) using a validated interview instrument to assess occupational physical activity at baseline and repeatedly during 11 years of follow-up, (ii) using continuous rather than categorical exposure measures, (iii) supplementing absolute with relative measures of energy expenditure (relative aerobic strain, percentage of oxygen uptake reserve), and (iv) adjusting for a comprehensive set of 21 possible confounders including virtually all known biological, behavioral, and psychosocial risk factors.
Furthermore, this study circumvents the thorny issue of selection due to the so-called healthy worker effect by using the change in carotid intima media thickness (IMT) as the outcome measure instead of CVD symptoms or clinical events. Workers with impaired health often migrate into less demanding jobs, and this migration could lead to a spurious association between low occupational physical activity and morbidity or mortality outcomes. Ultrasound measurements in asymptomatic populations allow an examination of the relationship between work characteristics and atherosclerosis before disease-based selection effects occur (28,29). Ultrasound measurement of IMT in the carotid arteries has been shown to be reliable, to relate to the extent of disease in the coronary arteries, and to have predictive validity with regard to the risk of coronary events (29-32).
Building on the hemodynamic theory of atherosclerosis (33), this study used a biological model of disease causation that rests on established hemodynamic changes triggered by physical activity and an increased heart rate, resulting in changes in intravascular turbulence and wall shear stress causing injury and inflammatory processes in the arterial wall that manifest as atherosclerosis (20). Specifically, an increased heart rate shortens the cumulative time spent in systole when wall shear stress is optimal and leads to more time spent in diastole when wall shear stress fluctuates in a suboptimal range (33). Increased turbulence and the resulting reduction in shear stress at the arterial walls are considered some of the main hemorheologic phenomena that induce endothelial damage in human arteries (34,33,35). Such endothelial damage sets the stage for the absorption of lipids and other pathogenic substances and cells into the arterial wall, leading to an inflammatory process currently believed to be the basis of IMT, the formation of atherosclerotic plaques, and eventual stenosis of the arteries (36). Progression of lumen-reducing stenosis in turn will lead to suboptimal poststenotic wall shear stress because wall shear stress is an exponential function of vessel radius. These mechanisms were proposed as an explanation for the previously observed higher rate of progression of atherosclerosis associated with a standing work posture for people with preexisting beginning stenosis when compared with people without preexisting stenosis (20).
Reduced cardiorespiratory fitness due to a lack of training or preexisting ischemic heart disease (IHD) has also been associated with the progression of atherosclerosis (37). Again, the hemodynamic theory of atherosclerosis would explain this association through disproportionately elevated heart rates when such persons engage in demanding physical activities. On the other hand, engagement in physical activity can be expected to have a training effect that could lead to lower heart rates during daily activities and rest and thereby decelerate the progression of atherosclerosis. Therefore, two-sided statistical tests need to be applied in the study of the effect of occupational physical activity on atherosclerosis.
In accordance with the hemodynamic theory of atherosclerosis, four hypotheses were tested in this investigation. After taking leisure-time physical activity and other individual behavioral and biological risk factors, as well as psychosocial job factors, into account, we hypothesized that the progression of atherosclerosis is associated with (i) absolute levels of energy expenditure at work and (ii) relative levels of energy expenditure (relative aerobic strain and the percentage of oxygen uptake reserve). Furthermore, we hypothesized that (iii) any association of occupational physical activity with the progression of atherosclerosis is stronger in people with preexisting IHD or with (iv) preexisting carotid stenosis.
Study population
The participants were Finnish men 42-60 years of age at baseline who participated in the Kuopio Ischemic Heart Disease Risk Factor Study, a prospective population-based investigation of established and potential risk factors for heart disease and extracoronary atherosclerosis. Details of the study design have been published elsewhere (38,37). In all, 2682 men who resided in the town of Kuopio or its surrounding rural communities in eastern Finland participated in the study. Baseline data were collected for two cohorts: a random sample of 1166 men aged 54 years, initiated in March 1984, and an age-stratified random sample of 1516 men aged 42, 48, 54, or 60 years (participation rate 78%), initiated in August 1986.
Ultrasound measurements of IMT in the common carotid arteries were conducted beginning in March 1987 on 1229 men in the second cohort. These 1229 men were invited to participate in a follow-up assessment approximately 4 years after the baseline examination. By that time, 47 had died or were suffering severe illness, 37 had moved or could not be contacted, and 107 refused, leaving 1038 participants (participation rate 84.5%). Of these, 1007 men were alive prior to the start of a follow-up 11 years after the baseline examination. Follow-up examinations were scheduled between March 1998 and February 2001. During this time, 58 more men died before being examined, 38 had a severe illness, 27 had moved or could not be contacted, 25 refused, and 5 did not participate for other reasons, leaving 854 participants in the 11-year follow-up (participation rate 84.8%).
Of the 854 participants in the 11-year follow-up, 223 were excluded because they had not worked at all between the baseline examination and the 11-year follow-up, 2 because they did not participate in the 11-year ultrasound examination, 2 because of unreliable information on worktime (they had reported working 24 hours during their last workday, and no alternative information on typical workhours was available for them), and 15 because of missing values on one or more of the exposure variables, leaving 612 men for the analyses. Missing values for one or more of the covariates had been replaced by sample mean values for 11 men (ie, less than 1.8% of the observations). The follow-up time between the ultrasound examinations ranged from 9.23 to 13.82 (mean 11.13) years.
Assessment of atherosclerotic progression
Measurements of IMT were taken at approximately 100 sites along a 1.0-to 1.5-cm section of both the left and right common carotid artery below the carotid bulb using high-resolution B-mode ultrasonography. Measurements were made with the participants supine and the image focused on the posterior (far) wall. Additional technical details have been published elsewhere (29). IMT was measured as the distance from the leading edge of the first echogenic line to the leading edge of the second echogenic line. Maximum IMT was defined for the participants as the average of the maximum IMT values from the right and left common carotid arteries. The maximum narrowing of the lumen is the most relevant for arterial flow changes according to the hemodynamic theory. Our outcome measure was defined as the natural log of the maximum IMT at 11 years minus the natural log of the maximum IMT in the baseline examination. The reliability of the baseline and longitudinal ultrasonic measurements of carotid IMT is high (39,40,32).
Assessment of occupational physical activity
An interview on occupational physical activity was administered by trained interviewers at baseline and at the 4-year and 11-year follow-ups to the men who had worked at least some time in the past 12 months. The interview addressed a typical workday. The participants were asked, with an accuracy of 15 minutes, how long they had performed the following activities at work: sitting, standing, walking on level ground, walking on uneven ground, climbing stairs, or any other activities. The 12-month test-retest correlation for the occupational activity interview was 0.69, indicating good reliability of the instrument (41). The lifetime job stability of people living in the Kuopio region is relatively high (42), which reduced the probability of misclassifying work activities between the follow-up examinations.
A self-administered questionnaire was also completed at the baseline, the 4-year follow-up, and the 11-year follow-up; it provided information on work status (full-time work, part-time work, unemployment, retirement, not working for another reason). Those not currently working were asked about the year when an unemployment or retirement period began, the number of days worked per week in the last job, and the number of hours worked per day. For those working, workdays per week, the number of hours and minutes worked per day, and the number of days they missed work due to illness during the past 12 months were assessed.
The data from the self-administered questionnaire and the interview on occupational physical activity were linked to the pension registers of the Social Insurance Institution and the Central Pension Security Institute of Finland, covering all old-age, disability, and early retirement pensions of the participants from baseline through the end of May 2000. These administrative retirement data were used to obtain more exact retirement dates (month and year rather than just year) for the men who reported they had retired between the follow-up surveys. Occupation was assessed with a questionnaire and a 3-digit code according to the Finnish Classification of Occupations of Tilastokeskus (Statistics Finland).
Measures of energy expenditure
We estimated energy expenditure at work by using the interview data on time spent in various activities at work and combined this information with reference data giving the energy requirements [kcal/(kg·hour)] of these activities.
This method was used at baseline and in the 4- and 11-year follow-up surveys. In addition, cardiorespiratory fitness and the weight of the participants were measured at baseline. Other basic data were the number of days worked per week at each examination time and the dates of the ultrasound examinations. Finally, information about sick leave (only for the 12 months preceding the follow-up surveys), unemployment, and retirement (both for the entire follow-up period) was obtained to estimate the actual time spent working during each follow-up segment. These basic measurements provided the data for constructing five measures of energy expenditure at work that were used as predictors in this report. We first describe the basic measurement of energy expenditure per typical workday at each examination and the determination of cardiorespiratory fitness. Then we describe the five measures of work-related energy expenditure in more detail.
Energy expenditure per typical workday at baseline and after 4 and 11 years of follow-up. Energy expenditure reflects the duration and intensity of each type of occupational physical activity. The duration (hours/typical day) of the different physical activities at work was assessed in the occupational interview. The energy requirement of these activities was estimated as multiples of the resting metabolic rate (MET) in kilocalories/(kg·hour) of an average male, with values of 1.6 for work while sitting, 2.4 for standing, 3.3 for walking on level ground, 4.9 for walking on uneven ground, 7.3 for climbing stairs, and a mean value of 3.9 for other unspecified activities, on the basis of previously published data (43,44). Energy expenditure in kilocalories for each reported activity was calculated by multiplying the duration (hours per day) by the respective intensity (MET) and the body weight (kg) of the person. The sum of these estimates gives the energy expenditure in kilocalories per typical workday. These measures were obtained at baseline and in the 4-year and 11-year follow-up interviews.
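A minimal Python sketch of this calculation, using the MET values listed above; the worker profile (activity split and body weight) is a hypothetical example, not a study participant.

```python
# Energy requirement of each work activity, in MET units of kcal/(kg*hour),
# as listed in the text above.
MET = {
    "sitting": 1.6,
    "standing": 2.4,
    "walking_level": 3.3,
    "walking_uneven": 4.9,
    "climbing_stairs": 7.3,
    "other": 3.9,
}

def kcal_per_typical_workday(hours_by_activity, body_weight_kg):
    """Sum of duration (h) x intensity (kcal/(kg*h)) x body weight (kg)."""
    return sum(MET[a] * h * body_weight_kg for a, h in hours_by_activity.items())

# Hypothetical worker: 80 kg, 8-hour day split across activities.
day = {"sitting": 2.0, "standing": 3.0, "walking_level": 2.0,
       "walking_uneven": 0.5, "climbing_stairs": 0.5}
print(kcal_per_typical_workday(day, 80.0))  # ~1848 kcal/typical workday
```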
Cardiorespiratory fitness at baseline. Cardiorespiratory fitness (aerobic capacity or maximal oxygen uptake) was assessed by a maximal but symptom-limited exercise test on an electrically braked ergometer, as explained in detail elsewhere (45,37,46). Oxygen consumption was measured using an analysis of respiratory gas exchange. Maximal oxygen uptake (VO2max) was defined as the highest value or the plateau in oxygen uptake, standardized by body weight and measured as milliliters of oxygen per kilogram per minute.
Using the basic data already presented, the following five measures of work-related energy expenditure were constructed and used as predictors of the carotid artery changes during follow-up: (i) energy expenditure per typical workday at baseline, (ii) total amount (volume) of energy expenditure at work during 11 years of follow-up, (iii) energy expenditure per potential 8-hour standard workday during follow-up, (iv) relative aerobic strain (%VO2max) at baseline, and (v) percentage of oxygen uptake reserve (%VO2Res).
Energy expenditure per typical workday at baseline is simply the baseline assessment of energy expenditure per typical workday according to the described method. It does not take account of any changes in the duration or mix of activities during the follow-ups, nor does it account for such things as periods of unemployment or the termination of work due to retirement. In contrast, the following measure does take such changes into account.
Total work-related energy expenditure was first calculated separately for the two follow-up segments, 0-4 years and 4-11 years, and their results were added to get the result for the full follow-up period of 0-11 years. The first step in these calculations was to determine the number of calendar days (including weekends) during each of the two follow-up segments defined by dates of the "bracketing" ultrasound measurements. The total work-related calendar time in each segment was reduced by vacation, unemployment, sick leave, and retirement. Then, in each segment, the resulting actual duration of worktime (in calendar days) was multiplied by the average of the energy expenditures (kilocalories/calendar day) at the beginning and end of the segment. At each examination time, kilocalories/calendar day was obtained by multiplying the energy expenditure per typical workday by the number of workdays per week divided by 7. This latter factor distributed the energy expended in the workweek over the 7-day calendar week.
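A minimal Python sketch of the segment-wise accumulation just described, with hypothetical durations and rates: each segment's total is the actual working calendar time multiplied by the average of the kcal/calendar-day rates at its two endpoints.

```python
def kcal_per_calendar_day(kcal_per_workday, workdays_per_week):
    # Distribute the workweek's energy over the 7-day calendar week.
    return kcal_per_workday * workdays_per_week / 7.0

def segment_total_kcal(calendar_days, nonwork_days, rate_start, rate_end):
    """Total work-related energy expenditure in one follow-up segment.

    calendar_days: days between the bracketing ultrasound examinations
    nonwork_days:  vacation + unemployment + sick leave + retirement days
    rate_start/rate_end: kcal/calendar day at the segment's endpoints
    """
    actual_work_days = calendar_days - nonwork_days
    return actual_work_days * (rate_start + rate_end) / 2.0

# Hypothetical example: 4-year segment, 5-day workweeks, ~2000 kcal/workday.
r0 = kcal_per_calendar_day(2000, 5)   # ~1429 kcal/calendar day at baseline
r4 = kcal_per_calendar_day(1900, 5)   # ~1357 kcal/calendar day at 4 years
print(segment_total_kcal(4 * 365, 200, r0, r4))  # total kcal over the segment
```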
Energy expenditure per potential 8-hour standard workday during the follow-up is the ratio of the total energy expended during the actual worktime from baseline to 11 years divided by the calendar time during which the participant could potentially have worked during the follow-up period [assuming regular standard 8-hour workdays, 5-day workweeks, and 46 workweeks per year (representing the Finnish standard of 1840 workhours per year)]. In other words, total energy expenditure during the follow-up is expressed as an intensity measure calibrated to the available standard workdays during the same period. If each participant had worked the standard worktime between the baseline and their final follow-up examination, this measure would be perfectly correlated with the total work-related energy expenditure during 11 years of follow-up. It differs from that measure by accounting for some person-to-person variation in the typical length of workdays and workweeks and in the duration of employment during the follow-up. In contrast to the relative measures of energy expenditure taking cardiorespiratory fitness into account (described next), this measure takes into account the potentially available number of regular standard workdays between the ultrasound examinations for each person.
Relative aerobic strain (%VO2max) is a relative energy expenditure measure that expresses the caloric demands of work as a percentage of the individual worker's aerobic cardiorespiratory fitness or maximal work capacity (47). It has traditionally been used to define recommended maximum levels of aerobic work demands. The assessment of %VO2max was based only on the measurement obtained at baseline, since this was the only examination time for which data on maximal oxygen uptake were generally available.
The percentage of oxygen uptake reserve is an alternative relative energy expenditure measure that expresses the caloric demands of work in relation to the individual worker's aerobic cardiorespiratory fitness or maximal work capacity as a percentage of oxygen uptake reserve (%VO2Res) (48). While %VO2max is based on the total energy expenditure at work, including the energetic cost of the metabolic rate for both rest and work activity, %VO2Res is based on the energy expenditure associated with the work activity only and is measured as %VO2Res = (VO2work - 3.5) / (VO2max - 3.5) × 100%, because the resting energy expenditure is 1 MET = 3.5 ml O2/(kg·minute) (49,48). In our study, VO2work was determined by calculating the weighted average MET during work activities, based on the occupational interview, multiplied by 3.5 ml/(kg·minute). Recently, %VO2Res has been suggested as the preferred measure of relative energy expenditure for use in job analyses and epidemiologic field studies because it allows for more adequate comparisons than %VO2max when energy expenditure varies greatly in the study population. A further advantage of this measure is the fact that, in contrast to %VO2max, %VO2Res corresponds directly to the percentage of heart rate reserve, which can be measured more easily in the field than %VO2Res itself (48).
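Both relative measures can be written directly from the definitions above; a minimal Python sketch with hypothetical example values:

```python
REST_VO2 = 3.5  # 1 MET = 3.5 ml O2/(kg*minute) at rest

def percent_vo2max(mean_work_met, vo2max):
    """Relative aerobic strain: work oxygen demand as a % of VO2max."""
    vo2_work = mean_work_met * REST_VO2          # ml O2/(kg*minute)
    return vo2_work / vo2max * 100.0

def percent_vo2_reserve(mean_work_met, vo2max):
    """Oxygen uptake reserve used by the work activity, excluding rest."""
    vo2_work = mean_work_met * REST_VO2
    return (vo2_work - REST_VO2) / (vo2max - REST_VO2) * 100.0

# Hypothetical worker: time-weighted mean work intensity of 3.0 MET,
# measured VO2max of 32 ml/(kg*minute).
print(percent_vo2max(3.0, 32.0))        # ~32.8 %VO2max
print(percent_vo2_reserve(3.0, 32.0))   # ~24.6 %VO2Res
```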
Proportion of workers exceeding the recommended maximum level of aerobic strain at work. A maximum relative aerobic strain of 33% VO2max has traditionally been recommended as a safe level of aerobic work demands for a typical 8-hour workday, on the basis of the physiological criteria of a steady state of blood lactate or heart rate (50,51). However, no widely accepted recommendations are available for non-8-hour workshifts, to which an increasing proportion of workers is being exposed. Rogers et al (50) adapted the 8-hour standard to 4-, 10-, and 12-hour workshifts. A recent laboratory study by Wu & Wang (52) among seven young males suggests that the recommendations need to be adjusted upward to 34% for 8 hours, further upward to about 43.5% for a 4-hour day, and downward for longer shifts to about 28.5% for a 12-hour day, on the basis of a steady-state heart rate plus a maximum of 10 beats at the end of the work period as the criterion for sustainable maximal work effort. However, empirical laboratory data were not gathered beyond 10-hour periods, and extrapolation to longer shifts may be problematic. One feature of Wu & Wang's exponential function, used to fit the data, is that the maximum allowable %VO2max is not substantially reduced for workshifts that exceed 8 hours. For example, the result of 34% VO2max for an 8-hour workday changes to 32.4%, 31%, 29.7%, 28.5%, 27.4%, 26.4%, 25.4%, and 24.5% for workdays of 9, 10, 11, 12, 13, 14, 15, and 16 hours, respectively. These limited reductions for longer workshifts seem somewhat implausible, but, unfortunately, the literature does not provide an empirically based alternative to this extrapolation outside the range of the empirical data used by Wu & Wang to construct the formula. In acknowledgement of these uncertainties, we present the proportion of workers exceeding recommended levels of energy expenditure according to two assessment methods, the first assuming that all men work standard 8-hour days and the second adjusting for the length of workdays based on Wu & Wang's results.
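The exact function published by Wu & Wang is not reproduced in the text, but the quoted values for 8-16 hour days suffice to fit an illustrative exponential-decay-to-asymptote curve of the general shape described. The Python sketch below is a reconstruction for illustration only, not the published formula.

```python
import numpy as np
from scipy.optimize import curve_fit

# Maximum allowable %VO2max values quoted in the text for 8-16 hour days.
hours = np.array([8, 9, 10, 11, 12, 13, 14, 15, 16], dtype=float)
limits = np.array([34.0, 32.4, 31.0, 29.7, 28.5, 27.4, 26.4, 25.4, 24.5])

def decay(t, a, b, k):
    # Generic exponential decay toward an asymptote; an illustrative
    # reconstruction from the quoted values, not Wu & Wang's own formula.
    return a + b * np.exp(-k * t)

(a, b, k), _ = curve_fit(decay, hours, limits, p0=(20.0, 40.0, 0.1))
print(f"fit: {a:.1f} + {b:.1f}*exp(-{k:.3f}*t)")
print(f"4-hour limit ~ {decay(4.0, a, b, k):.1f}%")    # text quotes ~43.5%
print(f"12-hour limit ~ {decay(12.0, a, b, k):.1f}%")  # text quotes ~28.5%
```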
Assessment of covariates
The 21 covariates used in the multivariate analyses can be grouped into the following five categories: (i) age and technical factors [participation in an unrelated lipid-lowering trial, baseline maximum IMT values, and the physician performing the 11-year sonography (all baseline ultrasounds were conducted by the same person)], (ii) biological factors [blood glucose, fibrinogen, serum low-density lipoprotein (LDL) cholesterol, serum high-density lipoprotein (HDL) cholesterol, use of cholesterol-lowering medication, systolic blood pressure, use of blood-pressure-lowering medication, and body mass index], (iii) behavioral factors (alcohol use, smoking, conditioning leisure-time physical activity, and cardiorespiratory fitness), (iv) socioeconomic status measured by personal income, and (v) psychosocial work-related factors (social support from co-workers or supervisors, stress from work deadlines, and mental strain at work). Socioeconomic status, psychosocial work factors, and all of the behavioral factors except cardiorespiratory fitness were assessed by self-administered questionnaires at baseline and at the 4-year and 11-year follow-ups. A complete list of covariates and their distributions is provided in appendix A. Details of the measurement of these variables have been described previously (53,54). In the following, we give a short summary of the measurement of some of the key covariates.
Blood pressure was measured with a random-zero sphygmomanometer after 5 minutes of rest in a supine position. Three measurements were then taken while the participant was still supine, one while standing, and two while sitting, in that order. The average of these six measurements was used in our analyses. Body mass index (BMI) was defined as weight in kilograms divided by height in meters squared at baseline. The use of cholesterol- and blood-pressure-lowering medications was assessed by questionnaire.
Alcohol consumption in grams per week during the past 12 months was assessed with a structured quantity-frequency method using the Nordic Alcohol Consumption Inventory (55). Cigarette use was a four-level categorical variable (never smoker, former smoker, irregular smoker, regular smoker). In the preliminary analyses, tertiles of regular smoking were used. The tertiles were then collapsed into one category ("current smoker") because the effect sizes were very similar across these tertiles and the confidence intervals overlapped widely. Conditioning leisure-time physical activity, in hours per year, was measured using a modified version of the Minnesota Leisure Time Physical Activity questionnaire (56) that included the 16 most common leisure-time physical activities of middle-aged Finnish men (43). The respondents were asked to estimate the duration, frequency, and intensity of each of the 16 activities for each of the 12 previous months. Hours of conditioning physical activities with a mean intensity of 6.0 MET have been associated with a decreased risk of myocardial infarction in this cohort (45). Cardiorespiratory fitness (VO2max), based on respiratory gas exchange, was measured as milliliters per kilogram per minute by a maximal symptom-limited bicycle ergometer test at baseline (45).
Socioeconomic status was measured by personal income in Finnish marks, social support at work from co-workers and supervisors was measured by several standard items, stress from work deadlines was measured by one item, and a 10-item mental-strain index measured job stress as described previously (54).
For most of the continuous predictors, averages of the baseline, 4-year, and 11-year values were used in all of the regression models. For the continuous variables that have previously been linked to CVD outcomes and that are known to be influenced by physical activity (HDL, LDL, BMI, and VO 2 max) only the baseline values were used in order to avoid overadjustment for occupational physical activity measured during the course of the follow-up. There may still have been some overadjustment because the baseline values partly reflect past occupational exposures that are often highly correlated with current exposures. For cholesterol-and blood-pressure-lowering medications, the analyses used the proportion of examinations when medication use was reported.
Assessment of cardiovascular health at baseline
Ischemic heart disease. The participants with existing IHD at baseline were those who (i) had a history of prior myocardial infarction or angina pectoris, (ii) currently used anti-angina medication, or (iii) had positive findings of angina according to the London School of Hygiene cardiovascular questionnaire (57).
Carotid artery stenosis. Baseline IMT recordings were classified by one physician, blind to other measures, into the following four categories: (i) no atherosclerotic lesion, (ii) IMT, (iii) nonstenotic plaque, and (iv) large stenotic plaque. IMT (category 2) was defined as more than 1 mm between the lumen-intima interface and the media-adventitia interface in the common carotid arteries below the bulb. Nonstenotic plaque was defined as a distinct area of mineralization or focal protrusion into the lumen. A plaque was defined as stenotic if it obstructed more than 20% of the lumen diameter, and this definition constituted carotid artery stenosis in this study (29). The participants were not informed about these ultrasound results, except for a limited number of examinees who were judged to require medical attention.
Statistical methods
The baseline characteristics of the men with and without cardiovascular disease (IHD or carotid stenosis) were compared using t-tests for continuous variables and chi-square tests for categorical variables.
To study the progression of maximal intima media thickness (maxIMT) over 11 years of follow-up, we used a multiple linear regression analysis implemented in Stata 9.1 (Stata Corporation, College Station, TX, USA). The outcome for these analyses was [ln(yF) - ln(yI)]/Δt, where yI is the initial maxIMT at baseline and yF is the final maxIMT at the follow-up examination Δt years after the baseline examination. The maxIMT values at baseline and follow-up were ln-transformed because this procedure normalized the originally skewed maxIMT measurements. In addition, the residual distribution of the changes in ln(maxIMT) was more nearly normal than that of changes based on untransformed maxIMT. The division by Δt handles variation from the nominal follow-up time of 11 years by expressing change on a per-year basis. In these analyses, we included a predictor based on a measure of energy expenditure along with some or all of the 21 covariates listed in appendix A. Continuous covariates were centered at the mean if they had no natural interpretation of zero values.
The use of changes in the ln-transformed maxIMT leads naturally to an interpretation of the results in terms of relative change and percentage of change. Relative change is RC = yF/yI, and note that [ln(yF) - ln(yI)]/Δt = ln(yF/yI)/Δt = ln(RC)/Δt. Consequently, for any specified values of the predictors, the fitted model provides a way to estimate the average ln(RC)/Δt, and the corresponding average relative change over K years of follow-up is RC(K) = f·exp(K·E[ln(RC)/Δt]), where f is the back-transformation correction factor, which, with our data, was so close to 1 that it had no effect. Correspondingly, the average percentage of change for K years is 100·[RC(K) - 1]. Several tables present the estimated expected average percentage of change over 11 years using the coefficients from the fitted model. We calculated the estimated relative change at the minimum, median, and maximum value of each energy expenditure measure. Other variables were set to zero, which corresponds to using the mean value for centered continuous variables and the reference level (coded 0) for categorical predictors. We also studied whether the association between the energy variable and the outcome differed between the subgroups with and without IHD at baseline. Similar subgroup-specific results were examined for the subgroups with and without carotid stenosis at baseline.
The relative change ratio (RCR), defined as the ratio of the relative change at a comparison level of a predictor of interest divided by the relative change at a reference level of the predictor, provides a summary measure of the association between an energy measure, x1, and the outcome. The RCR depends on the years of follow-up (K). With a multiple regression model, E[ln(RC)/Δt] = B0 + B1x1 + … + Bpxp, in which there are no interaction terms involving the predictor x1, the RCR for K years of follow-up is RCR = exp(B1·D1·K), where D1 = x1C - x1R is the difference between the comparison level and the reference level of the predictor x1.
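A minimal numerical sketch of these back-transformations in Python; the coefficient values and exposure levels below are hypothetical, not the paper's fitted estimates.

```python
import math

def relative_change(b_lnRC_per_year, years):
    """RC(K) = exp(K * E[ln(RC)/dt]); correction factor f taken as 1."""
    return math.exp(b_lnRC_per_year * years)

def percent_change(b_lnRC_per_year, years):
    return 100.0 * (relative_change(b_lnRC_per_year, years) - 1.0)

def rcr(b1, x_comparison, x_reference, years):
    """Relative change ratio for predictor x1 over K years of follow-up."""
    return math.exp(b1 * (x_comparison - x_reference) * years)

# Hypothetical: a fitted average ln(RC)/year of 0.024 implies a ~30% IMT
# increase over 11 years, the order of magnitude observed in the cohort.
print(percent_change(0.024, 11))   # ~30.2%
# Hypothetical slope per 1000 kcal/day, compared at 3000 vs 1000 kcal/day.
print(rcr(0.005, 3.0, 1.0, 11))    # RCR ~ 1.12
```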
To check the adequacy of a simple linear representation of the energy expenditure variables, we assessed whether a significantly improved fit resulted from using both linear and quadratic terms in the fully adjusted model. Models without the quadratic term were not rejected in favor of those with the quadratic terms.
Characteristics of the study population
At baseline, the average age was 49.5 (SD 5.9) years, with 203 men aged 42 years, 184 aged 48 years, 167 aged 54 years, and 58 aged 60 years. Conditioning leisure-time physical activity averaged 119 (SD 98) hours per year, the mean BMI was 25.6 (SD 3.2) kg/m², alcohol consumption averaged 78 (SD 99) grams per week, and 25.5% were regular smokers. The distributions of all of the independent variables by IHD at baseline are listed in appendix A. The men with IHD were older, earned less, reported more mental strain at work, had higher levels of fibrinogen, and had lower values for blood pressure and cardiorespiratory fitness. As expected, the men with IHD spent less energy per potential standard workday than those without IHD. However, they were exposed to higher levels of energy expenditure at work than the men without IHD with respect to all of the other energy expenditure measures. Differences were also found between the men with and without carotid stenosis at baseline (data not shown).
Progression of atherosclerosis
The maxIMT at baseline averaged 0.91 (SD 0.21, range 0.54-2.62) mm. The average change in maxIMT was 0.027 (SD 0.017, range -0.033 to 0.095) mm per year, corresponding to a 0.33 (SD 0.24, range -0.82 to 1.75) mm change during the entire 11-year follow-up. This report focuses on the percentage of change in the maxIMT, which averaged 2.72% per year and 29.9% (95% CI 28.5%-31.4%) for the entire 11.13-year follow-up. Table 1 shows the distribution of the energy expenditure measures by age cohort and survey time. At baseline, the energy expenditure per typical workday ranged from 616 to 5418 kcal/day with an average of 2046 kcal/day. The average had changed little after 4 years (2032 kcal/day) and had dropped slightly to 1916 kcal/day after 11 years. The measures of relative energy expenditure showed an increase with age, indicating that physical demands at work were relatively higher for the older workers than for the younger workers. Table 2 shows the proportion of men by age group that exceeded the recommended maximum levels of %VO2max [ie, 33% for work involving mostly lower extremities according to method 1 (50,51,58) and 34% according to method 2]. At baseline, 29.6% (method 1) to 31.2% (method 2) of all the men exceeded these levels, and there was a monotone increase in the proportion of men exposed to excessive levels of %VO2max, from about 20% of the men in the youngest age group up to 53% of the men in the oldest age group. The proportion of men experiencing excessive aerobic strain was higher (50-52%) among those with IHD than among those without IHD (26-28%).
Energy expenditure at work
By occupational group, the recommended (method 1) level was exceeded by 70% of the 71 men working in agriculture, forestry, or commercial fishing, 44% of the 191 men working in manufacturing or construction, 26% of the 27 service workers, 25% of the 60 men in sales, 24% of the 58 men employed in transport or communication, 5% of the 55 men employed in administrative, managerial, or clerical jobs, and 5% of the 139 in technical, science, or artistic work (data not shown). To check for thresholds and nonmonotone dose-response relationships, we explored models using categorical exposure measures and also entered quadratic terms of the exposure variables into the model. These models did not provide any evidence for thresholds and confirmed a positive monotone exponential association between all of the energy expenditure measures and the change in IMT. Table 4 shows the percentage of change in IMT for all of the men during the 11-year follow-up at the minimum, median, and maximum levels for each alternative measure of energy expenditure, together with the respective relative change ratios. The highest change was observed for the men at the maximum %VO2Res (60%, 95% confidence interval 38%-85%). Table 5 shows the percentage of change in IMT separately for the men without and with preexisting IHD and their respective relative change ratios. Across all of the exposure measures, the men with IHD experienced consistently higher rates of IMT change than the men without IHD at baseline. Significant interactions (P<0.10) were found between IHD and the kilocalories per typical day at baseline, the %VO2max, and the %VO2Res. At a %VO2max of 119%, the 11-year change in IMT among the men with IHD (90%) was nearly twice as high as among those without IHD (46%). Figure 1 shows the same comparison for the minimum (6), mean (22, not median), and maximum (142) %VO2Res values. Table 6 shows the percentage of change for IMT and the RCR values separately for the men without and with preexisting carotid artery stenosis. Across all of the exposure measures, the men with carotid stenosis experienced consistently higher rates of IMT change than the men without preexisting stenosis. Significant interactions (P<0.20) were found between carotid stenosis and the total amount of energy expenditure and the kilocalories per potential 8-hour standard workday.
Energy expenditure and the percentage of change in intima media thickness by baseline cardiovascular health status
There was some overlap between the two cardiovascular health status subgroups in that 40.3% of the men with IHD also had stenosis of the carotid arteries and 24.4% of the men with carotid stenosis also had IHD.

Table 3. Relative change ratio (RCR) for maximum intima media thickness (IMT) over the 11-year follow-up with 95% confidence intervals (95% CI), by measure of energy expenditure - results from the multiple regression analyses with incremental adjustment for the covariates (all men, N=612). (%VO2max = relative aerobic strain, %VO2Res = percent of oxygen uptake reserve)
Progression of atherosclerosis
During the entire 11-year follow-up, we observed an average change of 0.33 (SD 0.24, range -0.82 to 1.75) mm in the maximum IMT and 0.20 (SD 0.16, range -0.55 to 1.25) mm in the mean IMT. Changes of this magnitude are within the range reported in other studies (59). Such changes may be clinically significant because previous studies have shown that cross-sectional differences of the order of 0.1 mm are associated with an 11% increase in the risk of acute myocardial infarction (60,61). While most of the participants experienced an increase in maxIMT over time, a minority of 4.25% experienced a decrease, pointing to the dynamic nature of the atherosclerotic disease process or to inherent measurement error in assessing change.
Observed levels of energy expenditure
Counter to the widely held belief that most workers in industrialized countries lead a sedentary lifestyle devoid of aerobic strain, the study participants were physically rather active both during leisure and at work. On average, these middle-aged Finnish men participated in 20 minutes of conditioning leisure-time physical activity per day (119 hours per year). At work, nearly one-third of all the participants exceeded the maximum level of 33% VO2max recommended for 8 hours of work (47,50,51). It should be noted that this recommendation was based on prolonged dynamic work of large muscle groups used in walking or bicycling and that acceptable physical workloads are smaller for work involving the upper extremities or static work (62). The older men were exposed to much higher relative levels of energy expenditure than the younger men; this finding indicates that the absolute caloric demands at work remained unchanged for these men even as their aerobic capacity decreased with age. In general, aerobic capacity declines by 1-2% per year after 25 years of age (63). Over 50% of the men with IHD exceeded the recommended maximum levels. The men working in agriculture, forestry, fishery, manufacturing, or sales were the most often exposed to levels exceeding the recommendations. The average energy expenditure at work remained rather constant over time, indicating that caloric work demands did not decline over the 11-year period as these middle-aged men grew older (data not shown). Another representative population study from the mid-1990s in Sweden found that 27% of working men and 22% of working women were required to do work that exceeded their aerobic capacity (64). These findings indicate that physically demanding work is still a prevalent feature of some occupational groups even in so-called modern service economies. Exposure is not limited to manufacturing, mining, or farming. In fact, many service workers perform their work standing or walking for many hours (eg, in sales, health care, or distribution). These upright work activities lead not only to high levels of energy expenditure, but also to additional cardiovascular strain due to venous pooling in the legs and the resultant increases in heart rate and blood pressure (20). For example, 62% of the male employees in Quebec, Canada, perform their work in a predominantly standing posture (65). In emerging market economies with more agriculture and manufacturing jobs that already require larger amounts of physical labor, this problem is often compounded by unregulated work and recovery times that lead to regular workdays that may exceed 12-14 hours. Safe levels of energy expenditure under these circumstances are likely to be considerably lower than the maximum of 33-34% VO2max recommended for an 8-hour workday (50,52), although more empirical work is needed to determine safe levels of energy expenditure and work-rest patterns for long work shifts.
Misclassification of exposure may have occurred because the type and duration of the work activities were based on self-reported data rather than on direct observations and because the assessment of energy expenditure did not include upper-extremity work or the handling of external loads, but instead was limited to the energetic costs of moving one's own body or maintaining one's body posture (sitting, standing, walking, and climbing stairs). The amount of static work and the ambient temperature were also not accounted for, and the average MET values assigned to work activities may differ according to the individual body composition of fat and fat-free mass (66). Therefore, we believe that our estimates are conservative and underestimate the actual amount of energy expended at work. On the other hand, the nearly exclusive focus on lower-extremity activities in the computation of energy expenditure increases the validity of the measures of relative energy expenditure based on the use of bicycle ergometer tests for the determination of maximum aerobic capacity (62,67).
In addition, the use of a validated detailed occupational interview is an important methodological improvement over the methods used in most population-based studies of CVD that used limited exposure information from questionnaires, typically yielding only broad exposure categories with lower power to detect any associations (3).

Table 6. Change in maximum intima media thickness (IMT) at the minimum, median, and maximum levels of energy expenditure, measures of association between energy expenditure and IMT progression [relative change ratio (RCR)], and interactions between energy expenditure and the baseline status of carotid artery stenosis over the 11-year follow-up - results from the multiple regression analyses with adjustment for 21 covariates (N=612). (Kcal per workday = kilocalories per typical workday at baseline, Total kcal during follow-up = total number of kilocalories during the follow-up, Kcal per potential workday = kilocalories per potential 8-hour standard workday during the follow-up, %VO2max = relative aerobic strain at baseline, %VO2Res = percent of oxygen uptake reserve at baseline)
Associations between energy expenditure and the progression of atherosclerosis
Higher levels of energy expenditure at work were significantly associated with an increased progression of carotid atherosclerosis regardless of the type of exposure measure used and even after control for a total of 21 potential confounders, including several not controlled for in previous studies, such as leisure-time physical activity, psychosocial job factors, and fibrinogen, among others. The observed increases in the rate of progression of atherosclerosis were consistently higher when better exposure measures were used, whether using a cumulative repeat-exposure measure alone, a cumulative repeat-exposure measure relative to the number of standard workdays, or baseline information relative to the individual worker's aerobic capacity. [See table 3.] It is also noteworthy that adjustment for behavioral factors increased the risk estimates associated with occupational physical activity (model 3); this finding indicates negative confounding. Risk estimates remained at this level even after further adjustment for income, mental stress at work, stress from work deadlines, and social support from supervisors and co-workers (model 4). This finding indicates that the effects of energy expenditure are independent of socioeconomic status and psychosocial job factors that were associated with 4-year IMT progression in this study population, as reported previously (26).
Our analyses indicate an exponential dose-response relationship between both absolute and relative measures of energy expenditure and the progression of atherosclerosis. On the basis of these findings, the hypothesis of a net protective effect of occupational physical activity (via a training effect) on the progression of atherosclerosis needs to be rejected. This result is in line with previous observations from work physiology and epidemiologic studies that physical workload does not have the same training effects on individual work capacity as aerobic physical exercise does (68,69,51,70,71). Instead, our findings are consistent with the hypothesis of an atherogenic effect of occupational physical activity, in which an increase in energy expenditure at work is associated with the progression of atherosclerosis.
Interaction of energy expenditure with baseline cardiovascular disease
IHD at baseline showed strong interactions with energy expenditure. Similarly, preexisting carotid artery plaque or stenosis in combination with energy expenditure increased the progression of atherosclerosis. These results are consistent with the findings of an earlier 4-year prospective study in this population, which showed that prolonged standing at work (presumably leading to venous pooling and compensatory increases in heart rate) was significantly associated with the progression of atherosclerosis, with effects significantly stronger among the men with preexisting IHD or carotid artery stenosis than among the healthy men (20).
These findings are consistent with the hemodynamic theory of atherosclerosis. Departing from the clinical fact (corroborated in this study) that men with IHD have a lower aerobic capacity, the hemodynamic theory predicts that men with IHD will respond to identical physical demands with higher elevations in heart rate than healthy men do, leading to increased intravascular turbulence and suboptimal wall shear stress, which has been implicated in the atherosclerotic disease process as described earlier and in detail elsewhere (34,33,35,36,20). Similarly, the hemodynamic theory predicts that arterial stenosis leads to increased poststenotic turbulence and suboptimal wall shear stress as well. Elevated blood pressure, especially after static work, is another potential mechanism linking occupational physical activity and the progression of atherosclerosis (72-74).
Comparison with other studies and known risk factors
We are aware of only one previous study that examined the relationship between occupational physical activity and IMT. That study found a significant positive association between occupational physical activity and IMT, but only for blacks. This finding needs to be interpreted with caution because of the cross-sectional design, the use of crude measures of occupational physical activity based on occupational ratings, and the failure to adjust for leisure-time physical activity (75).
While some beneficial effects of leisure-time physical activity on the circulatory system are relatively well established, the literature about the health effects of occupational physical activity remains inconsistent (5). Research in the 1950s and 1960s, comparing different occupational groups, identified sedentary work as an important cardiovascular risk factor (76-79). However, these earlier studies were vulnerable to alternative explanations because of selection bias and uncontrolled confounding. For example, in their pioneering work, Morris et al attributed the lower risk of coronary heart disease among London bus conductors versus drivers to the sedentary work of the drivers (79). Since then, research has shown that the excess risk of cardiovascular disease among urban bus drivers is not experienced by rural bus drivers, and, for urban bus drivers, is independent of both leisure-time physical activity and occupational physical activity (80,81). Instead the excess risk is now thought to be attributable to the high levels of job stress experienced by urban bus drivers (82-85), a factor that may have confounded the reported association with sedentary work (5,21). Clearly, the lack of control for psychosocial job factors in most previous studies of occupational physical activity is a major limitation of the literature. The current study overcomes this limitation by adjusting for several psychosocial job factors that were shown to be associated with the progression of atherosclerosis (26), myocardial infarction, and cardiovascular and all-cause mortality in this population (54). Similarly, in contrast to most other published studies, our study examined the independent effects of occupational physical activity by controlling for conditioning leisure-time physical activity in multivariate models.
Leisure-time physical activity did not predict the 11-year progression of atherosclerosis in any of our 20 regression models (P-values ranging from 0.21 to 0.84), confirming the results of an earlier 4-year prospective study of this cohort that found no significant associations of conditioning or nonconditioning leisure-time physical activity with the 4-year progression of atherosclerosis (37). However, the energy expenditure of conditioning leisure-time physical activity showed a trend toward an inverse association with the 4-year change in IMT (37). These findings deserve further inquiry because conditioning leisure-time physical activity, the measure used in our study, had been associated with a reduced risk of myocardial infarction in an earlier study of this population (45). One possible explanation for these paradoxical findings is that typically only men with low levels of occupational physical activity engage in conditioning leisure-time physical activity outside of work, so that occupational physical activity is inversely associated with leisure-time physical activity and may mask an inverse relationship of leisure-time physical activity with IMT. In fact, we observed a negative, albeit modest, correlation between conditioning leisure-time physical activity and baseline energy expenditure at work (correlation = -0.19) or relative aerobic strain (correlation = -0.22) in our study population. Our findings therefore raise the question of whether previous reports on the benefits of leisure-time physical activity in studies not controlling for occupational physical activity could be due to uncontrolled confounding by occupational physical activity. It may be that the positive effects of leisure-time physical activity are confined to training effects on the cardiovascular system among people with little or no occupational physical activity. A recent case-control study of myocardial infarction supports this hypothesis (22). Training effects can be achieved with a short duration of leisure-time physical activity of less than 30 minutes per day and can be expected to lead to a net reduction in average heart rate during other daily activities and during rest, a beneficial effect according to the hemodynamic theory of atherosclerosis (33). None of these beneficial training effects would be expected in people who are already engaged in physically demanding work, and there is some empirical evidence for an interaction between leisure-time physical activity and occupational physical activity (86). This finding corresponds to the observation that depressed endothelial function and shear stress regulation in people with CVD are more amenable to improvement through exercise training than normal endothelial function is in the young and healthy (87). Since recommended levels of leisure-time physical activity are an order of magnitude smaller than typical levels of occupational physical activity, one would also not expect a substantially increased health risk due to conditioning leisure-time physical activity among men performing physically demanding work; in fact, increases in leisure-time physical activity were not associated with IMT change in our study (P=0.745, see appendix B).
Although our observed negative correlation between leisure-time physical activity and occupational physical activity is in line with the observation in the Surgeon General's report that blue-collar workers' participation in leisure-time physical activity is relatively low, we would question the report's promise of an agenda that states as its only goal to increase leisure-time physical activity among blue-collar workers (3,88). For some workers, targeted worksite exercise programs may be beneficial for increasing fitness or reducing weight (88). However, among the nearly 30% of workers already exceeding safe limits for energy expenditure at work, additional aerobic exercise outside work may cause more fatigue and overexertion injuries of the musculoskeletal system without any proved benefits for the cardiovascular system. Instead, these workers may be in need of nonaerobic activities such as nonstrenuous stretching exercises to maintain flexibility and more recovery time between work periods to allow their heart rates and blood pressure to fall to sustainable levels (89). Ergonomic interventions that change work methods may help to reduce aerobic strain in specific occupations (90). Therefore, instead of trying to increase leisure-time physical activity indiscriminately among blue-collar workers, intervention research should be directed to (i) develop feasible screening programs to identify workers at risk, (ii) identify ways of lowering the physical demands for workers who still expend unsustainable amounts of energy at work, (iii) determine safe work-rest schedules, and (iv) guide regulators and stakeholders in the creation of workplaces promoting both cardiovascular and musculoskeletal health. In addition, it seems necessary to (v) empirically determine safe levels of energy expenditure for workdays exceeding 8 hours (91) and to (vi) find ways to provide aging workers and workers who have preexisting IHD or stenosis of the carotid (or other) arteries with jobs that do not expose them to an increased risk of progression of their atherosclerotic disease. Research is also needed to (vii) determine the proportion and costs of CVD that are attributable to excessive levels of occupational physical activity and could be prevented through job redesign. Such data will be helpful in allocating the necessary resources to this field of research and to the respective worksite health promotion and disease prevention programs.
With regard to other known cardiovascular risk factors, our study confirms age, elevated LDL cholesterol, systolic blood pressure, and heavy smoking as independent risk factors for the progression of atherosclerosis, but it failed to show an independent association with leisure-time physical activity, BMI, blood glucose, HDL cholesterol, personal income, or psychosocial job factors. [See regression coefficients in appendix B.] Several studies have linked exposure to specific physical and psychosocial job factors with cardiovascular disease and mortality (92,23-25,27), but, similar to studies on occupational physical activity, the findings have been both positive and negative (23,93-95) and typically did not adjust for occupational physical activity and leisure-time physical activity. Further investigations are needed to disentangle the interdependent relationships between occupational physical activity, leisure-time physical activity, and other factors as predictors of the progression of atherosclerosis and CVD. It should be noted that our method of adjusting for all of these covariates together in one model might have led to overadjustment with respect to some covariates because they may represent intermediate pathway variables between, for example, socioeconomic status and CVD.
The findings of this study await confirmation in comparable prospective studies of IMT in other populations. It is also necessary to investigate further the associations between occupational physical activity and manifest CVD or mortality. However, studies of symptomatic chronic disease outcomes often fail to detect associations with work exposures because of disease-based selection out of strenuous jobs by those affected. The study of preclinical outcomes such as IMT changes is less likely to be influenced by these selection effects and therefore may be more important in the determination of causal relationships and appropriate interventions.
Implications for prevention and medical practice
The findings of this study may have important implications for the practice of occupational and rehabilitative medicine. Primary CVD prevention efforts may benefit from a reduction in the caloric demands of physically demanding jobs. Jobs in agriculture, forestry, commercial fishing, manufacturing, or sales carry an especially high risk of leading to excessive aerobic strain. Secondary and tertiary prevention efforts may be indicated for persons who do not have a sitting desk job. Specifically, %VO2max or %VO2Res should be routinely assessed in such workplaces during the placement of new employees and in the process of designing work modifications for employees returning to work after being diagnosed with IHD. Both bicycle ergometry and ambulatory electrocardiography may be warranted for workers with CVD (96). Indirect assessments of relative energy expenditure based solely on heart rate (HR) measurements at work have become feasible through commercially available portable heart rate monitoring equipment and may be sufficient for assessments of workers without CVD.
Because it has been shown that %VO2Res is highly correlated with the percentage of heart rate reserve [%HRR = (HRwork - HRrest)/(HRmax - HRrest) × 100%] across the aerobic fitness spectrum (67,48), it is possible to estimate %HRR (67), HR-estimated energy expenditure (HREEE), and %VO2Res using recently validated procedures (97), in combination with standard procedures estimating maximum heart rate from resting heart rate and age (98), without the necessity of employing laboratory-based gas exchange analyses or bicycle ergometer tests.
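As an illustration of the %HRR computation described above, here is a minimal Python sketch. The %HRR formula is the one quoted in the text; the age-based estimate of maximum heart rate used here (the Tanaka formula) is a stand-in assumption, since the paper cites its own reference (98) for that step.

```python
def percent_heart_rate_reserve(hr_work, hr_rest, hr_max):
    """%HRR = (HRwork - HRrest) / (HRmax - HRrest) * 100%."""
    return (hr_work - hr_rest) / (hr_max - hr_rest) * 100.0

def estimated_hr_max(age_years):
    # One common age-based estimate (Tanaka et al.); the paper's own
    # procedure for estimating HRmax may differ, so treat this as a
    # placeholder assumption.
    return 208.0 - 0.7 * age_years

# Hypothetical worker: age 50, resting HR 70, working HR 120 beats/min
hr_max = estimated_hr_max(50)                        # ~173 beats/min
print(percent_heart_rate_reserve(120, 70, hr_max))   # ~48.5 %HRR
```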
It is best to use these relative measures of energy expenditure because they take individual differences in VO2max into account. VO2max has been found to differ markedly by gender, age, health status, and other factors (63). Relative measures correlate better with fatigue, heart rate elevations, and the related health consequences of aerobic strain at work than absolute measures do, as shown by others (99) and, for the first time for atherosclerosis, in this study.
Concluding remarks
In conclusion, this study demonstrates for the first time that high energy expenditure at work is associated with an accelerated progression of carotid atherosclerosis even after control for virtually all known cardiovascular risk factors, including leisure-time physical activity, aerobic fitness, socioeconomic status, and psychosocial job factors, which have rarely been controlled simultaneously. Older workers, workers with preexisting IHD, and workers with carotid stenosis appear to be especially vulnerable to the atherogenic effects of increasing levels of energy expenditure. The findings are consistent with the hemodynamic theory of atherosclerosis.
The results of this study do not support the notion that heavy physical labor has ceased to be a potential health hazard in the so-called modern service economy. To the contrary, they show that a substantial proportion of aging men, and over 50% of those with IHD in this sample, were still exposed to excessive caloric job demands according to the currently recommended maximum levels for %VO2max.
Job evaluations using ambulatory heart rate monitoring to estimate %VO2max or %VO2Res should be considered for every job requiring physical effort other than mostly sitting at a desk and for the evaluation of work modifications for workers with CVD.
Regulatory statutes dealing with worktime and rest schedules need to assure that workers are protected from excessive aerobic strain even if individual monitoring is not available. Such prevention measures are especially needed for older workers with age-or disease-related reduced cardiorespiratory fitness, existing IHD, or known atherosclerosis.
Appendix A
Characteristics of the study population and the distribution of the independent variables by IHD (ischemic heart disease) status at baseline (N=612). (FIM = Finnish marks, %VO2max = relative aerobic strain, %VO2Res = percent of oxygen uptake reserve)

Appendix B

Established cardiovascular risk factors, other covariates, and change in intima media thickness

The predictive role of established cardiovascular risk factors and other covariates in the progression of atherosclerosis was determined in the same multiple regression model 4 (fully adjusted) as used in tables 3-6 in the text. All 21 covariates were examined simultaneously in the same model that included occupational exposure measured as the percentage of oxygen uptake reserve at baseline.
In this model, statistically significant associations with the change in the ln-transformed maximum intima media thickness (IMT) were observed for the following variables: age, baseline IMT, participation in the placebo group of an unrelated trial with lipid-lowering medication, proportion of follow-up time under lipid-lowering medication, low-density lipoprotein, systolic blood pressure, and current regular smoking. The remaining 14 covariates, including known predictors of cardiovascular disease such as body mass index, health-enhancing leisure-time physical activity, and plasma fibrinogen, were not statistically significant. Regression coefficients and P-values are shown in the table below.
Subsoiling and Sowing Time Influence Soil Water Content, Nitrogen Translocation and Yield of Dryland Winter Wheat
Dryland winter wheat in the Loess Plateau is facing a yield reduction due to a shortage of soil moisture and delayed sowing. A field experiment was conducted on the Loess Plateau in Shanxi, China from 2012 to 2015 to study the effect of subsoiling versus conventional tillage and of different sowing dates on soil water storage, nitrogen (N) accumulation and remobilization, and the yield of winter wheat. The results showed that subsoiling significantly improved soil water storage (0-300 cm soil depth) and increased the contribution of N translocation to grain N and grain yield (17-36%). Delaying the sowing time reduced the soil water storage at sowing and the winter accumulated growing degree days by about 180 °C. The contribution of N translocation to grain yield was greatest in the glume + spike, followed by the leaves, and smallest in the stem + sheath. Moreover, there was a positive relationship between N accumulation and translocation and the soil moisture in the 20-300 cm range. Subsoiling during the fallow period combined with the medium sowing date was beneficial for improving soil water storage and increased N translocation to the grain, thereby increasing the yield of wheat, especially in a dry year.
Introduction
Wheat is the dominant crop in the Loess Plateau, accounting for 35% of the total planting area [1,2]. Dryland wheat production in the Loess Plateau is highly dependent on the timing and extent of rainfall, and most of the precipitation is concentrated in the summer fallow period. Moreover, significant climatic changes have been observed in this area: average precipitation is decreasing by 3 mm and average temperature is increasing by 0.6 °C per decade, with sudden incidences of drought [3,4]. These climatic changes are causing unstable wheat production in the dryland areas of Shanxi province owing to the extreme variation in precipitation and the low water retention capacity of the soil [5]. In the Loess Plateau, a short summer fallow of about three months is practiced between the harvest of the previous winter wheat in late June and the planting of the succeeding crop in late September to conserve soil water. Available soil moisture at sowing time depends on the tillage method used during the fallow period [6-8]. Thus, improving soil water conservation is crucial to increasing the yield of dryland wheat.
The sowing date also has a significant effect on the yield response of wheat [9,10]. Sowing time influences the accumulated temperature before winter, affects the nutrient uptake and transport of the plants, and ultimately affects the yield [11]. The sowing date strongly influences the use of environmental resources, and optimal sowing can make full use of resources such as pre-winter light, heat, nutrients, and water to develop strong seedlings and promote yield formation [12]. Under irrigation, the sowing time can be adjusted, whereas in rain-fed dryland farming the sowing time might be delayed due to the scarcity of residual soil moisture under erratic rain conditions [13]. Furthermore, the conventional tillage method also results in excessive soil disturbance and drying of the surface soil. Therefore, saving the residual soil moisture from precipitation during the fallow season and adjusting the sowing date become key determining factors for the yield of dryland winter wheat [6].
The yield formation of rain-fed winter wheat is affected to different extents under early and late sowing [13]. Sun et al. [14] studied the impact of different sowing dates on yield in the North China Plain and found that the yield of wheat sown after October 10 was significantly reduced with further delay of the sowing date. Zhou et al. [5] showed that late planting could increase the pre-anthesis accumulation of nitrogen in the vegetative organs and the contribution rate of nitrogen to the grain. In contrast, Qu et al. [15] showed that the pre-anthesis transport and translocation of nitrogen in the vegetative organs and the contribution rate of nitrogen to the grain decreased with the delay of the sowing date, while the grain yield increased significantly under delayed sowing combined with increased density.
Subsoiling has previously proved a promising technique for increasing water storage, reducing water loss, enhancing water availability, and saving energy, as well as increasing wheat yield. Liu et al. [16] showed that subsoiling improved the soil moisture content in the 0-160 cm soil layer before sowing compared with traditional tillage. Wang et al. [17] showed that subsoiling during the fallow period improved the soil water storage of the 0-180 cm layer by 9-24 mm before sowing. Wang et al. [18] showed that subsoiling can effectively accumulate precipitation during the fallow period, significantly increasing the soil water storage of the 0-200 cm layer before sowing, improving water use efficiency by 39%, and finally increasing the yield.
In addition, different tillage methods can also affect the uptake and accumulation of nitrogen in plants by affecting soil moisture. Zheng et al. [19] have shown that subsoiling can increase the nitrogen accumulation of wheat after jointing and the translocation of nitrogen to the grain during the maturity period, and thereby obtain a high grain yield. Wang et al. [20] also reported that subsoiling can improve the efficiency of nitrogen utilization and increase wheat yield by enhancing the distribution and translocation of nitrogen from vegetative organs to the grain after flowering. The amount of nitrogen in the vegetative organs before flowering and its contribution to the grain were found to be highest in the leaves, followed by the glume + cobs, and lowest in the stem + leaf sheaths. Furthermore, different tillage practices increased the soil moisture, which increased the amount of nitrogen uptake compared with no-tillage and in turn increased the final yield [21].
It can be seen that tillage and sowing time can affect the translocation of plant nutrients and thus the yield. What remains to be explored is how to adjust the sowing time to increase production once water storage has been achieved. Therefore, the aim of the present research was to explore the effects of different sowing times under subsoiling on the source-sink ratio, the accumulation and translocation of N and its contribution to the yield of dryland wheat, and the soil water storage, in order to provide a theoretical basis for increasing yields in dryland.
Site Characteristics and Description
The experiment was carried out from 2012 to 2015 at the dryland wheat experimental station of Shanxi Agricultural University, located at Wenxi (35°20′ N, 111°17′ E), Shanxi Province, China. Rain-fed agriculture is common in this area because irrigation is unavailable. Winter wheat is usually planted in early October, and no irrigation was supplied. After the harvesting of wheat, the field was left fallow until the next sowing.
Meteorological Conditions
The experimental area is hilly arid land with a semiarid climate typical of the northeast Loess Plateau, where 60%-70% of the precipitation occurs in the summer months during the fallow season (July-September). Precipitation during the experimental years 2012-2015 is shown in Table 1. The average rainfall of the site from 2009 to 2014 was 487.6 mm. The annual precipitation in the 2012-2013 growing season was therefore lower than usual, with 188.4 mm of rainfall during the fallow period and 167.3 mm in the growth period. The total precipitation during 2013-2014 was close to the average annual precipitation, of which 283.7 mm fell during the fallow period and 242 mm during the growth period.
The accumulated growing degree days (GDD) at wintering, jointing, booting, and maturity were calculated by using the following equation [22]:

GDD = Σ (i = 1 to n) [(Tmax,i + Tmin,i)/2 - Tb]

where n is the number of days taken for the completion of a particular growth phase, Tmax and Tmin are the daily maximum and minimum air temperatures respectively in °C, and Tb is the minimum base temperature (threshold temperature) for the crop (°C); for wheat, Tb = 4.0 °C.
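For clarity, the GDD calculation can be expressed as a short Python sketch. It implements the sum above as written; whether negative daily contributions should be clamped at zero is not specified in the text, so the example does not clamp them, and the temperature values shown are hypothetical.

```python
def accumulated_gdd(daily_tmax, daily_tmin, t_base=4.0):
    """Accumulated growing degree days over a growth phase:
    GDD = sum_i [(Tmax_i + Tmin_i)/2 - Tb], with Tb = 4.0 deg C
    for wheat as stated in the text."""
    return sum((hi + lo) / 2.0 - t_base
               for hi, lo in zip(daily_tmax, daily_tmin))

# Three hypothetical autumn days (deg C)
print(accumulated_gdd([18.0, 15.0, 12.0], [6.0, 4.0, 2.0]))  # 16.5
```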
Field Trial Management and Experimental Design
The experiment consisted of two tillage methods, i.e., subsoiling (SS) and conventional tillage (CT), and three sowing dates, i.e., early (T1), conventional (T2), and late (T3) sowing times. The experiment was arranged as a split-plot design in a randomized complete block design (RCBD), taking tillage methods as main plots and sowing dates as sub-plot factors, and each treatment was replicated three times. The stubble of the former wheat crop (20-30 cm), which was left in the field, was shredded, followed by tillage in mid-July. Tillage practices were performed during the fallow season. Subsoiling (SS) was conducted with a subsoiling chisel plow at a depth of 30-40 cm on the 15th of July in 2012, 2013, and 2014. Local conventional tillage (CT) was taken as the control. Rotary tillage was used to crumble large lumps and level the fields on 25 August 2012 and 23 August in 2013 and 2014. The area of each plot was 150 m² (50 m × 3 m).
The wheat variety 'Yunhan 20410', provided by the Agriculture Bureau of Wenxi, was sown on three different dates: 20 September (T1, early sowing), 1 October (T2, conventional normal sowing), and 10 October (T3, late sowing) in 2012, 2013, and 2014. Seeds were sown at a density of 2.25 × 10^6 seeds ha-1 in rows spaced 30 cm apart. Before planting, 150 kg N ha-1 (urea, 46%), P2O5 (150 kg ha-1), and K2O (150 kg ha-1) were broadcast evenly on the surface of the plots. No topdressing fertilizer was applied during the growth period. Basic soil properties were determined from the 0-20 cm soil layer, and the soil was classified as silty clay loam. Soil properties recorded on 10 June 2012 were: organic matter 11.9 g kg-1, available nitrogen 38.6 mg kg-1, and available phosphorus 14.6 mg kg-1; soil properties recorded on 10 June 2013 were: organic matter 10.2 g kg-1, available nitrogen 39.3 mg kg-1, and available phosphorus 16.6 mg kg-1.
Soil Moisture Content
Soil samples were collected from the 0-300 cm soil depth with a soil drill at every 20 cm soil layer. The samples were weighed and dried at 105 °C to constant weight to determine the soil water content.

Dry matter and total nitrogen content of the plants were measured at the overwintering, jointing, booting, flowering, and maturity stages. Twenty whole plants were sampled from each plot; at the jointing and booting stages they were divided into two parts (stems and leaf sheaths), at the flowering stage into three parts (leaves, stems, and ears), and at the maturity stage into four parts (leaves, stems + sheaths, glumes + spikes, and grains). The samples were kept at 105 °C for 30 min and at 75 °C until constant weight, after which they were weighed and ground, and the total nitrogen content was determined by using the Kjeldahl method. The parameters related to translocation, accumulation, and remobilization of nitrogen within the wheat plant were calculated by using the following equations:

Pre-anthesis N translocation = N content in vegetative organs at anthesis - N content in vegetative organs at maturity

Contribution of pre-anthesis N to grain N (%) = (pre-anthesis N translocation)/(grain N content at maturity) × 100

Post-anthesis N accumulation = N content of the whole plant at maturity - N content of the whole plant at anthesis

Contribution of post-anthesis remobilized N to grain N (%) = (post-anthesis remobilized N)/(grain N content at maturity) × 100
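The four nitrogen indices above are simple arithmetic on the measured N contents; the following Python sketch implements them, with hypothetical input values chosen only so the arithmetic reproduces the roughly 75%/25% pre-/post-anthesis split reported in the results.

```python
def nitrogen_indices(n_veg_anthesis, n_veg_maturity,
                     n_plant_anthesis, n_plant_maturity,
                     n_grain_maturity):
    """N translocation/accumulation indices from the four equations
    above (all N amounts in kg N per ha)."""
    pre = n_veg_anthesis - n_veg_maturity        # pre-anthesis N translocation
    post = n_plant_maturity - n_plant_anthesis   # post-anthesis N accumulation
    return {
        "pre_anthesis_translocation": pre,
        "pre_anthesis_contribution_%": 100.0 * pre / n_grain_maturity,
        "post_anthesis_accumulation": post,
        "post_anthesis_contribution_%": 100.0 * post / n_grain_maturity,
    }

# Hypothetical plot (kg N/ha): vegetative N 150 at anthesis, 60 at maturity;
# whole-plant N 150 at anthesis, 180 at maturity; grain N 120 at maturity
print(nitrogen_indices(150.0, 60.0, 150.0, 180.0, 120.0))
# -> pre = 90 (75% of grain N), post = 30 (25% of grain N)
```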
Yield and Yield Component
Plants from 20 m² were harvested from each plot, and the grains were air-dried to determine plot yield at 12% moisture content; economic output was calculated.
Statistical Analysis
Data were analyzed using SAS 9.0 (SAS Corp., Cary, NC, USA) software to determine statistical significance; differences between treatments were analyzed by the LSD (least significant difference) test at p < 0.05, and graphs were constructed using Microsoft Excel 2003 and SigmaPlot 12.5 (Systat Software Inc., San Jose, CA, USA).
Effects of Tillage Practices and Sowing Timing on Soil Water Storage
Soil water storage in the 0-300 cm layer was greater under subsoiling than under conventional tillage (Figure 1). Under subsoiling, the soil water storage in the 0-300 cm soil layer increased by 35, 55, and 68 mm in 2012-2013 and by 40, 35, and 52 mm in 2013-2014 at the T1, T2, and T3 sowing dates respectively, as compared to conventional tillage.
Furthermore, soil water storage was highest under early sowing as compared to medium and late sowing (Figure 1). Subsoiling especially increased the soil moisture in the 0-160 cm and 200-240 cm soil layers in 2012-2013, and in the 0-160 cm and 220-300 cm soil layers during 2013-2014. The maximum soil water storage in 0-300 cm during all three years was observed at early sowing. With the delay of the sowing date, soil water storage decreased as compared to early and timely sowing. Soil water storage was significantly lower under late planting than under the early and medium sowing dates, especially in the 60-140 and 240-300 cm soil layers during 2013-2014.
Effects of Subsoiling and Sowing Time on the Number of Tillers at Different Stages of Winter Wheat and the Effect of Sowing Time on Accumulated Growing Degree Days
Accumulated growing degree days decreased with the delay of sowing time. The accumulated growing degree days of late sowing were reduced by 379 °C and 172 °C relative to the early and conventional sowing times of winter wheat, respectively (Figure 2). The number of tillers under subsoiling was significantly higher than under conventional tillage. The highest number of tillers was recorded at the conventional sowing time (T2), although during the wintering stage the difference from the early sowing time (T1) was not significant, whereas at the jointing, booting, and maturity stages the number of tillers was significantly higher in T2 than in T1 and T3. Under the medium sowing time (T2), subsoiling resulted in an average 13% increase in tiller number as compared to conventional tillage. Late sowing resulted in 17% and 15% reductions in the number of tillers as compared to the medium sowing time under subsoiling and conventional tillage respectively. Early sowing (T1) and the conventional medium sowing (T2) times were favorable to the formation of more tillers in winter, but the medium sowing was more favorable to the formation of an effective spike number, thus increasing the yield.
Pre-Anthesis N Translocation and Post-Anthesis N Accumulation
The contribution rate of N translocation in the plant before anthesis (about 75%) was greater than the contribution rate of N accumulation after anthesis (about 25%) to the grain N (Table 2). Under subsoiling, the pre-anthesis N translocation was significantly increased: pre-anthesis nitrogen translocation increased by 21-25 kg ha-1, whereas the contribution rate to grain N was non-significantly increased by 2%-7% by subsoiling as compared to conventional tillage. It can be seen that under subsoiling in the fallow period, the pre-anthesis N translocation, its contribution to grain N, and the amount of N accumulation after anthesis were higher at the medium sowing time than at the early and late sowing times. At the late sowing time, the N translocation, the contribution of N translocation, and the post-anthesis N accumulation decreased as compared to the medium sowing time, whereas the post-anthesis contribution of N accumulation to grain N increased.
Pre-Anthesis N Accumulation and Translocation in Various Plant Parts
The accumulation and translocation of N before flowering and its contribution to grain were highest in the stem + leaf sheath and lowest in the glume + spike (Table 3). In the leaf, N accumulation, N translocation, and the contribution to grain were less than in the stem + sheath and higher than in the glume + spike. Compared with the control, subsoiling during the fallow period significantly increased the accumulation and translocation of N in plant parts and its contribution to grain before flowering. Nitrogen accumulation was increased by 4-7 kg ha-1, 13-18 kg ha-1, and 3-4 kg ha-1, and the amount of nitrogen translocation was increased by 6-7 kg ha-1, 12-14 kg ha-1, and 3-4 kg ha-1 in the leaves, stems + sheaths, and glume + spike respectively.
Accumulation and translocation of N in all plant parts were highest under the medium sowing time, while late and early sowing significantly decreased N accumulation and translocation in the leaf, stems + sheaths, and glume + spike (Table 3). The contribution rate to the grain was highest at the medium sowing time for the leaf and stem + sheath. Under subsoiling conditions, the early and late sowing times decreased the contribution of the leaf and stem + sheath to the grain as compared to the medium sowing time, whereas there was no significant difference between sowing times for the glume + spike. It can be seen that the medium sowing time is beneficial for the translocation of N in the leaves and stems + sheath under subsoiling during the fallow period.
Correlation Coefficients between Soil Moisture and Nitrogen Accumulation and Translocation in Plant Parts before Anthesis
Under subsoiling and different sowing dates, the soil water storage in the 0-300 cm soil layers during the fallow period was positively correlated with the accumulation and translocation of nitrogen in plant parts before flowering (Table 4). A significant positive correlation was found between nitrogen accumulation in the leaf and glume + spike and soil moisture in the 100-200 cm soil layer. The correlation between N accumulation in the stems + sheaths and soil water storage was significant in the 0-300 cm soil layer. The N translocation in the leaf and glume + spike was significantly related to the soil water storage in the 20-300 cm soil layer. The N translocation of the glume + spike was closely related to the 40-300 cm soil layers.
Relationship between N Translocation and Grain Production
A significant linear positive correlation was found between grain yield and pre-anthesis N translocation (Figure 3). For N translocation in the leaves and in the glume + spike, the fitted equations were y = 89.933x + 2834.6 (r = 0.887) and y = 179.15x + 2867.5 (r = 0.876) respectively. There was also a significant relationship between the N translocation in the stem + sheath and grain yield, with the fitted equation y = 40.92x + 2551.6 (r = 0.842).
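Fits of this form (a least-squares line plus a Pearson r) can be reproduced with a few lines of Python; the paired observations in the sketch below are hypothetical, not the study data.

```python
import numpy as np

# Hypothetical paired observations: pre-anthesis N translocation x (kg/ha)
# and grain yield y (kg/ha)
x = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
y = np.array([4600.0, 5150.0, 5480.0, 6010.0, 6390.0])

slope, intercept = np.polyfit(x, y, 1)   # least-squares fit y = slope*x + intercept
r = np.corrcoef(x, y)[0, 1]              # Pearson correlation coefficient
print(f"y = {slope:.2f}x + {intercept:.1f} (r = {r:.3f})")
```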
The Contribution of Subsoiling to Increase Nitrogen Translocation
Subsoiling during the fallow period significantly increased the yield by 19%-36% (2012-2013), 17%-22% (2013-2014), and 20%-24% (2014-2015) compared with conventional tillage (Table 5). Under subsoiling, the maximum yield was recorded at the medium sowing time. Under conventional tillage, the highest yield in 2012-2013 was attained in the early sowing treatment, but the difference between the early and medium sowing treatments was not significant; in 2013-2014 and 2014-2015, however, the maximum yield was attained at the medium sowing time, which was 7%-11% and 5%-9% higher than under early and late sowing. The contribution of N translocation to grain yield was greatest in the glume + spike, followed by the leaf, and smallest in the stem + sheath. The medium sowing time significantly increased the contribution of the amount of N in each organ to the yield as compared to early and late sowing. It can be seen that subsoiling during the fallow period combined with the medium sowing time (1 October) was beneficial to the pre-anthesis N translocation in plant organs and the contribution of N to the yield, and the effect was more prominent in the drier year (2012-2013).
Discussion
The results of this experiment showed that subsoiling during the fallow period significantly increased the soil water storage in the 0-300 cm soil layer before sowing, especially in 2012-2013 (Figure 1). Previous reports also showed that subsoiling during the fallow period promoted the water storage capacity in the deeper horizons of the soil profile in dryland areas [8,23,24]. Hou et al. [25] showed that subsoiling during the fallow period improved the water storage capacity of the 0-200 cm soil before sowing in a dryland wheat field. Fu et al. [26] reported that subsoiling accumulated 50% of the summer rainfall in dryland wheat fields and increased the soil water storage capacity by 76.2 mm in the 0-200 cm soil before sowing. Mao et al. [27] showed that subsoiling during the fallow period in a dryland wheat field increased the soil water storage capacity of 0-300 cm by 21 mm and increased the yield by 5.5%. Ren et al. [28] found that subsoiling during the fallow period increased the water storage capacity of the 0-300 cm soil in dryland wheat (especially in the 80-160 cm soil layer). These results indicate that subsoiling can accumulate precipitation during the fallow period and increase the soil moisture reserve, favoring timely sowing and subsequent germination of dryland winter wheat [8,29].
Adequate soil moisture is conducive to the growth and development of wheat, which directly affects the accumulation and translocation of N and thus the yield. The results of this experiment showed that subsoiling during the fallow period can significantly increase the amount of N in various organs before flowering, especially in the leaves and stems + sheaths (Table 3). The contribution rate of N translocation to the grain before flowering and the nitrogen accumulation after flowering were significantly improved. This may be due to the improvement of effective soil water storage, which promotes water and N absorption. Previous studies have shown that subsoiling increases the uptake of N after anthesis and further improves the N accumulation of the grain by enhancing the absorption of water and nutrients by the roots and increasing the supply of N metabolism substrates in the aboveground parts [15,20]. Zheng et al. [19] showed that subsoiling + rotary tillage and subsoiling + strip rotary tillage significantly increased N accumulation at flowering and N translocation from vegetative organs to the grain after flowering, compared with rotary tillage and strip rotary tillage, thereby increasing yield. Wang et al. [30] showed that subsoiling increased N accumulation after flowering and its contribution to the grain by 50% and 38%, respectively.
On the basis of a suitable sowing date, the growth of the root system was promoted, further improving the absorption capacity of soil nitrogen and fertilizer in dryland wheat and promoting the vegetative and reproductive growth promoted the transport of N stored in the leaves, stems and sheaths to the grain, which increased the amount of N before the flowering and the contribution rate to the grain, thereby increasing the yield [8]. Ren et al. [31] showed that the soil storage capacity of 0-300 cm was positively correlated with the accumulation of N in the vegetative organs and the amount of the biomass before anthesis.
Subsoiling during the fallow period significantly increased the yield (17%-36%) compared with the conventional tillage (Table 5).Wang and Shangguan [7] reported that wheat yield in the Loess Plateau region is sensitive to soil water content at plantation and grain yield increased linearly with the soil water at planting, under subsoiling, and other tillage methods.Subsoiling increased yield by improving N accumulation and translocation to grain.The amount of N mobilization in stem + sheath had a significant effect on grain yield [32].The activities of key nitrogen metabolism enzymes and intermediate products of nitrogen assimilation were significantly higher by subsoiling than the rotary tillage and conventional tillage methods [20,33].Subsoiling tillage had a higher translocation of N from vegetative organs to grain, higher absorption of N after flowering and higher contribution from absorbed N after flowering to grain than the other two tillage methods.As compared to subsoiling, the amount of translocation of N, translocation efficiency and N absorption after flowering, and N accumulation and distribution rate in grain were lower under rotary tillage.Therefore, subsoiling tillage could promote N assimilation and improve nitrogen use efficiency to attain high-efficient and high-yield of wheat [20].
Present results indicated that soil water in 40-200 cm showed a highly significant correlation with N accumulation and translocation in the stem + sheath and N translocation in leaf and glume + spike (Table 4).This may be related to the distribution of wheat roots in the soil and consistent with the results of Zhang et al. [32].The soil water content at sowing stage in 0-300 cm depth was positively correlated to the N mobilization amount before anthesis and N accumulation amount after anthesis [32].Subsoiling improves the depth of rooting in deeper soil by minimizing the compaction of soil and allowing the accumulation of water reserve, which in turn improves water and nutrients uptake and drought resistance [34].
Suitable sowing is the main measure to match the growth and development of wheat and the local climate, which is conducive to achieving a stable yield [14].This experiment showed that by delaying the sowing time for 10 days the accumulated growing degree days before winter is reduced by about 180 • C. Xu et al. [12] and other studies have shown that with the delay of the sowing date, the accumulated temperature in winter was reduced, which significantly affected the growth of wheat before winter, and decreased the number of tillers.For each 6-day delay in sowing date, the average daily temperature from sowing to emergence decreased by 1.0-2.5 • C, and the number of tillers decreased by 100-150 million ha −1 .In the present study, both early sowing and medium sowing were beneficial for the formation of more group tillers in winter, but the medium sowing was more favorable for the formation of an effective number of tillers, so as to increase the yield.Sun et al. [14] reported that the delayed sowing time would affect the growth duration and reduced the dry matter mobilization efficiency of winter wheat as compared with the medium sowing time.Tillering is a determining factor for optimum wheat yield because of excessive production of tillers under well-fertilized soil ended in increased competition for light and resources leading to tillers mortality and reduced the number of effective tillers and grain yield [35].
Sowing at the appropriate time can increase the effective accumulative temperature, prolong the effective growth period of wheat, and increase the accumulation of N in grains [15,36,37].The results of the present experiment showed that medium sowing time significantly increased the contribution of the amount of N in each organ to the yield as compared to early and late sowing (Table 5).Late and early sowing significantly decreased the N translocation.Accumulation and translocation of N in all plant parts were highest under medium sowing time, while the late and early sowing significantly decreased the N accumulation and translocation in leaf, stems + sheaths, glume + spike (Table 3).Medium and late sowing time significantly increased the dry matter accumulation in the vegetative organs before flowering to the grain, thereby increasing yield.Under subsoiling, the normal sowing timing significantly increased the yield by 13%-16% and 5%-10%, in 2012-2013 and 2013-2014 respectively as compared to early and late sowing.The delay in sowing tended to decrease the oncoming heading and flowering stage and shorten the duration of the grain filling stage, which caused less dry matter mobilization efficiency and reduced biomass and grain yield [14,38].
Early and late sowing significantly decreased the N accumulation and translocation before anthesis (Table 2).The contribution rate of N to the grain after anthesis was decreased at early and medium sowing, whereas the contribution rate of N accumulation for grain was significantly improved by late sowing at post-anthesis.This may be because late sowing increases the proportion of N translocation from the glume + spike to grain, and improves the ability of the plant to use already absorbed N for grain production.
Ding et al. [39] showed that the accumulation of N in leaves and stems was significantly linearly correlated with grain yield at the flowering stage, and N translocation from the leaves and the stem + sheath was significantly linearly positively correlated with grain yield.The results of this experiment showed that the relationship between the amount of N in various organs before flowering and grain yield was consistent with a significant linear positive correlation.There was a significant relationship between the amount of N translocation in the leaf, stem + sheath, and glume + spike and the grain yield.For every 1 kg ha −1 of N transported from the leaves, the yield was increased by 109-198 kg ha −1 ; for every 1 kg ha −1 of N transported by the stem + sheath, the yield was increased by 58-179 kg ha −1 ; for each kg ha −1 N translocation from glume + spike, the increase of 233-302 kg ha −1 of grain yield could be achieved.
Figure 1 .
Figure 1.Effect of subsoiling and sowing date on soil water storage during 2012-2015.SS: Subsoiling during the fallow period; CT: Conventional tillage; T1: Early sowing date; T2: Timely sowing; T3: Late sowing.Values followed by different small letters indicate significant difference at 0.05 level.
Figure 1 .
Figure 1.Effect of subsoiling and sowing date on soil water storage during 2012-2015.SS: Subsoiling during the fallow period; CT: Conventional tillage; T 1 : Early sowing date; T 2 : Timely sowing; T 3 : Late sowing.Values followed by different small letters indicate significant difference at 0.05 level.
Table 1 .
Precipitation distribution of the experimental site from 2009-2015, during the fallow season and the growth stages of winter wheat (Source: Meteorological Observation of Wenxi County, Shanxi Province, China).
Fallow period: from the last 10 days of June to the last 10 days of September; Sowing-wintering: from the first 10 days of October to the last 10 days of November; Wintering-Jointing: from the first 10 days of December to the first 10 days of April in the following year; Jointing-Anthesis: from the middle 10 days of April to the first 10 days of May; Anthesis-Maturity: the middle 10 days of May to the middle 10 days of June.
Table 2 .
Effect of subsoiling and sowing dates on pre-anthesis nitrogen translation and post-anthesis nitrogen accumulation to grain nitrogen.
Table 3 .
Effect of subsoiling in the fallow period and different sowing date on nitrogen accumulation, translocation, and contribution ratio to the grains of wheat before anthesis.
Table 4 .
Correlation coefficients between soil water storage at different soil depth and nitrogen accumulation and translocation in different organs before anthesis.
Table 4 .
Correlation coefficients between soil water storage at different soil depth and nitrogen accumulation and translocation in different organs before anthesis.
Table 5 .
Effect of subsoiling at different sowing dates on the contribution of N translocation before anthesis on yield.Different letters in the same column indicate significant difference at p < 0.05.SS, subsoiling; CT, conventional tillage; ∆Y, changes in grain yield; ∆NT, changes in N translocation; Y/ NT, contribution of N translocation to grain yield. | 2019-04-03T13:09:37.120Z | 2019-01-16T00:00:00.000 | {
"year": 2019,
"sha1": "7111896848cc36fb33b01ce1f52f707b3b9bf6c5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/9/1/37/pdf?version=1548218390",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1f5e8469248c193760bc62de4e6702f4118bac32",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
250685822 | pes2o/s2orc | v3-fos-license | A new half-metallic ferromagnet K2Cr8O16 predicted by an ab-initio electronic structure calculation
The first-principles electronic structure calculation is carried out to predict that a chromium oxide K2Cr8O16 with the hollandite-type crystal structure should be a new half-metallic ferromagnet. We compare our results with recent experimental data which indicate the ferromagnetic-metal to ferromagnetic-insulator transition at T ~ 90 K, as well as the paramagnetic-metal to ferromagnetic-metal transition at T ~ 180 K. Based on the calculated electronic structures, we argue that the double-exchange mechanism is responsible for the observed saturated ferromagnetism and the formation of the incommensurate, long-wavelength density wave of spinless fermions caused by the Fermi-surface nesting may be the origin of the opening of the charge gap.
Introduction
Recently, it has been reported [1] that a chromium oxide K 2 Cr 8 O 16 of hollandite type shows a phase transition from the paramagnetic metal to ferromagnetic metal at T c 180 K by lowering temperatures, where the ferromagnetic state has a full spin polarization of 18 µ B per formula unit (f.u.) at low temperatures. In addition to this phase transition, it has also been reported [1] that another phase transition occurs from the ferromagnetic metal to ferromagnetic insulator at T MI 90 K, suggesting that the charge gap opens below T MI . However, no structural distortions associated with this metal-insulator transition (MIT) have been observed so far [1]. The mechanism of MIT of this material has therefore been a great puzzle.
The crystal structure of K 2 Cr 8 O 16 (see Fig. 1) belongs to a group of hollandite-type phases where one-dimensional (1D) double strings of edge-shared CrO 6 octahedra forms a Cr 8 O 16 framework of a tunnel structure, wherein K ions reside [2]. Cr ions are in the mixed-valent state of Cr 4+ (d 2 ) : Cr 3+ (d 3 ) = 3 : 1, and hence with 2.25 electrons per Cr ion.
In this paper, we perform the first-principles electronic structure calculations based on the generalized gradient approximation (GGA) in the density-functional theory in order to clarify the origins of ferromagnetism and MIT of K 2 Cr 8 O 16 . We thereby predict that the materials A 2 Cr 8 O 16 (A = K and Rb) belong to a new class of half-metallic ferromagnets [3]; i.e., the majority-spin electrons are metallic, whereas the minority-spin electrons are semiconducting with a band gap. We also show from the GGA and GGA+U [4] calculations that the doubleexchange mechanism is responsible for the observed saturated ferromagnetism. We then discuss possible mechanisms of the MIT and argue that the formation of an incommensurate, long- In (a), the primitive unit cell is also shown in the thin dotted lines. In (b), the symbols represent Γ(0, 0, 0), M(2π/a, 0, 0), X(π/a, π/a, 0), P(π/a, π/a, π/c), K 1 (0, 0, π(1/c + c/a 2 )), and K 2 (2π/a, 0, π(1/c − c/a 2 )), where K 1 and K 2 are equivalent.
wavelength spin and charge density wave (DW) due to Fermi-surface nesting may be the origin of MIT of this material.
Method of calculation
For the GGA calculations, we employ the computer code WIEN2k [5] based on the full-potential linearized augmented-plane-wave method. The spin polarization is allowed. The spin-orbit interaction is not taken into account. The GGA+U calculation [4] is also made to see the effects of on-site electron correlation U on the band structure. We assume the experimental crystal structure of K 2 Cr 8 O 16 observed at room temperature with the lattice constants of a = 9.7627 and c = 2.9347Å [2]. The Bravais lattice is body-centered tetragonal and the primitive unit cell (u.c.) contains four crystallographically equivalent Cr ions, one K ion and eight O ions, i.e., KCr 4 O 8 , as shown in Fig. 1.
Results of calculation
We first find that the ground state in GGA is fully spin-polarized with the magnetic moment of 9.000 µ B /u.c. in consistent with experiment [1]; the energy gain of 3.16 eV/u.c. is obtained by the spin polarization. The calculated density of states (DOS) is shown in Fig. 2 in a wide energy range covering over the O 2p and Cr 3d bands. We find three separate peaks in both the majority and minority spin bands, which are the O 2p peak, Cr 3d peak with the t 2g symmetry, and Cr 3d peak with the e g symmetry. The hybridization between the O 2p and Cr 3d bands is significantly large. The Fermi level is located at a deep valley of the t 2g majority-spin band while it is located in the energy gap between the O 2p and Cr 3d t 2g bands of the minority-spin band. Thus, the half metallicity of this material is evident in the calculated DOS.
We also calculate the orbital-decomposed partial DOS, ρ α (ε) (α = xy, yz, zx), in the Cr 3d t 2g region, where the two components are exactly degenerate, ρ yz (ε) = ρ zx (ε). The three t 2g orbitals are almost equally occupied by electrons in the paramagnetic state. In the ferromagnetic state, however, the d xy orbitals is almost fully occupied by electrons and therefore holes are only in the d yz and d zx orbitals. Also, the d xy component has a rather high peak-like structure at ∼0. 7 be clarified further if we observe the calculated band dispersion near the Fermi level. We find that a rather dispersionless narrow band of predominantly d xy character is located at ∼0.7 eV below the Fermi level, extending over a large region of the Brillouin zone. On the other hand, the dispersive t 2g bands of predominantly d yz and d zx character with strong admixture of the 2p z state of O(2) are located around the Fermi level. We thus have the dualistic situation where the essentially localized d xy electrons at ∼0.7 eV below the Fermi level interact with the itinerant d yz and d zx electrons of the bandwidth comparable with the intraatomic exchange energy of ∼1 eV, whereby the Hund's rule coupling gives rise to the ferromagnetic spin polarization via the double-exchange mechanism [6]. To support this further, we make the GGA+U calculation for the present material. We find that, as U increases, the d xy band shifts further away from the Fermi level, leaving essentially no We also calculate the Fermi surface of K 2 Cr 8 O 16 in the ferromagnetic state. There are 12 t 2g bands, 3 of which cross the Fermi level and form the semimetallic Fermi surfaces; i.e., the second and third bands (counted from the top) form the electron Fermi surfaces and the fourth band forms the hole Fermi surface. The wave functions at the Fermi surfaces have predominantly d yz and d zx character with large admixture of the O(2) 2p z states. We also find that there is a pair of the 1D-like parallel Fermi surfaces, which are seen to have a very good nesting feature. The nesting vector is aligned roughly along the Γ-K 1 direction and has the value q * (0, 0, 0.147)2π/c or (0, 0, 0.853)2π/c. Thus, the Fermi-surface instability corresponding to the wavenumber q * , leading to formation of the incommensurate, long-wavelength (with a period of ∼7c in the real space) DW, may be relevant with the opening of the charge gap in the present material. Note that the spin and charge DWs occur simultaneously with the same wavenumber q * since we have only the up-spin electrons.
To confirm the nesting features more precisely, we calculate the generalized susceptibility χ 0 (q) for the noninteracting band structure, where we find that the sharp peak structure at q * z = 0.295π/c and 1.705π/c remains strong, irrespective of the value of (q x , q y ), although there is a small variation in the (q x , q y ) plane. The true maximum appears at q * (π/a, π/a, q * z ), or around (π/a, π/a, q * z ) slightly deviating and splitting from (π/a, π/a, q * z ) in the (q x , q y ) plane. Thus, if we include the effects of electron correlations, the q-dependent susceptibility can diverge at this momentum q * , resulting in the formation of the incommensurate, long-wavelength charge and spin DW, which we hope will be checked by experiment in near future.
Details of our calculations will be published elsewhere [7].
Summary
We have made the first-principles electronic structure calculations and predicted that a chromium oxide K 2 Cr 8 O 16 of hollandite type should be a half-metallic ferromagnet. We have shown that the double-exchange mechanism is responsible for the observed saturated ferromagnetism. We have argued that the formation of the incommensurate, long-wavelength density wave of spinless fermions caused by the Fermi-surface nesting may be the origin of the opening of the charge gap. We hope that these predictions will be checked by further experimental studies. | 2022-06-28T02:28:37.889Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "7ac7513025bed0f13c4afed0a1e67533e2161eb8",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/200/1/012172/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7ac7513025bed0f13c4afed0a1e67533e2161eb8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
9094437 | pes2o/s2orc | v3-fos-license | Thioridazine: A Non-Antibiotic Drug Highly Effective, in Combination with First Line Anti-Tuberculosis Drugs, against Any Form of Antibiotic Resistance of Mycobacterium tuberculosis Due to Its Multi-Mechanisms of Action
This review presents the evidence that supports the use of thioridazine (TZ) for the therapy of a pulmonary tuberculosis infection regardless of its antibiotic resistance status. The evidence consists of in vitro and ex vivo assays that demonstrate the activity of TZ against all encountered Mycobacterium tuberculosis (Mtb) regardless of its antibiotic resistance phenotype, as well as in vivo as a therapy for mice infected with multi-drug resistant strains of Mtb, or for human subjects infected with extensively drug resistant (XDR) Mtb. The mechanisms of action by which TZ brings about successful therapeutic outcomes are presented in detail.
Introduction
The resurgence of tuberculosis during the 1980s was followed in the 1990s in New York City by a dramatic increase in the rate of pulmonary tuberculosis (TB) infections accompanied with emerging levels of resistance to the first line anti-tuberculosis (anti-TB) drugs isoniazid and rifampicin-termed multi-drug resistant tuberculosis (MDR-TB) [1]. Resistance to these drugs plus resistance to the second line drugs, amikacin, kanamycin, capreomycin and fluoroquinolones-termed extensively drug resistant tuberculosis (XDR-TB)-was soon noted in various parts of the globe [2]. More recently, resistance to all anti-tuberculosis drugs-termed totally drug resistant tuberculosis (TDR-TB) was first noted in Italy in 2006 and later reported in Iran, India and South Africa [3]. Although the global acquisition of tuberculosis infections has decreased by 1.5% per year [4,5], the progression of increased resistance of Mycobacterium tuberculosis (Mtb), as a consequence of prolonged and problematic therapeutic regimens, threatens the progress that has been made since 1990 in the control and prevention of TB [5]. Although the efficacy of the repurposed and newly recommended antibiotic for resistant forms of Mtb-Linezolid-has been recently evaluated from data provided by 23 studies in 14 different countries, involving more than 500 patients, suggests an overall success rate of 77% [6], the drug is notorious for producing a plethora of serious side effects such as neuropathy and hematological disorders [7]. Other newly approved drugs for MDR and XDR-TB therapeutics-bedaquiline and delamalid-are following the same path with their recognized efficacy against resistant forms of TB approved drugs for MDR and XDR-TB therapeutics-bedaquiline and delamalid-are following the same path with their recognized efficacy against resistant forms of TB being threatened by their market cost and cumulative reports of side effects and lack of safety [8]. Despite the occurrence of these side effects, in the absence of better forms of effective therapeutic regimens, the World Health Organization (WHO) continues to recommend the use of these extremely costly drugs for therapy of MDR and XDR-TB [4,6,8]. But is there an inexpensive drug, in comparison to the cost of these drugs, that has been extensively studied and has been safely used for the therapy of psychosis for over 50 years producing no serious side effects if the patient are monitored properly? The answer resides in thioridazine! It is the purpose of this review to present all of the evidence, much of it confirmed by many groups around the world, which strongly supports that use of thioridazine (TZ) in combination with antibiotics to which the Mtb isolate was previously resistant for therapy of MDR, XDR and most probably, TDR.
Mycobacterium tuberculosis and Phenothiazines: Chlorpromazine, Thioridazine, in Vitro Activities
Phenothiazines are heterocyclic compounds. The first such compound was synthesized by Bernthsen in 1883 via the reaction of diphenylamine with sulfur. Methylene blue (MB) is a phenothiazine that was synthesized from a phenothiazine derivative by Heinrich August Bernthsen in 1883. Soon thereafter, the chemist Paul Erhlich used the MB dye for staining live cells and found that it could reduce movement of microorganisms [9]. This observation spurred experiments with humans that showed that the dye could render the subject sedated and was effective in the treatment of schizophrenia [10]. These discoveries led to the synthesis of chlorpromazine (CPZ) by Paul Charpentier in 1950, introduced by Rhone Poulenc as the first true neuroleptic in 1957 [10] (Figure 1). Because of its worldwide use, anecdotal observations suggested that it had antimycobacterial properties [11]. By 1977, the in vitro antimycobacterial properties of CPZ were clearly shown [12] and confirmed 10 years later [13]. As a consequence of the emergence of a pulmonary tuberculosis epidemic in New York City during the late 1980s, and later the large percentage of MDR-TB coupled with the absence of new and effective anti-TB drugs at that time, the search for new anti-TB drugs began. The observations that CPZ had potential anti-TB activity spurred the study demonstrating that the in vitro concentrations of CPZ needed to inhibit the replication of Mtb could be exceeded and safely achieved in the CPZ treated patient, and clinically relevant concentrations ex vivo could effectively promote the killing of phagocytosed Mtb [14]. Regrettably, CPZ is also a drug that causes very serious side effects [15]. Thioridazine (TZ) is an equally effective neuroleptic phenothiazine. It Because of its worldwide use, anecdotal observations suggested that it had antimycobacterial properties [11]. By 1977, the in vitro antimycobacterial properties of CPZ were clearly shown [12] and confirmed 10 years later [13]. As a consequence of the emergence of a pulmonary tuberculosis epidemic in New York City during the late 1980s, and later the large percentage of MDR-TB coupled with the absence of new and effective anti-TB drugs at that time, the search for new anti-TB drugs began. The observations that CPZ had potential anti-TB activity spurred the study demonstrating that the in vitro concentrations of CPZ needed to inhibit the replication of Mtb could be exceeded and safely achieved in the CPZ treated patient, and clinically relevant concentrations ex vivo could effectively promote the killing of phagocytosed Mtb [14]. Regrettably, CPZ is also a drug that causes very serious side effects [15]. Thioridazine (TZ) is an equally effective neuroleptic phenothiazine. It produces significantly fewer side effects when used with moderation and maintaining an evaluation of the patient for underlying cardiopathy. TZ is therapeutically safe, as proven by the 60 plus years it has been in use, and is still widely used today in many countries to control psychosis. Consequently, TZ was examined for in vitro activity against antibiotic susceptible and antibiotic resistant isolates of Mtb and compared to the activity of CPZ against the same strains [16]. The MICs in vitro for CPZ and TZ were calculated as ranging between 4 and 32 µg/mL, depending on the system and the antibiotic resistance status of the tested strain, and they were equally effective [17]. For the M. 
tuberculosis H37Rv fully antibiotic susceptible reference strain this range was determined, by many authors, to be 8-15 µg/mL depending on the system (Table 1). Both CPZ and TZ had similar activity against strains susceptible to isoniazid (INH) and rifampicin (RIF) as well as to strains resistant to these antibiotics and as many as five other antibiotics. The in vitro effects of TZ [20,27,28] as well its derivatives, have since been repeatedly confirmed [23,29]. However, the minimal inhibitory concentrations that completely inhibited the replication of Mtb in vitro employed in all of the cited studies exceeded many fold that which can be safely achieved clinically (ca. 0.5 mg/L of plasma in a patient chronically treated with TZ). Because CPZ had been shown to reduce the resistance of a number of pathogenic bacterial species to antibiotics [30], presumably by interacting with the cell wall of the bacterium [31], the effect of TZ on the resistance of isolates of Mtb to antibiotics was also evaluated ( Figure 2). Briefly, although all of the phenothiazines tested were able to reduce the resistance to first line anti-TB drugs, the very mild neuroleptic TZ demonstrated great effectiveness at concentrations that were clinically achievable and similar to those employed for the initial therapy of psychosis [32]. Although the mechanism by which TZ reduced antibiotic resistance of Mtb was not readily understood, studies in other groups of bacteria demonstrated that TZ reversed the resistance of Escherichia coli to tetracycline by inhibiting the over-expressed efflux pump of the bacterium that was responsible for its multi-drug resistant phenotype [33]. Consequently, a large number of clinical isolates of Mtb that were susceptible to isoniazid (INH) were induced to extremely high level resistance to INH; this resistance could be totally reversed with a small and clinically relevant concentration of TZ [34]. Further studies showed that TZ reversed resistance of Mtb that had been induced to high level resistance to INH via the interference with the over-expressed efflux pumps genes of the organism mmpL7, p55, efpA, mmr, Rv1258c and Rv2459 [35] thereby confirming the previous observations on the role of efflux pumps in the multidrug resistance of the organism [34,36].
The effect of TZ on Mtb is not limited to the inhibition of efflux pumps. Studies by Dutta et al. show that, besides efflux pumps, the genes that code for essential proteins of the cell envelope are affected by TZ, albeit at concentrations that exceed the minimum inhibitory concentration of the drug [37]. Among the genes affected were those that encode efflux pumps that extrude antibiotics, oxido-reductases, enzymes involved in fatty acid metabolism and aerobic respiration, and genes that are co-expressed with the global SigmaB regulon, which are involved in the response to stress [38]. Other studies have confirmed these observations and have extended the understanding that TZ affects a large number of essential genes that code for proteins of the plasma membrane, many of which are involved in controlling essential energy production, active transport and permeability processes in response to antibiotic and oxidative stress stimuli [39]. In particular, several studies confirmed that TZ acts in mycobacterial respiratory chain components involved in ATP oxidative phosphorylation, namely, the type-II NADH-menaquinone oxidoreductase (NDH-2)-a key component of respiratory chain of Mtb-thus raising the hypothesis that this is the main molecular target of TZ and making it also effective against latent TB [40][41][42]. NDH-2 catalyzes the first reaction of the electron transfer chain of Mtb that leads to ATP oxidative phosphorylation. During this reaction, NDH-2 transfers two electrons from NADH to menaquinone, which is reduced to menaquinol form. Yano et al. have shown that the respiratory functions leading to de novo ATP synthesis and NADH regeneration might be the Achilles' heel of hypoxic nonreplicating mycobacteria, making TZ an attractive drug with activity both against replicative and dormant mycobacteria [40][41][42]. This hypothesis has been confirmed by Sohaskey et al. who demonstrated that concentrations of TZ exceeding the MIC for actively replicating Mtb also inhibit/kill dormant Mtb, becoming a promising drug to control latent tuberculosis and shorten anti-TB drug regimens if used directly on the human macrophage [43,44]. However, the question of whether TZ can be clinically useful for inhibiting the replication of Mtb and simultaneously killing dormant Mtb remains doubtful unless science demonstrates that these effective in vitro concentrations can be achieved at the site of the pulmonary system where the infective organism normally resides, namely, the pulmonary macrophage. Because TZ is concentrated by cells such as macrophages that are rich in their lysosome content [45,46] to levels that theoretically are assumed to greatly exceed the concentration present in the medium (in fact never measured inside the macrophage, only measured in TZ-loaded culture lysates) [14,47], the noted effects of TZ on essential genes may take place in vivo. The effect of TZ on Mtb is not limited to the inhibition of efflux pumps. Studies by Dutta et al. show that, besides efflux pumps, the genes that code for essential proteins of the cell envelope are affected by TZ, albeit at concentrations that exceed the minimum inhibitory concentration of the drug [37]. Among the genes affected were those that encode efflux pumps that extrude antibiotics, oxido-reductases, enzymes involved in fatty acid metabolism and aerobic respiration, and genes that are co-expressed with the global SigmaB regulon, which are involved in the response to stress [38]. 
Other studies have confirmed these observations and have extended the understanding that TZ affects a large number of essential genes that code for proteins of the plasma membrane, many of which are involved in controlling essential energy production, active transport and permeability processes in response to antibiotic and oxidative stress stimuli [39]. In particular, several studies confirmed that TZ acts in mycobacterial respiratory chain components involved in ATP oxidative phosphorylation, namely, the type-II NADH-menaquinone oxidoreductase (NDH-2)-a key component of respiratory chain of Mtb-thus raising the hypothesis that this is the main molecular target of TZ and making it also effective against latent TB [40][41][42]. NDH-2 catalyzes the first reaction of the electron transfer chain of Mtb that leads to ATP oxidative phosphorylation. During this reaction, NDH-2 transfers two electrons from NADH to menaquinone, which is reduced to menaquinol form. Yano et al. have shown that the respiratory functions leading to de novo ATP
Thioridazine and Its Effect on Intracellular Mycobacterium tuberculosis
To test the hypothesis above, and based upon the evidence that TZ was equal to CPZ with respect to its antimycobacterial properties in vitro (and the fact that CPZ was also very effective ex vivo), the rationale and the experiments performed by Crowle et al. [14] were repeated by Ordway et al. with CPZ and TZ against clinical strains of MDR Mtb [24], and later by Machado el al. against XDR Mtb, where TZ showed an excellent synergistic effect with first line drugs [26] (See Figure 1 as an example). In these works TZ was shown to enhance the killing of intracellular antibiotic susceptible and MDR/XDR Mtb by monocyte-derived human macrophages that have little killing action of their own at concentrations in the medium which are equivalent or lower than those present in the plasma of a thioridazine-treated psychotic patient (0.5 mg/L of plasma). These TZ ex vivo studies were extended to a large number of TZ derivatives, some of which revealed to be more effective than TZ and all of which expressed no toxicity at their effective ex vivo concentrations [29]. The same rationale and technical approach was further expanded from TZ and CPZ to other ion channel blockers such as verapamil, flupenthixol and haloperidol, with very successful results in enhancing the killing activity of the infected macrophage, regardless of the drug resistance profile of the infectious Mtb, with moderate and acceptable toxicities and excellent synergistic effects with first and second line anti-TB drugs [26].
The mechanism by which TZ promotes the killing of intracellular Mtb was at first opined to be the result of TZ being concentrated within the phagolysosome as predicted by Daniel and Wojcikowski [45,46] to a level compatible to its minimum bactericidal concentration of 60 mg/L. Although the concentrated effect in phagocytosed mycobacteria may take place, up to now this has never truly been demonstrated to occur in vivo. Controversial and disputed studies of Segal and associates [48,49] have also suggested another hypothesis which, if correct, could alter the form of therapy used against the MDR and XDR Mtb [24,50].
This hypothesis involves a series of stages that begin with: binding of the infecting Mtb organism to the receptor of the plasma membrane of the macrophage [51][52][53], followed with the invagination of the receptor-mycobacterium forming the phagosome which travels through the cytoplasm of the macrophage [54,55] and eventually fuses with a lysosome [56] to form the phagolysosome unit. The activation of dormant hydrolases (zymogen granules) requires a low pH [56] which is created by vesicular ATPases of the phagolysosome unit which are dependent upon the retention of ions [57]. Because the macrophage plasma membrane have their Ca 2+ channels (L-type) inhibited in presence of TZ, this inhibition will result in a significant increase of Ca 2+ from intracellular stores within macrophages. Accumulation of these ions in the cytoplasm of the macrophage causes an indirect acidification of the phagolysosome (schematic overview in Figure 3). Consequently, we considered the possibility that since TZ inhibits efflux pumps of bacteria [34][35][36] and also acts against efflux pumps of human cells [58][59][60], and phenothiazines in general inhibit Ca 2+ /ion channel transport [25,61], TZ may also inhibit the efflux of ions from the phagolysosomal unit leading to the indirect acidification of the compartment and the activation of hydrolytic enzymes. This possibility is supported by recent studies of Machado et al. [26,35] demonstrating that TZ promotes the acidification of the phagolysosomal unit by indirect inhibition of macrophage ion channels. The inhibition of these channels therefore activates the hydrolytic enzymes via the coupling of the vesicular ATPases and consequent killing of the entrapped Mtb organism. This hypothesis is further supported by separate research using another ion channel blocker, verapamil [62,63]. These studies not only provide support for the use of TZ for therapy of Mtb drug resistant infections by a non-antibiotic compound [64][65][66][67], but also introduce an alternative therapeutic strategy that targets the killing machinery of the pulmonary macrophage infected with Mtb [62,63,[67][68][69][70]. Drugs that target mycobacteria will eventually cause the organism to become resistant via the development of mutations at the gene coding level of the antibiotic target, and the alternative form of therapy with TZ evades this mutagenic response and assists the still effective antibiotics against drug resistant forms of Mtb.
In Figure 3 a model of the putative mechanism of action of thioridazine inside the macrophage is depicted, combining all the contributions made so far to elucidate its remarkable enhancing activity of the macrophage killing activity [26,[48][49][50].
Mono and Combinational Therapy with TZ: The Mouse and the Human
The question of whether the in vitro and ex vivo effects of TZ are reflected successfully in the murine model needed answering, and to this end, mono-TZ therapy of the Mtb infected mouse [72] as well as combination therapy with first line antibiotics in this model have both proven to be effective [73][74][75]. Nevertheless, because the results in the mouse model not always are reproduced in humans, the effectiveness of TZ-combination therapy needed to be investigated. To this end, TZ in combination with antibiotics to which the infective organism was initially resistant produced complete cures in 17 out of 18 XDR-TB patients in Argentina [76]. Mono-therapy of five terminal XDR-TB patients with TZ significantly improved their quality of life (elimination of night sweats, improved appetite, weight gain, reduction of disease-associated stress) and did contribute to a longer life span [71], but because TZ does not restore lost pulmonary tissue, the patients succumbed to the disease. Studies by Abbate et al. [70] and Udwadia et al. [77] showed that the use of TZ was safe with no significant effects on QT intervals or any other cardiac property as per the rigorous monitoring carried out in these trials. [26,50,62,69,71]. Infected macrophage. (A) The bacterium is recognized by receptors present on the plasma membrane of the macrophage and is internalized by invagination of the plasma membrane into a phagosome; (B) Once the phagosome is formed, the bacteria will manipulate the immune response, leading to the reduction of the availability of calcium within the phagosome, preventing the process of acidification needed for the activation of the hydrolases and the bacteria are thus not killed; (C) Treatment of infected-macrophages with Ca 2+ /ion channel blockers such as thioridazine (TZ) will increase the concentration of calcium into the cytoplasm and the transcription and activity of vacuolar proton (H+)-ATPases. This rise of protons causes the decrease of the pH in the phagolysosome, activating hydrolases that consequently kill the mycobacteria.
Mono and Combinational Therapy with TZ: The Mouse and the Human
The question of whether the in vitro and ex vivo effects of TZ are reflected successfully in the murine model needed answering, and to this end, mono-TZ therapy of the Mtb infected mouse [72] as well as combination therapy with first line antibiotics in this model have both proven to be effective [73][74][75]. Nevertheless, because the results in the mouse model not always are reproduced in humans, the effectiveness of TZ-combination therapy needed to be investigated. To this end, TZ in combination with antibiotics to which the infective organism was initially resistant produced complete cures in 17 out of 18 XDR-TB patients in Argentina [76]. Mono-therapy of five terminal XDR-TB patients with TZ significantly improved their quality of life (elimination of night sweats, improved appetite, weight gain, reduction of disease-associated stress) and did contribute to a longer life span [71], but because TZ does not restore lost pulmonary tissue, the patients succumbed to the disease. Studies by Abbate et al. [70] and Udwadia et al. [77] showed that the use of TZ was safe with no significant effects on QT intervals or any other cardiac property as per the rigorous monitoring carried out in these trials.
Important Considerations for Therapy of MDR/XDR Mtb Patients with TZ in Combination with Antibiotics to Which the Infecting Organism Is Resistant
The initial response of bacteria to an antibiotic or noxious agent is to over-express its efflux pumps [26,[33][34][35][78][79][80][81][82][83][84][85][86][87][88]. When the concentration of the agent is progressively increased, the genes that control and code for the efflux pumps of the responding organisms are progressively increased [33][34][35]85,[87][88][89][90]. However, when the initial concentration of the antibiotic is maintained below its MIC during repeated passages, the bacterium responds with progressive increases in its efflux activity of the pre-existing pumps [26,35,90]. Eventually, the appearance of resistance to a large variety of unrelated antibiotics begins to occur with a progressive concomitant increase of transport activity ultimately leading to the MDR phenotype with basal (normal) levels of efflux activity [35,90]. These observations tend to explain why antibiotic resistance of an infecting bacterium continues to increase although the dosing of the patient remains unchanged [27,79,84]. Moreover, they also suggest that in order to define the antibiotic status of a clinical isolate from a patient who may be a suitable candidate for adjunct therapy with TZ, the antibiotic profile as well as the activity of the efflux pump system of the infecting organism should be determined. Whereas the determination of an antibiotic resistance panel is routine for a laboratory that performs diagnostic studies for a suspected pulmonary tuberculosis infection, there are at this time few laboratories that perform the needed assays that define the efflux pump status of the infecting Mtb isolate. Fortunately, there are methodologies that have been developed which are not difficult to perform by a routine tuberculosis laboratory that do not require expensive instrumentation. When this cost is compared to the huge cost associated with therapy of an MDR Mtb infection due to mutations or to an over-expressed efflux pump system, where therapy is expected to be highly problematic [91], the cost is indeed minor. Based upon the above, any patient who is considered to be a candidate for adjunct TZ therapy with antibiotics must first have a clinical isolate evaluated for susceptibility to first line antibiotics. The status of the efflux pump system and the ability of TZ to reverse its in vitro resistance to specific antibiotics of the panel must also be investigated before treatment [91]. In addition to these assays, it would be of great interest to determine the effect of TZ on the survival of the infecting isolate by the patient's own macrophage. Given positive answers from the above assays (antibiotic susceptibility panel; defined efflux pump system; ability of TZ to reverse resistance to the antibiotic(s) for which the isolate was initially resistant; and, effective enhanced killing activity of the macrophage-trapped isolate), the patient may well be a good candidate for therapy with TZ as an adjunct to antibiotics whose initial resistance was due to an over-expressed efflux pump system [91]. During the time the clinical isolate is accordingly being investigated as suggested, the patient must be evaluated for any cardiopathy. It must be noted that the use of TZ is safe and the suggested protocol for dosing the patient is one that begins with a low level of 25 mg/day that is increased weekly to 50, 100 and 200 mg/day. 
This protocol has been shown not to reduce the QT interval (increased time between contractions of left and right ventricles) [92][93][94], a side effect repeatedly noted in MDR-TB patients treated with fluoroquinolones, bedaquiline or delamanid and a limitation factor for the use of new regimens including synthetic drugs [95]. However, approximately 6% of the Eastern European population has a mutation in the p450 cytochrome which reduces the metabolism of TZ, and consequently, the build-up of plasma TZ levels will result [93]. This build-up may be rapid and reach levels which are known to reduce the QT interval [94]. Consequently, the patient should be monitored for cardiac function prior to therapy with TZ in order to rule out any cardiopathy that may worsen with TZ dose, and monitoring should continue for the first week of therapy with TZ and periodically thereafter. It is important to note that TZ is safe to use for up to 1000 mg/day when introduced to the patient gradually [76,77,93,94], coupled to knowing the patient's clinical history and performing cardiac monitoring as recommended. At this time, the time required for therapy leading to a negative TB culture and radiological evaluation consistent with cure is not known although as per Abbate et al. complete cures were achieved with XDR-TB patients within a few months of TZ adjunct therapy [76,77]. It may well be that full recovery of XDR TB patients takes place within a period of time commensurate with that routinely producing complete cures of the patient infected with antibiotic susceptible Mtb with daily doses that are far below those used for the therapy of a psychotic patient.
Costs Associated with the Care of an MDR-TB Patient
The average cost for hospitalization during the period from 2005 through 2007 for an MDR-TB patient in the USA was $81,000 per year and for the XDR-TB patient $285,000 (3.5 times than that for the MDR-TB) [96]. Regardless of this huge expenditure per patient, the mortality rate for MDR-TB in the USA is still significantly higher than that for antibiotic susceptible TB infections, and for XDR-TB significantly higher than for MDR infections, especially if the patient is co-infected with HIV or presents with AIDS [97]. Nevertheless, due to the development of a variety of clinical diagnostic programs, therapeutic monitoring such DOTS, and the wide introduction of rapid laboratory methods for the identification, isolation and susceptibility test to first line TB drugs, the frequency of TB infections susceptible and resistant to first line TB drugs has fallen dramatically [4,97]. In countries that are poor, the situation is totally reversed and the incidence of all forms of TB infections continues to rise rapidly [5,98]. Although it is not possible at this time to advance the status of TB control, therapy, etc. in these global regions, given the severity of increasing antibiotic resistance, it is reasonable that therapy of a clinical presentation of tuberculosis can be pursued without the luxury of what is present in wealthier countries. However, therapy with first line drugs is costly and if not properly administered leads to MDR, and progressively more resistant forms of TB. WHO has recommended linezolid to be included in the empirical protocols for the therapy of MDR/XDR-TB infections [4,5]. However, this drug has a very narrow therapeutic window and because the optimal dosing strategy that minimizes the substantial toxicity associated with prolonged use has not been determined [99,100], blind use of this drug is extremely expensive and problematic. Consequently, if far less expensive drugs such as those that make up the line of defense are available, and because TZ is safe when used as prescribed, and because the effective daily dose of TZ used by Abbate et al. was, via increments, limited to 200 mg/day [76], TZ in combination with first-line drugs may prove to be significantly effective for therapy of any form of tuberculosis. Given that TZ when concentrated by the phagolysosome will be effective against the efflux pumps that are responsible for MDR phenotype of the bacterium, and given that TZ may also reach a level in the phagolysosome compatible with that which is bactericidal in vitro, and coupled to the enhancement of killing by the macrophage housing the infective organism, the potential that TZ has for therapy of any TB pulmonary infection is significant and should continue to be further supported.
Conclusions
The body of results and evidences gathered so far, coming from many different contributions from different teams around the world, enable us to propose the following mechanism of action for thioridazine and other ion channel blockers in the bacteria: after entering the cell, the compounds will generate a cascade of events which starts with the inhibition of the respiratory chain complexes, though we cannot say at the present moment if the respiratory chain is a direct target. The inhibition of the bacterial respiratory chain will lead to dissipation of the membrane potential, reduction of ATP levels, efflux inhibition, oxidative stress, and increase in intracellular ion levels. On the host cell, treatment with these compounds results in phagosome acidification that synergizes with several components of the host immune response, such as lysosomal hydrolases, leading to bacterial growth restriction. Both effects cooperate and result in an enhanced killing activity that can be highly efficient when combined with antituberculosis drugs.
Promising examples of the future use of thioridazine in new short term therapeutic regimens against any form of antibiotic resistance of Mtb come from the recent studies that demonstrated the possibility of effectively using nanoparticles containing thioridazine and rifampicin for rapid tuberculosis treatment in vitro and in a zebrafish model [101,102]. The use of TZ as a therapeutic adjuvant for anti-TB therapy is currently being expanded in Argentina and India. Even the World Health Organization, who has not shown great interest in the repurposing of this narcoleptic drug for TB has recently considered thioridazine as a World Health Organization group 5 drug for multidrug-resistant tuberculosis treatment due to its efficacy and safety [103]. | 2018-04-03T00:41:41.543Z | 2017-01-14T00:00:00.000 | {
"year": 2017,
"sha1": "2e5208b55826bce56341bb093ac3bd87101fd7b7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/6/1/3/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9c70160725a522dc9fe0609a326bab8bd3a17630",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1421662 | pes2o/s2orc | v3-fos-license | Multilevel Noncontiguous Spinal Fractures: Surgical Approach towards Clinical Characteristics
Study Design The study retrospectively investigated 15 cases with multilevel noncontiguous spinal fractures (MNSF). Purpose To clarify the evaluation of true diagnosis and to plane the surgical treatment. Overview of Literature MNSF are defined as fractures of the vertebral column at more than one level. High-energy injuries caused MNSF, with an incidence ranging from 1.6% to 16.7%. MNSF may be misdiagnosed due to lack of detailed neurological and radiological examinations. Methods Patients with metabolic, rheumatologic diseases and neoplasms were excluded. Despite the presence of a spinal fracture associated clearly with the clinical picture, all patients were scanned within spinal column by direct X-rays, computed tomography and magnetic resonance imaging. When there were ≥5 intact vertebrae between two fractured vertebral segments, each fracture region was managed with a separated stabilization. In cases with ≤4 intact segments between two fractured levels, both fractures were fixed with the same rod and screw system. Results There were 32 vertebra fractures in 15 patients. Eleven (73.3%) patients were male and age ranged from 20 to 64 years (35.9±13.7 years). Eleven cases were the American Spinal Injury Association (ASIA) E, 3 were ASIA A, and one was ASIA D. Ten of the 15 (66.7%) patients returned to previous social status without additional deficit or morbidity. The remaining 5 (33.3%) patients had mild or moderate improvement after surgery. Conclusions The spinal column should always be scanned to rule out a secondary or tertiary vertebra fracture in vertebral fractures associated with high-energy trauma. In MNSF, each fracture should be separately evaluated for decision of surgery and planned approach needs particular care. In MNSF with ≤4 intact vertebra in between, stabilization of one segment should prompt the involvement of the secondary fracture into the system.
Rapid diagnosis of MNSF is essential since a misdiagnosis or delayed diagnosis may complicate the clinical picture. A fracture at a secondary or tertiary region, as a part of MNSF, might cause and/or aggravate neurological deficit, spinal instability, deformity and need for additional surgical intervention [3,6,9,10]. In this study, patients with at least 1 intact vertebra between the fractured vertebrae were investigated with respect to demographic properties, clinical and radiological findings, trauma mechanism, treatment approach, and clinical outcomes.
Materials and Methods
This study retrospectively examined 15 patients who presented to neurosurgery clinics of two different centers and underwent surgical treatment for MNSF between 2012 and 2014. Patients having metabolic, rheumatologic diseases, and neoplasms such as osteoporosis, ankylosing spondylitis, and multiple myeloma were excluded. Patients treated with conservative measures for a single fracture level or levels were also excluded. Fractures involving the occipitocervical junction and sacrum were also excluded since they have a have a unique anatomy, biomechanics, and classification.
All cases were considered as systemic multi-trauma and further investigations were done for associated cranial, thoracic, abdominal, and extremity lesions. Although a fracture was found to be clearly associated with the neurological signs of all patients, direct X-ray examinations involving cervical, thoracic, lumbar regions as well as detailed extremity studies were obtained in all patients. Thorax and abdominal computed tomography (CT) imaging was performed as a routine procedure in high energy multitrauma patients. Additional fractures might be visualized during these investigations if bone scans were thoroughly evaluated Apart from these investigations at presentation, CT and magnetic resonance imaging (MRI) studies were also performed for confirmatory diagnosis, treatment indication or surgical planning.
Cases with unstable vital signs including blood pressure, blood oxygen level, and serum hemoglobin level were precisely investigated for an associated systemic problem nevertheless primary measures were immediately performed to correct vital signs. Cases with stable vital signs were operated on an elective emergency basis. Fractures of all patients were stabilized at a single stage. Symptomatic vertebral fractures with overt neurological signs were primarily operated on, whether the localization of fracture was proximal or distal. In the remaining cases without an associated neurological deficit, proximal segment was stabilized first. When there were ≥5 intact vertebrae between two fractured vertebral segments, each fracture region was managed with a separate operative incision and approach for stabilization (Fig. 1). In cases with ≤4 intact segments between two fractured levels, both fractures were fixed with the same rod and screw system (Fig. 2).
Results
Eleven (73.3%) patients were male and 4 (26.7%) were female. Age ranged between 20 and 64 years (35.9±13.7 years). The most common mechanism of accident resulting in spinal fractures was falling down from height in 7 (46.6%) patients, followed by traffic accidents in 6 (40%) and motorcycle accidents in 2 (13.3%). Associated injuries were diagnosed in 6 patients and documented as 1 bilateral hemothorax, 1 left-sided pneumothorax, 2 right-sided pneumothorax, 1 calcaneus fracture and 1 tibia fracture. Vertebral fractures involving three segments were noted in 2 cases whereas two segments were involved in 13 cases with a minimum one or more single intact vertebral segment in between. There were 32 vertebral fractures. The most common segment among all patients suffering MNSF was thoracic+thoracic region (40%), followed by thoracic+lumbar region (26.7%), lumbar+lumbar region (20%), and cervical+lumbar region (13.3%). According to the American Spinal Injury Association (ASIA) classification, 11 cases had ASIA E, 3 had ASIA A, and 1 had ASIA D at primary neurological evaluation. The exact levels of fractures and their distribution according to the AO spinal fracture classification and neurological status are summarized in Table 1.
Posterior stabilization was performed in all cases, and an additional decompressive measure was performed when necessary. None of the patients developed an additional postoperative neurological deficit. The level of the stabilization was determined using the AO spinal fracture classification (McCormack et al. [11]). The main aim of using two separate systems in patients with ≥5 intact segments between fractured levels was to distribute the available load evenly and thereby prevent overloading of the system and kyphotic deformity.
ASIA class did not improve in any patient during the early postoperative period. Ten of 15 (66.7%) patients returned to their previous social status without additional deficit or morbidity; however, 5 (33.3%) patients had mild or moderate improvement after surgery.
Discussion
MNSF is defined as multilevel fractures involving non-neighboring vertebrae. In the present study, we reviewed the clinical and radiological features of MNSF and emphasized that a secondary fracture might be overlooked when there is a symptomatic primary fracture that explains the presenting clinical findings. Furthermore, the surgical approaches to these fractures should be combined when there are ≤4 intact vertebral segments in between.
The primary lesion is the major lesion responsible for the clinical signs and symptoms. A major vertebral fracture is easily recognized clinically or radiologically, but an associated secondary or tertiary fracture might pose a diagnostic challenge in some situations. This diagnostic dilemma is more pronounced when the symptomatic lesion is proximal to the secondary or tertiary fracture. In particular, when a patient presents with symptoms of paraplegia and an associated cervical fracture, a secondary thoracic or lumbar fracture might easily be overlooked. Diagnosis of a secondary fracture has been delayed by 2.8 to 52.6 days in the literature [1,7]. Lian et al. [2] reported a delay of 5.1 days in the diagnosis of a secondary fracture in 8 of 30 patients. Delays in the diagnosis of such lesions have been explained by focusing on a particular lesion indicated by neurological signs [1,7] and by the inability to perform advanced radiological imaging during the management of additional traumatic pathologies, including hemothorax and cerebral contusion [6].
Neurological examination is the key guide for lesion localization in traumatic lesions of the spinal vertebrae. The great majority of the cases in our series had either complete neurological injury or normal findings on examination. Thus, the general clinical picture might not be very helpful for the detection of a secondary fracture in the present series. In this study, 15 patients were evaluated for a secondary fracture, and the majority did not have any neurological deficit; the clinical picture did not help in the diagnosis of a secondary lesion. On the other hand, two cases in the present study had a fixed neurological deficit (ASIA A), and an associated fracture distal to the primary lesion was initially overshadowed. Hence, radiological examinations have a more important role in the diagnosis of a secondary or tertiary fracture. The incidence of MNSF is as high as 20%, and Calenoff et al. [7] reported that a secondary lesion occurs above the primary lesion in 40% of cases and below it in 60% of cases. For this reason, all vertebrae should be examined carefully as a routine when there are signs of a single-segment vertebral fracture in high-energy trauma.
We suggest that not only the neighboring vertebrae but the whole vertebral column should be evaluated for the presence of an associated fracture. All patients in our series were diagnosed with MNSF with the aid of radiological investigation, even when there were no neurological signs. Similarly, both the proximal and distal segments of the lesion should be examined in cases presenting with complete spinal injuries. A delay in the diagnosis of a secondary or even a tertiary lesion is important because of the risk of an additional neurological deficit, spinal instability, and deformity, and because it affects the planning of surgical intervention for the primary fracture level [5,7]. Thus, precise evaluation of the whole vertebral column with CT or MRI is essential to avoid an incomplete diagnosis in patients with a spinal vertebral fracture.
Key procedures in the surgical management of vertebral fractures include decompression of the neural elements and restoration of vertebral alignment. Early surgical stabilization shortens the hospital stay, enables early rehabilitation, and reduces the rates of complications due to prolonged bed rest, such as pneumonia, decubitus ulcers, and muscle atrophy [8][9][10]. The treatment strategy for MNSF and the related surgical complications do not differ from those of single-level fractures. Lian et al. [2] compared three treatment modalities (conservative treatment, surgical therapy for a single lesion, and surgical therapy for both lesions) in a vertebral MNSF series of 30 patients and found the best clinical and radiological outcome in the surgically managed group [3]. Jorgensen and Joseph [8] reported that excessive kyphosis and resultant chronic pain developed in a case with Th11 and L2 compression fractures that could not be managed surgically due to severe infectious findings. In the present series, the early surgical approach provided early mobilization in every patient, which prevented potential systemic complications due to immobility.
Posterior stabilization systems have become more popular in the last decade as a gold-standard technique. Anterior decompression and stabilization is usually inadequate to provide a biomechanically strong system, and the anterior surgical approach is associated with more complications than posterior stabilization procedures [12]. Posterior stabilization systems can restore the vertebral body height through distraction forces. Furthermore, the anterior and middle columns maintain their normal length during correction of kyphosis. The distractive forces provided by the posterior stabilization system develop tension in the posterior longitudinal ligament, which pushes the retropulsed bone fragments forward. This process has been termed ligamentotaxis, and it is beneficial particularly when performed in the early period [13].
Spinal fractures associated with MNSF should be evaluated separately in terms of the treatment approach. Posterior approaches are generally preferred for the stabilization of MNSF; this is a conventional method for the management of vertebral fractures with low complication rates. McCormack et al. [11] suggested a load-sharing system for vertebral fractures, taking into consideration the level of injury in the horizontal and vertical planes and the angle of kyphosis at the level of the fracture. The stabilization system failed and the screws were reported to be broken in burst fractures subjected to short-segment fixation (one level above and below the fractured segment). Despite the increased interest in short-segment stabilization in recent years, the complications of the anterior approach should not be overlooked. A meta-analysis demonstrated an association of the anterior approach with longer operation time and greater blood loss. On the other hand, anterior support can also be provided by placing cages and/or grafts via extended posterolateral approaches [14]. In the present study, the anterior approach was not performed for vertebral fractures, and posterior decompression with long-segment stabilization was preferred to obtain adequate alignment.
The number of intact vertebrae between two fractured segments is also critical in deciding the treatment approach for MNSF, in addition to neurological deficit, spinal deformity, and instability. The length of the system used for stabilization of the primary fracture causing the neurological picture should not jeopardize the secondary and/or tertiary fractures. Junctional kyphosis, especially at the proximal part of the stabilized segment, is a well-known complication in vertebral fractures. Proximal junctional kyphosis has been reported at a rate as high as 26% and is more common in the thoracic region after corrective surgery for long-segment vertebral deformities [15]. A caudal junctional kyphosis might also rarely develop distal to the stabilization system [16]. A biomechanical overload on the secondary fracture should not be overlooked, since a stabilization system involving the primary fracture might jeopardize the secondary fracture when there are fewer than 2 intact vertebrae in between. Accordingly, the length of the stabilization system needs to be precisely planned to distribute the biomechanical loading and to preserve the mechanics of the already fractured secondary lesion.
Conclusions
The spinal column should be scanned to rule out a secondary or tertiary vertebral fracture in vertebral fractures due to high-energy trauma and severe injury. In MNSF, each fracture should be evaluated separately in the decision for surgery, and the planned approach needs particular care. In MNSF with ≤4 intact vertebrae in between, stabilization of one segment should prompt the inclusion of the secondary fracture in the same system.
Non-Fermi-liquid to Fermi-liquid transports in iron-pnictide Ba(Fe1−xCox)2As2 and the electronic correlation strength in superconductors newly probed by the normal-state Hall angle
Electrical transports in iron-pnictide Ba(Fe1−xCox)2As2 (BFCA) single crystals are heavily debated in terms of the hidden Fermi-liquid (HFL) and holographic theories. Both the HFL and holographic theories provide consistent physical pictures and propose a universal expression of resistivity to describe the crossover of transport from non-Fermi-liquid (NFL) to FL behavior in these so-called 'strange metal' systems. The deduced spin exchange energy J and model-dependent energy scale W in BFCA are almost the same, or are of the same order of several hundred Kelvin for over-doped BFCA, in agreement with the HFL theory. Moreover, a line of W/3.5 drawn for BFCA in the higher-doping region, up and to the right, demonstrates the crossover from NFL-like behavior to FL-like behavior at high doping and yields a new phase diagram of BFCA. The electronic correlation strength in superconductors is newly probed by the normal-state Hall angle, showing for the first time that the correlation strength can be characterized by the ratios of Tc to the Fermi temperature TF, of J to TF, and of the transverse mass to the longitudinal mass.
Introduction
The 'strange-metal' transports in high-temperature superconducting (HTS) cuprates, as well as in new ironbased superconductors, have been the subject of intense study. In particular, the amazing similarity between the quantum-mechanical phase diagrams of cuprates and iron-based superconductors reveals that both of their superconductivities are ascribed to the quantum critical fluctuations associated with a quantum critical point (QCP), even though HTS cuprates are doped Mott insulators, while iron-based superconductors are metallic systems [1,2]. Within the quantum-mechanical phase transition, the singular QCP at absolute zero produces a wide region of unusual behavior at a finite temperature, which displays a striking deviation from the conventional Fermi-liquid (FL) behavior, as it has the so-called strange-metal transport properties [3]. Understanding this QCP is essential, as it corresponds to the occurrence of superconductivity in the vicinity of spin-density-wave (SDW) instability or antiferromagnetic fluctuation [1,4]. Recently, a number of experiments on iron-based superconductors showed a phase transition involving the onset of a SDW order in the normal state above T c , which extrapolates to a T=0 SDW QCP (see [5] and the references therein). For example, the SDW transition was observed in both the resistivity and susceptibility of BaFe 2−x Co x As 2 single crystals in the underdoped region [6]. A more recent study on electronic specific heat in BaFe 2−x Ni x As 2 indicates that the effect of spin fluctuation should not be ignored [7]. It even has been proposed that, SDW QCP is a central organizing principle of organic, iron-pnictide, heavy-fermion, and HTS cuprates [8][9][10]. Under QCP (i.e. optimum doping), the strongest magnetic spin fluctuation suppresses the SDW order, accompanies the appearance of the highest T c , and results in non-FL-like scattering associated with Fermi-surface reconstruction. Electrical resistivity measurements reveal a remarkably T-linear behavior for samples near the optimum doping, while a T 2 -dependent feature can be observed in the higher-doping region [11].
In the phase diagram, a line is usually drawn up and to the right from the edge of the superconducting dome in the higher-doping region in order to separate the non-FL (NFL) 'strange metal' from a conventional Fermi liquid (FL) at high doping [12]. In addition to dc resistivity, measurements of optical conductivity show a T-linear scattering characteristic for samples near the optimum doping, which can be ascribed to a two-dimensional (2D) metal at the onset of the SDW order [13,14]. Particularly, recent studies of infrared spectra, interplane resistivity, and transport coefficients on iron-based superconductors reveal a possible pseudogap in the phase diagrams, which are similar to those observed in HTS cuprates [15][16][17]. These phenomena remain a major open question in the physics of strongly correlated electrons.
Recently, the hidden FL (HFL) theory [18,19] and holographic models [20] have been respectively developed to express the transport and spectroscopic properties of over-doped HTS cuprates for the entire normal state. Based on theoretical studies, it is argued that there is no clear transition line to a true FL for higher doping in the phase diagram. The self-consistency of the HFL has been shown in the transport and spectroscopic properties of the Tl 2 Ba 2 CuO y and La 2−x Sr x CuO 4 systems [18,19]; however, the applicability of the HFL and holographic theories has never been examined in iron-based superconductors. Theoretical works have been further developed and cast into the framework of strongly correlated FLs or quantum critical systems [20,21]. Recent optical studies on BaFe 2−x Ni x As 2 and Ba 0.6 K 0.4 Fe 2 As 2 single crystals further show the interesting hidden-T-dependent properties of the two Drude models and have proposed hidden NFL behavior in the underdoped samples [2,13]. In particular, the boundary from NFL T dependence to FL T 2 dependence, as observed in resistivity measurements, is not clear for iron-based pnictides [22,23], whereas the boundary can be obtained from the departure of resistivity from linearity for over-doped cuprates [24].
This article debates and discusses the resistivities and Hall angles of Ba(Fe 1−x Co x ) 2 As 2 (BFCA) single crystals in terms of the existing theories. It is found herein that the bandwidth of the spin excitation (the spin exchange energy J) deduced from the Hall angle is in agreement with the bandwidth W determined from the resistivity by considering the HFL theory. An additional phase boundary line corresponding to the crossover from NFL-like transport to FL-like transport can be obtained in the new phase diagram of BFCA. Furthermore, the spin exchange energies of some conventional and unconventional superconductors, as derived from Hall measurements, are used to explore their electronic correlation strength. The ratios of the spin exchange energy to the Fermi temperature T F (J/T F ), as well as of the transverse mass to the longitudinal mass, are presented for the first time in order to characterize the electronic correlation strength in superconductors.
Theoretical surveys
Previous theoretical attempts to explain the crossover from NFL to FL behavior in the transport properties of HTS cuprates are based on the assumption that the transport lifetime, τ tr , must include two different scattering lifetimes, which independently influence the temperature dependence of the longitudinal resistivity ρ xx . In the HFL theory, resistivity is explained in terms of a bottleneck effect, where there are two different dissipative processes for accelerated electrons, umklapp scattering and quasiparticle decay into a pseudoparticle. These two processes act in series to dissipate the momentum to the lattice [18]. In holographic models [20], by contrast, the electrical transport is described by two contributions to conductivity, a charge-conjugation symmetric term and another from the explicit charge density relaxed by some momentum dissipation. Although arising from completely different models, both theories provide a consistent picture, considering a T 2 -dependent relaxation rate and a linear-T decay process in the pseudoparticle conductivity, in order to achieve a universal expression of resistivity,

ρ xx = ρ 0 + AT 2 /(W + T), (1)

where A is a temperature-independent pre-factor, W is a model-dependent energy scale, and ρ 0 is the residual resistivity. For T≫W, one can see that ρ xx ≈AT+ρ 0 , while ρ xx ≈ΛT 2 +ρ 0 with Λ=A/W for T≪W. In the HFL theory, the pre-factor A is set by the scattering rate ħτ −1 =λT and the Drude spectral weight ω pD 2 ≈n eff /m * (n eff and m * are the effective carrier density and effective mass, respectively), where λ (≈0.3) is the coupling strength between the charge carriers and the spin excitations in the HTS cuprates. Considering m * =(ħ 2 /E F )πn eff for a 2D system, the pre-factor A of the T-linear resistivity scales as λ/E F (up to a numerical factor), indicating that W is related to the Fermi energy and the coupling strength when the holographic theory is considered. Although the model-dependent energy scale W carries different meanings in the physics of the HFL and holographic theories, in both theories it characterizes the crossover from NFL to FL and can reflect the electrical coupling strength in the normal-state transports of superconducting systems, as seen later in the discussions of the temperature-dependent resistivity and Hall angle.
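As a concrete illustration of how equation (1) interpolates between the two limits, the following minimal sketch (not taken from the original analyses; all data points and parameter values are synthetic placeholders) performs a least-squares fit of the crossover form to resistivity data:

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_model(T, rho0, A, W):
    """Universal crossover resistivity: T-linear for T >> W, T^2-like for T << W."""
    return rho0 + A * T**2 / (W + T)

# Synthetic resistivity data (temperature in K, rho in arbitrary units)
rng = np.random.default_rng(0)
T = np.linspace(30.0, 300.0, 50)
rho_data = rho_model(T, 40.0, 0.9, 250.0) + rng.normal(0.0, 0.5, T.size)

popt, pcov = curve_fit(rho_model, T, rho_data, p0=[30.0, 1.0, 100.0])
rho0_fit, A_fit, W_fit = popt
print(f"rho0 = {rho0_fit:.1f}, A = {A_fit:.3f}, W = {W_fit:.0f} K")
# Limiting behaviors: slope A for T >> W; Lambda = A/W for the T^2 regime
print(f"Lambda = A/W = {A_fit / W_fit:.2e}")
```

A fit of this kind is how the doping-dependent W values discussed below can be extracted; the ratio A/W then gives the T 2 coefficient Λ of the FL-like regime.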
Anderson and Casey [18] fitted equation (1) to the resistivity data of La 2−x Sr x CuO 4 , with x ranging from underdoping of x=0.15 to overdoping of x=0.33, and obtained the doping-dependent W, which is on the order of a few hundred Kelvin and in agreement with the values determined from Hall and angle-resolved photoemission spectroscopy analyses. A similar result for overdoped HTS Tl 2 Ba 2 CuO 6+δ , with W=800 K, was obtained by Casey and Anderson [19].
An important advance in explaining the anomalous transport properties of HTS cuprates, where the resistivity and the Hall effect have different temperature dependencies, was Anderson's conjecture that there exist two transport relaxation times in the cuprates, which independently influence the Hall effect and the resistivity in these systems [26]. As suggested by the HFL theory, the T 2 -dependent HFL relaxation rate is taken to equal the Hall scattering rate, ħ(τ HFL ) −1 =T 2 /W=ħ(τ H ) −1 . According to Anderson's theory [26], the transverse (Hall) scattering rate is determined by scattering between excitations and varies as T 2 . Scattering from magnetically active impurities introduces additional terms into the longitudinal transport scattering rate, 1/τ tr , and the Hall relaxation rate, 1/τ H . For the transverse scattering rate, Anderson's theory introduced

ħ(τ H ) −1 = T 2 /J + ħ(τ M ) −1 , (2)

where J is the spin exchange energy and 1/τ M is the impurity contribution. For a Fermi surface formed by spinons, the transport scattering rate 1/τ tr determines the resistivity, i.e., σ xx is proportional to τ tr , whereas σ xy is proportional to τ H τ tr . Thus, the Hall angle θ H =tan −1 (σ xy /σ xx ) involves 1/τ H only. Equation (2) implies that

cotθ H = αT 2 + C, (3)

where ω c =eB/m s , m s is the effective transverse mass, and C is the impurity contribution. By combining equations (2) and (3), we can see that α corresponds to 1/(ħJω c ) ∝ B −1 and C=1/(τ M ω c ), respectively. From equation (2) and 1/τ HFL =1/τ H , as suggested by the HFL theory, we should have W≈J if the impurity contribution can be neglected. Following Anderson's theory, we write θ H =ω c τ H =(B/2nΦ 0 )k F v k τ H , as described by Chien et al [27], where Φ 0 =h/2e is the flux quantum, n=k F 2 /2π is the planar carrier density, k F is the Fermi wave vector, and v k =J/ħk F . Using equation (3), we now derive a correlation between the parameter α and the spin exchange energy,

α = 2nΦ 0 /(BJ 2 ), (4)

and the effective transverse mass can be expressed as m s =2πnħ 2 /J. Writing m tr for the longitudinal transport mass, we find that the ratio of the transverse mass to the longitudinal mass, β, can be expressed as

β = m s /m tr ≈ 2T F /J. (5)

Equation (5) implies that the transverse mass should be much larger than the longitudinal mass, since T F ≫J [26,27]. Apparently, the normal-state Hall measurement provides further insight into the strange-metal transports in superconductors.
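The chain from a measured Hall angle to J and β can be sketched in a few lines; the numerical inputs below (n, α, T F ) are hypothetical placeholders rather than values from this work, and equations (4) and (5) are used in the forms given above:

```python
import numpy as np

Phi0 = 2.067833848e-15   # flux quantum h/2e, Wb
B = 6.0                  # applied field, T
n = 1.0e18               # hypothetical planar carrier density, m^-2
alpha = 1.0e-2           # hypothetical fitted slope of cot(theta_H) vs T^2, K^-2
T_F = 1000.0             # hypothetical Fermi temperature, K

# Equation (4), alpha = 2*n*Phi0/(B*J^2), inverted for J (in kelvin, k_B = 1)
J = np.sqrt(2.0 * n * Phi0 / (alpha * B))
# Equation (5): transverse-to-longitudinal mass ratio
beta = 2.0 * T_F / J
print(f"J ~ {J:.0f} K, beta = m_s/m_tr ~ {beta:.1f}")
```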
In the holographic theory, it is argued that there is only a single contribution from the momentum dissipation to the Hall angle, with θ H =(B/Q)σ diss , where Q is the charge density and σ diss is the T −2 -dependent dissipation conductivity. Let us now return to inspect the meaning of the parameter α in the HFL theory. According to equation (4), as derived within the framework of the HFL theory, the parameter α can be rewritten in a form analogous to its holographic counterpart. Comparing the two resulting expressions, equations (6) and (7), one can see that the derived α holo and α HFL exhibit similar formulas, and thus should have the same order of magnitude, since E F and J are of the same order of magnitude for strongly correlated systems (see later in the discussion). It is noted that E F is equal to T F when E F is expressed in temperature units, and thus E F >J, as previously mentioned. Both the HFL theory and the holographic model propose similar physics, which lie at the origins of the two-lifetime behavior in these so-called 'strange metal' systems. More recently, it has been demonstrated that the separation of transport lifetimes seems to be pervasive in 2D electron liquids [28]. As described in some review articles [10,29], other theoretical approaches have been proposed for the anomalous transport phenomena in HTS cuprates. However, only the theoretical schemes mentioned above propose a universal expression of resistivity to describe the crossover of transport from NFL to FL behavior.
To sum up, by measuring longitudinal resistivity ρ xx (T) and Hall angle θ H (T), the obtained energy scales W and J are proposed to describe the crossover of transport from NFL-like behavior to FL-like behavior, as well as the electronic correlation strength for these strange-metal systems [18][19][20]. Although the mechanisms of resistivity are different, both the HFL and holographic theories provide consistent pictures that there exist two transport relaxation times, which independently influence the Hall effect and resistivity in these systems. Based on the schemes within the framework of the HFL and holographic theories, longitudinal resistivity ρ xx (T) and Hall angle θ H (T) for BFCA single crystals, as well as some conventional and unconventional superconductors, are examined, as follows.
Experiment
Previous works have described the preparations and transport measurements of the investigated samples: BFCA single crystals, single-crystal NaFe 1−x Co x As (NFCA) with x=0.022, and the charge-density-wave (CDW) related superconductors Ca 3 Ir 4 Sn 13 (CaIrSn) and Sr 3 Rh 4 Sn 13 (SrRhSn) crystals [30][31][32]. HTS c-axis-oriented YBa 2 Cu 3 O y (YBCO) and NdBa 2 Cu 3 O y (NBCO) thin films were grown by radio frequency sputtering onto SrTiO 3 (001) substrates, as described in the literature [33]. FeSe 0.5 Te 0.5 (FeSeTe) single crystals were grown from self-flux in a quartz crucible by referring to the conditions proposed by Sales et al [34], exhibiting good crystallization with the c-axis orientation perpendicular to the plane of the crystal slabs. In addition, a piece of Nb metal with a purity of 99.9%, regarded as a conventional superconductor, was studied for comparison. Within the transport measurements, a Hall-measurement geometry with five leads was constructed to allow simultaneous measurements of both the longitudinal (ρ xx ) and transverse (Hall) resistivities (ρ xy ) using standard dc techniques. Figure 1(a) shows the low-temperature resistivity for the corresponding samples. As shown, the values of resistivity, transition temperatures, and temperature-dependent behaviors are similar to those reported in [35]. In addition, the undoped parent sample shows a very sharp drop in resistivity at the antiferromagnetic transition temperature of 135 K, accompanied by an additional knee-like transition at 25 K, as seen in the inset of figure 1(a). The additional transition at low temperature is similar to that observed by Rotundu et al [36] and seems to depend on the annealing periods, a phenomenon that has never been examined and thus has room for further investigation. Figure 1(b) shows that the resistivities of the over-doped BFCA with x=0.2 follow the form ρ xx =ρ 0 +ΛT 2 in fields of 0 and 6 T over the whole temperature range. Here, the applied field is parallel to the current I in order to eliminate the Lorentz contribution to the resistivities. The inset of figure 1(b) shows the field dependences of the Λ value and the residual resistivity ρ 0 . As seen, both Λ and ρ 0 are almost field-independent, and the resistivities in fields reveal tiny magnetoresistance. Indeed, the T 2 dependence of ρ xx and the near field-independence of Λ demonstrate FL-like characteristics in the high-doping BFCA.
Results and discussion
Using equation (1), this study attempts to analyze the normal-state BFCA resistivity, as shown in figure 1(a). Equation (1) is fitted to the data through the least-squares regression method in order to precisely determine the parameters A, W, and ρ 0 . Figure 1(a) shows the fitting results (solid lines) for BFCA with different doping levels. Figures 2(a) and (b) show the values of the parameters in equation (1), as obtained from the fit, as a function of Co doping x. As shown in figure 2(a), the x dependence of the energy scale W (in temperature units) reveals a rapid increase in the over-doping region, similar to that observed in La 2−x Sr x CuO 4 [18]. The values of W for BFCA, which are in the range of 87-1493 K, are approximately of the same order of magnitude as those for La 2−x Sr x CuO 4 and other HTS cuprates [19]. Figure 2(b) illustrates the x dependences of the parameters A and ρ 0 . In substance, the values of A and ρ 0 decrease with an increase in x, which is also a behavior similar to that obtained for La 2−x Sr x CuO 4 . Closer examination of equation (1) shows that the rapid increase in W for the higher-doping BFCA indeed agrees with the T 2 -like resistivity commonly observed both in HTS cuprates and iron-based superconductors, as previously mentioned. Figure 3(a) plots −cotθ H versus T 2 measured in applied fields up to 6 T. As can be seen, the data fall almost on a straight line in the studied temperature range and can be fitted to equation (3). The inset of figure 3(a) shows the parameter α against H −1 and demonstrates that α is indeed proportional to H −1 at fields larger than 2 T, consistent with the previously predicted result. The deviation from α ∝ H −1 at low fields implies the occurrence of field-dependent parameters J or m s at low fields; however, this phenomenon remains to be further debated. Figure 3(b) plots −cotθ H versus T 2 for BFCA with different doping levels measured in the field of 6 T. As can be seen, the data also fall on a straight line in the normal-state temperature region and can be fitted to equation (3). The inset of figure 3(b) shows the x dependence of the parameter α obtained in the field of 6 T. The values of α for BFCA decrease with an increase in x and are approximately of the same order of magnitude as those for HTS cuprates [27]. Furthermore, with equation (4), we can estimate the values of J for the BFCA samples, as shown in figure 2(a), where the planar carrier density is calculated by n=(3π 2 n 3D ) 2/3 /2π and the volume carrier density n 3D is obtained from the Hall measurement. From figure 2(a), we note that the error bar of the J value arises from the n 3D values taken in different temperature regions. It is found that the values of J are almost the same as the W values for BFCA with x=0 and 0.20; and while the J values are larger than the W values for BFCA with x=0.10, they are of the same order of several hundred Kelvin, in agreement with the HFL theory. This result indicates that the transports in over-doped BFCA and in the parent compound (x=0) can be described by the HFL scenario.
Next, this study attempts to extend this observation into a new phase diagram for BFCA. According to the HFL or holographic resistivity in equation (1), one can see that the resistivity behaves as ρ xx ∝ T or ρ xx ∝ T 2 according to whether T≫W or T≪W. Consider first drawing a line up and to the right from the QCP (i.e., the optimum-doped point), noting the result W≈3.5T c (W≈87.4 K and T c ≈25 K) for the optimum-doped BFCA with x=0.08, which should exhibit NFL character at temperatures above T c . We thus draw a line of W/3.5 for BFCA in the higher-doping region, up and to the right from the QCP, in order to show the crossover from NFL-like behavior to FL-like behavior at high doping, as seen in figure 4. Surprisingly, the W/3.5 line almost merges into the boundary line of the antiferromagnetic (AFM) transition for the under-doped BFCA. Figure 4 also illustrates the phase-transition diagram extracted from previous reports [23,35] for comparison, and reveals a renewed phase diagram for BFCA.
Yoshizawa et al [37] recently investigated the elastic properties of BFCA single crystals with different Co concentrations, in which the elastic constant C 66 shows large elastic softening associated with the structural phase transition. They obtained a characteristic temperature T * from the deviation of the inverse of C 66 from T-linear behavior and inferred that T * possibly corresponds to the crossover from the NFL region to the FL region. Figure 4 also illustrates the duplicated T * values for comparison and shows an approximate coincidence between the W/3.5 boundary line and the T * values. In addition, the values of the bandwidth W derived by Yoshizawa et al are of the same order of several hundred Kelvin for higher-doped BFCA as those obtained herein with the HFL and holographic transport theories. Furthermore, the factor of 3.5 indicates that the crossover temperature corresponds to a fractional value of the bandwidth.
Further discussion of the exchange energy J in superconductors via equations (4) and (5) suggests that the exchange energy can seemingly manipulate the transport behaviors in superconductors. An interesting issue is to examine the spin exchange energy in different kinds of superconductors in order to assess their electronic correlation. Figure 5(a) illustrates the basic characteristics of the resistive transition for some high-quality superconductors, including the optimum-doped BFCA, NFCA, and FeSeTe crystals, fully oxygenated YBCO and NBCO films, the CDW-related superconducting CaIrSn and SrRhSn crystals described in the Experiment section, and a conventional superconductor, Nb metal. As seen, the measured superconducting transition temperatures are almost the same as those previously reported. Figure 5(b) plots |cotθ H | versus T 2 for the corresponding samples measured in the field of 6 T. As can be seen, the data also fall on a straight line in the normal-state temperature region and can be fitted to equation (3). As mentioned above, the exchange energy offers a key to understanding the electronic correlation in superconductivity. Following the analysis previously conducted on BFCA, we can derive the exchange energy from the data in figure 5(b) by using equation (4). The inset of figure 5(b) shows J versus T c for the corresponding samples; however, there is no clear relation between J and T c . Recently, it has been pointed out that the ratio of T c to the Fermi temperature T F characterizes the correlation strength in superconductors [38]. In unconventional superconductors, such as iron-based superconducting FeTe 0.6 Se 0.4 , HTS YBCO, and heavy fermion superconductors, this ratio is about 0.1; however, it is only ∼0.02 in conventional BCS superconductors [38]. In analogy to the analysis of T c /T F , we are motivated to examine the ratio J/T F in different kinds of superconductors. The Fermi temperature T F can be estimated from normal-state thermal transport data, where S is the Seebeck coefficient, γ is the T-linear electronic specific heat coefficient, k B is Boltzmann's constant, and e is the electron charge. This study adopts the results of thermal transport for NBCO [40,41], BFCA [42,43], NFCA [44], SrRhSn [45], and Nb [46] to make the estimations of T F , while the T F values of YBCO, FeSeTe, and CaIrSn are duplicated from the literature [38,47,48]. However, as shown in figure 6(a), the error bars of T F arise from the various n 3D values taken at normal-state temperatures and from some divergences in the reported values of S and γ. The data for HgBa 2 Ca 2 Cu 3 O 8+δ (HBCCO) and La 1.85 Sr 0.15 CuO 4 (LSCO) are adopted from [48] for comparison. Figure 6(a) shows the dashed lines of T c /T F =0.047 and 0.00017 for the two groups of superconductors, respectively. The data of the first group of superconductors, including the strongly correlated HTS YBCO and NBCO and the iron-based superconducting BFCA, NFCA, and FeSeTe, follow the line of T c /T F =0.047, while the data of the second group, including the weakly correlated CaIrSn, SrRhSn, and Nb, are distributed over the region near the line of T c /T F =0.00017. This result is in accordance with previously reported results [38,47,48].
Inspired by the plot in figure 6(a), in figure 6(b) we demonstrate a plot to point out an intimate link between J and T F in a superconducting system. An interesting result is that the data of the strongly correlated superconductors (first group) follow the line of J/T F =0.30, while the data of the weakly correlated superconductors (second group) follow the line of J/T F =0.016. This finding indicates that the ratio of J to the Fermi temperature T F can also characterize the correlation strength in superconductors. It is inferred that, for a strongly correlated superconductor, this ratio is much larger than that for a conventional BCS superconductor due to their smaller T F values.
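As a compact illustration of these two classifications, the ratios can be computed as follows; the (T c , T F , J) triples below are rough, order-of-magnitude placeholders chosen near the quoted ratio lines, not the measured data of figure 6:

```python
# Hypothetical (T_c, T_F, J) triples in kelvin, for illustration only
samples = {
    "strongly correlated (cuprate-like)": (90.0, 2000.0, 600.0),
    "weakly correlated (BCS-like)":       (9.0, 55000.0, 900.0),
}
for label, (Tc, TF, J) in samples.items():
    print(f"{label}: Tc/TF = {Tc / TF:.3g}, J/TF = {J / TF:.3g}")
```

With such inputs, the first entry lands near the T c /T F =0.047 and J/T F =0.30 lines, and the second near T c /T F =0.00017 and J/T F =0.016, mirroring the grouping described in the text.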
This study further examines the ratio of transverse mass to longitudinal mass, β, as expressed in equation (5). Figure 6(c) illustrates β as a function of T c for the superconductors studied herein, which shows that the strongly correlated superconductors reveal smaller β values, while larger β values are obtained for the weakly correlated superconductors. The β values for the strongly correlated superconductors approximately follow the line of β/T c =0.09, while the data of the weakly correlated superconductors follow the line of β/T c =22.5, indicating that the ratio of β to T c also correlates closely with the electronic correlation strength in superconductors. This result implies that there are different effects of electronic correlation on the ratio of transverse mass to longitudinal mass between strongly and weakly correlated superconductors. Generally speaking, it can be understood that, even though T c is enhanced, such as in the strongly correlated superconductors, the longitudinal effective mass increases faster than the transverse effective mass, leading to a smaller β value due to the relatively small value of T F . We can see that the correlation strength in superconductors can be experimentally revealed by the normal-state Hall angle, thus, more theoretical or experimental studies on the effects of electronic correlation in superconductors are necessary.
Having observed that the correlation strength can be characterized by these derived parameters, one may further proceed to the debate between the HFL theory and the holographic theory on the basis of the experimental data. According to the derivations in the previous section, it is worth noticing that the value of the Fermi energy E F , which is a key parameter related to the electronic state, can be derived from the experimental data of the pre-factor A (corresponding to the coefficient of the T-linear resistivity for T≫W) and the parameter α, based respectively on the HFL and holographic theories. The E F values obtained from the pre-factor A based on the HFL and holographic theories are denoted by E F,HFL and E F,holo , respectively, while E F,holo,α denotes the value obtained from α. Table 1 illustrates the obtained parameters T c , A, α, E F,HFL , E F,holo , E F,holo,α , T F , and J for the BFCA samples. In addition, the parameters of the HTS YBCO and NBCO films are shown for discussion. Here, the parameters T c , A, α, T F , and J are obtained from the experimental results or calculations, as previously mentioned, where the A values for the YBCO and NBCO films are obtained from linear fits to their ρ(T) data from 120 to 300 K. Again, notice that the large error bars of T F arise from the various n 3D values taken at normal-state temperatures and from some divergences in the reported values of S and γ. In addition, we replace the A values of the bulk resistivity with the A/t values for calculation of the 2D sheet resistance, as described by the theories, where t is taken as the c-axis length of the unit cell, with t≈1.3 and 1.2 nm for BFCA and the HTS cuprates, respectively. Regarding the calculation of the values of E F,HFL , E F,holo , and E F,holo,α , the information on (v F0 /v F ), λ, and A * for BFCA and the HTS cuprates should be clarified. Considering the small anisotropy of the transport properties of BFCA and the HTS cuprates in the crystal ab plane, we take (v F0 /v F )≈1, as in [18]. The values of λ for BFCA and the HTS cuprates are taken as 0.12 and 0.3, respectively, by referring to the results in [49,25]. The A * value of ∼4 for BFCA was estimated by Rullier-Albenque et al [1], while the A * value for the HTS cuprates has not been reported yet. As previously mentioned, A * describes the electron-electron scattering processes given by ħ/τ e−e =A * T 2 /E F ; thus, A * can be estimated from the Λ coefficient of the T 2 -dependent resistivity, through ρ=m tr /(ne 2 τ e−e )=(m tr /ne 2 )A * T 2 /(ħE F )=ΛT 2 . It has been reported that the Λ value of YBCO is ∼1.5×10 −9 Ω cm K −2 and the Hall coefficient is R H =1/ne≈5×10 −4 cm 3 C −1 [50].
By considering E F =2100 K [38] and m tr ≈12m e [51], we obtain an A * value of ∼7.1 for YBCO, and then proceed to calculate the E F,holo,α values of YBCO and NBCO.
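This estimate can be checked directly; the following sketch reproduces the arithmetic implied by ρ=(m tr /ne 2 )A * T 2 /(ħE F )=ΛT 2 after restoring the Boltzmann factors (i.e., ħ/τ e−e =A * (k B T) 2 /E F ), using only the constants quoted in the text:

```python
hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J/K
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

Lam = 1.5e-9 * 1e-2          # Lambda: 1.5e-9 Ohm cm K^-2 -> Ohm m K^-2
ne = 1.0 / (5e-4 * 1e-6)     # n*e from R_H = 1/(n e) ~ 5e-4 cm^3/C -> C/m^3
E_F = 2100.0 * k_B           # Fermi energy of YBCO, J
m_tr = 12.0 * m_e            # longitudinal transport mass of YBCO

# From rho = (m_tr/(n e^2)) * A* (k_B T)^2 / (hbar E_F) = Lambda * T^2:
A_star = Lam * ne * e * hbar * E_F / (m_tr * k_B**2)
print(f"A* ~ {A_star:.1f}")   # ~7.1, matching the value quoted in the text
```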
As seen in table 1, the obtained E F,HFL values for BFCA, which are in the range of 495-1027 K, are near the T F values of 682±305-1009±197 K derived from the reported electronic specific heat coefficients [43]. However, the obtained E F,HFL values for YBCO and NBCO are much smaller than the T F values, implying that the assumption of (v F0 /v F )≈1 may need to be corrected when applying the HFL theory to HTS cuprates. On the other hand, all E F,holo values for BFCA, YBCO, and NBCO are several times larger than the T F values, while E F,holo,α shows a more consistent result when compared with the values of T F . These deviations may arise from the uncertain parameters λ and A * for iron-based superconductors and HTS cuprates, which require further confirmation through experimentation. Although some doubt remains about these derived E F values, it is clear that both the HFL and holographic theories capture the temperature-dependent resistivity and Hall angle in these strange-metal superconductors, and some uncertain parameters still require further calibration for theoretical application.
Conclusions
By considering the HFL and holographic theories, this research examined the spin exchange energy J and the model-dependent energy scale W in BFCA single crystals, as deduced from the Hall angles and resistivities, respectively. In the theoretical surveys, both the HFL and holographic theories give similar physics, meaning that there exist two transport relaxation times, which independently influence the Hall effect and resistivity in the so-called 'strange metal' systems. One can see that the values of J are almost the same as the W values, or are of the same order of several hundred Kelvin for the over-doped BFCA, in agreement with the HFL theory. Moreover, a line of W/3.5 drawn for BFCA in the higher-doping region, up and to the right from the QCP, shows the crossover from NFL-like behavior to FL-like behavior at high doping, yielding a new phase diagram for BFCA. Furthermore, this study has newly derived the spin exchange energies and Fermi temperatures for some conventional and unconventional superconductors from Hall measurements in order to explore their electronic correlation strength. The findings show that the data of T c /T F and J/T F for strongly correlated superconductors follow higher-ratio lines than those for weakly correlated superconductors. By contrast, the ratios of the transverse mass to the longitudinal mass for strongly correlated superconductors reveal smaller values. The ratios T c /T F , J/T F , and β/T c are presented, for the first time, as characterizing the correlation strength in superconductors. In addition, both the HFL and holographic theories can describe the temperature-dependent resistivity and Hall angle in these unconventional superconductors, with some uncertain parameters that require further experimental confirmation.

Table 1. Obtained parameters of T c , A, α, E F,HFL , E F,holo , E F,holo,α , T F , and J for BFCA samples, YBCO, and NBCO films. The T F of YBCO is taken from [38].
Effects of Rubber Size on the Cracking Resistance of Rubberized Mortars
This study investigated the cracking resistance of rubberized cement-based mortars. Three rubber particle sizes were used: Rubber A (major particle size 2–4 mm), Rubber B (major particle size 1–3 mm), and Rubber C (major particle size 0–2 mm). The traditional restrained ring shrinkage test (RRST), a new restrained squared eccentric ring shrinkage test (RSERST), mechanical tests, and scanning electron microscopy (SEM) tests were conducted. Results showed that the cracking inhibitory effect of Rubber B was the highest among the three rubber particle sizes. SEM results revealed that the particle size of the rubber does not strongly affect the ITZ (interfacial transition zone) between rubber and cement paste. The strength differences among the three types of rubberized mortar arise mainly because the specific surface area increases as the rubber size decreases, which leads to more ITZ regions and pore structures. Our study verified that RSERST can predict the cracking position and shorten the test period. Compared with RRST, RSERST can also increase the degree of restriction. K R is defined as the intensification factor of the RSERST restriction degree; the average value of K R is 1.17.
Introduction
Cracking under restrained shrinkage is a common cause of distress in concrete walls, slabs, and pavements [1]. Cracks in turn lead to leaks, corrosion of rebars, freeze-thaw damage, and other durability issues [2][3][4], and cement-based materials are brittle and sensitive to shrinkage cracking. For cement-based composites, when the shrinkage of the material is restrained by steel bars or formwork, tensile stress develops in the material. The development of tensile stress can then result in early-age cracking if the tensile stress is higher than the tensile strength [5]. A major contribution of incorporating rubber wastes into cement-based materials is the improvement of the flexibility, toughness, and fatigue resistance of the material [6][7][8][9], although the compressive strength of the cement-based material is reduced [10,11]. The use of rubber aggregates is also a suitable solution to improve the cracking resistance of cement-based composites [11][12][13][14]. Recent research [15] shows that rubberized cement-based material is a promising pavement material owing to its excellent cracking resistance.
However, rubber particles exist in different sizes that affect rubberized cement-based materials to varying degrees. Sukontasukkul [16] investigated the sound and thermal properties of rubberized concrete with two different particle sizes. Sukontasukkul and Tiamlom [17] demonstrated that various properties of rubberized concrete, such as expansion and shrinkage, depend strongly on the size of the rubber particles.
Materials and Mix Proportions
The cement employed in this test was ordinary Portland cement of Chinese grade 42.5, Camel brand, produced in Tianjin (China). The physical properties of the materials used are listed in Table 1. The chemical composition of the cement is listed in Table 2. The sand used was local river sand, and tap water was utilized. All of the rubber aggregates were obtained from mechanically ground waste tires. The chemical composition of the crumb rubber is listed in Table 3. Three types of rubber particles were used in this work and were designated as Rubbers A, B, and C. The analyses of the aggregate sizes were carried out using the sieve method. The particle size distributions of the fine aggregates are provided in Figure 1. The mix proportions of the rubberized mortars are presented in Table 4.
RSERST
Dimensions and specimens of RSERST are shown in Figures 2 and 3. Mortar specimens were stripped from the steel molds after 1 day, and the upper surface of each RSERST specimen was covered with waterproof silicone rubber. The specimens were then cured naturally; the temperature of the curing room was 20 ± 2 °C, and the relative humidity was 50% ± 5%. The strain values were measured every 15 min by three strain gauges pasted at point A, as shown in Figure 2. A total of 21 RSERST specimens were tested.
RRST
To quantify the cracking resistance of each mortar mixture and to compare with the RSERST results, we applied the RRST in accordance with ASTM C1581-04, as shown in Figures 4 and 5. The strain values were measured every 15 min using four strain gauges bonded at the middle of the inner surface of the steel ring. A total of 21 specimens were cast, and average strain values were calculated. The development of strain within the inner surface of the steel ring can be transformed into the maximum circumferential tensile stress of the mortar, which occurs at the interface of the mortar and the steel, through the calculation diagram shown in Figure 6 and the following equations:

P int = −ε ST E Steel (R OS 2 − R IS 2 )/(2R OS 2 ), (1)

σ r = P int [R IM 2 /(R OM 2 − R IM 2 )](1 + R OM 2 /r 2 ), (2)

where P int is the fictitious interface pressure, σ r is the circumferential tensile stress in the mortar ring at any point along the radius r (maximum at r = R IM ), R IM is the inner radius of the mortar ring, R OM is the outer radius of the mortar ring, R IS is the inner radius of the steel ring, R OS is the outer radius of the steel ring, ε ST is the strain in the steel ring, and E Steel is the modulus of elasticity of the steel.
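As an illustration of these relations, the conversion from a measured steel-ring strain to the maximum tensile stress can be scripted as follows; the ring radii and strain value used here are assumed placeholders, not the dimensions of the molds or measurements of this study:

```python
E_steel = 2.0e11            # modulus of elasticity of the steel ring, Pa
R_IS, R_OS = 0.140, 0.165   # inner/outer radius of the steel ring, m (assumed)
R_IM, R_OM = 0.165, 0.2025  # inner/outer radius of the mortar ring, m (assumed)

def max_tensile_stress(eps_st):
    """Maximum hoop stress in the mortar ring (at r = R_IM) from the measured
    steel strain eps_st, which is negative (compressive) before cracking."""
    p_int = -eps_st * E_steel * (R_OS**2 - R_IS**2) / (2.0 * R_OS**2)  # eq. (1)
    return p_int * (R_IM**2 + R_OM**2) / (R_OM**2 - R_IM**2)          # eq. (2) at r = R_IM

# Example: a compressive steel strain of -60 microstrain
print(f"sigma_max ~ {max_tensile_stress(-60e-6) / 1e6:.1f} MPa")
```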
Mechanical Test
Mortar specimens with dimensions of 70.7 × 70.7 × 70.7 mm were cast for compressive strength testing in accordance with JGJ/T70-2009 [26]. A total of 84 specimens were prepared for the compressive strength test, and the compressive strengths were measured on day 28. Statistical analysis was conducted on the basis of the compressive strength for quality assessment. For flexural strength testing, specimens with dimensions of 40 × 40 × 160 mm were cast according to GB/T 17671-1999 [27], and the specimens were examined on days 1, 3, 7, and 28. A total of 84 specimens were cast for the flexural strength test. The specimens were cured at 20 ± 2 °C and 40% ± 5% relative humidity, the same conditions as in RRST and RSERST.
SEM Test
The interfacial transition zones (ITZs) of the rubber samples and cement paste were observed under a field emission SEM 1530VP at the Institute of Oceanology, Chinese Academy of Sciences, Qingdao, China. The mortar samples, of 10 × 10 × 10 mm dimensions, were polished, cleaned, coated with gold, and evacuated prior to observation.
RSERST Results
The cracks occurred at the thinnest portion of the RSERST specimen, as shown in Figure 7, which verified that RSERST can predict the cracking position. With the cracking position known in advance, observation of the cracks was more convenient. The strains obtained from the steel ring of RSERST are shown in Figure 8. The release of strain on the curves marks the cracking time of the mortar. In terms of size, Rubber B was more effective in preventing cracking than Rubbers A and C. As for rubber content, the time to cracking increased as the rubber content increased.

RRST Results
Figure 9 shows the plots of the strain obtained from the steel ring by RRST as a function of time, attained using a data acquisition system. The release of strain on the curves marks the cracking time of the mortar [28,29]. The results clearly demonstrate that the rubber aggregates were beneficial in delaying restrained shrinkage cracking, a finding similar to that of a previous study [12]. With the same rubber contents, the cracking inhibition effect of Rubber B was the highest among those of the three rubber samples tested. The results of RRST are consistent with those of RSERST.

Mechanical Test Results
Table 5 shows the compressive strengths of the seven mixed mortars. The compressive strength decreases as the rubber size decreases or the rubber content increases. The coefficient of variation of the compressive strength is often used for quality control. Day [30] suggested that, for reasonable quality control, the coefficient of variation should generally be between 5% and 10%. Swamy [31] proposed that the limit for fine quality control is 15%. The largest coefficient of variation for the seven mixes is 10.9 for MRC200, which is slightly larger than 10% but much lower than 15%. Therefore, the mortars can be regarded as having good quality.
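The quality-control check based on the coefficient of variation is straightforward to script; the following is a minimal sketch using invented strength values rather than the Table 5 data:

```python
import statistics

# Hypothetical 28-day cube strengths for one mix, MPa (not the Table 5 data)
strengths = [33.5, 29.0, 31.2, 27.6, 32.8, 30.1]

cov = 100.0 * statistics.stdev(strengths) / statistics.mean(strengths)
quality = "fine quality (Swamy limit)" if cov <= 15.0 else "poor control"
band = "within the 5-10% band of Day" if 5.0 <= cov <= 10.0 else "outside the 5-10% band"
print(f"CoV = {cov:.1f}%: {quality}, {band}")
```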
Table 6 shows the development of the flexural strengths of the mixed mortars. In previous studies [4,26], both properties decreased when crumb rubber was incorporated into the mixes. The decline in compressive and flexural strengths was significant when crumb rubber with small particle sizes was used. The strength reductions may be attributed to the combination of two effects: (1) lower stiffness of the rubber than of the sand and (2) an increase in the amount of ITZs generated in the mortar with an increase in rubber content or a reduction in rubber size. The specific surface area increased as the rubber size decreased, which eventually resulted in the reduction in strength when small crumb rubber particles were utilized.
SEM Results for the ITZ
The SEM images of the fine aggregates and cement matrix interfaces are shown in Figure 10. Only in the ITZ of the sand and cement paste, as shown in Figure 10a, was the main gap not obvious. In the ITZs of Rubber A, B, C and the cement paste, minor cracks grew wider and enlarged into a main gap with a large number of pore structures. It can therefore be concluded that the bonding interface between sand and cement is better than that between rubber and cement. The addition of rubber could reduce the strength of the mortar mainly in two aspects: (1) rubber particles are softer than sand, and (2) rubber particles could introduce pores into the mortar, thereby reducing the strength of the cement-based materials.
From Figure 10b-d, it can be concluded that the particle size of the rubber does not greatly affect the thickness of the ITZ, because rubber particles of different sizes are all produced by mechanical grinding and the production process is the same. For rubber of equal volume, the surface area of small rubber particles is larger than that of large particles. Therefore, the strength differences among the three types of rubberized mortar arise mainly because the specific surface area increases as the rubber size decreases, which leads to more ITZ regions and pore structures. Consequently, the basic mechanical properties of the MRC200 group, with the smallest rubber particles and the largest rubber content, are the worst.
Comparison of RSERST and RRST
The cracking times of the RRST and RSERST specimens are shown in Table 7. The results of RSERST are in agreement with those of RRST, and the order of cracking in RSERST consistently matches that in RRST. Both results show that 0.5M0 cracked first, followed by 0.5MRA100, 0.5MRC100, 0.5MRB100, 0.5MRA200 and 0.5MRC200; 0.5MRB200 cracked last. Compared with RRST, RSERST can shorten the test period, and the longer the period to cracking, the more time is saved by RSERST. Figures 8 and 9 also demonstrate that the strains of the seven mixes obtained from RSERST were larger than those obtained from RRST, which possibly resulted from the stress concentration of RSERST. Figure 11 shows a comparison of the development of the restraint circumferential stress and the flexural tensile strength. The mortar cracks when the corresponding hoop constraint stress is greater than the tensile strength. Rubber can delay mortar cracking because the strength development of rubber-containing mortars is quicker than the development of the hoop constraint stress in the same sample. With the same rubber contents, the cracking inhibition of Rubber B was better than that of Rubbers A and C.
Intensification Factor of RSERST Restriction Degree K_R
We define K_R as the intensification factor of the RSERST restriction degree, which is the ratio of the maximum constraint stress of RSERST to that of RRST with the same materials and test conditions, as shown in Equation (4):

K_R = σ_RSERST / σ_RRST (4)
where σ_RSERST is the maximum constraint stress of the RSERST specimen, and σ_RRST is the maximum constraint stress of the RRST specimen. When the RSERST specimen cracks, σ_RSERST equals the tensile strength of the mortar. σ_RRST, taken at the same time, can be calculated through Equation (3), as shown in Table 8. Then, K_R can be calculated using Equation (4). The K_R values of the mortars at the cracking time of the RSERST specimen are shown in Table 8. We can conclude that K_R of the mortars is >1, with an average of 1.17, which indicates that the eccentricity of RSERST increases the restriction degree of the mortars compared with that of RRST.
Conclusions
The cracking properties of rubberized mortars with rubber of different sizes and contents were systematically investigated, and the restriction degree of RSERST was calculated. Based on our results, the following conclusions can be drawn:
1. RSERST can predict the cracking position and shorten the test period, and the restriction degree is higher in RSERST than in RRST. The average intensification factor is K_R = 1.17.
2. Both RRST and RSERST revealed that the addition of rubber can delay cracking. Both the content and the size of the rubber contribute to the cracking resistance of rubberized mortars. With rubber of equal content, the cracking inhibitory effect of Rubber B is higher than that of Rubbers A and C.
3. The bonding interface between sand and cement is better than that between rubber and cement. The particle size of the rubber does not greatly affect the ITZ between rubber and cement paste. The strength differences among the three types of rubberized mortar arise mainly because the specific surface area increases as the rubber size decreases, which leads to more ITZ regions and pore structures.
4. The addition of rubber inhibits the development of mortar tensile strength. Rubber particles of a smaller size introduce more additional pores, leading to a more pronounced reduction effect. However, because rubber is a soft filler, a smaller particle size gives a more uniform rubber distribution, leading to a better cracking inhibition effect. These two effects of rubber particle size are opposed; therefore, Rubber B, which is of medium size, performed best in cracking inhibition. | 2019-09-28T13:02:29.398Z | 2019-09-25T00:00:00.000 | {
"year": 2019,
"sha1": "4a36bafcc506719e277d4d8c2a76a95e2f2bfaaa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/12/19/3132/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7678bba61900ec395c04af406166307af05d6a8a",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
246556962 | pes2o/s2orc | v3-fos-license | Lymphocyte percentage as a valuable predictor of prognosis in lung cancer
Abstract Lymphocytes and neutrophils are involved in the immune response against cancer. This study aimed to investigate the relationship between lymphocyte percentage/neutrophil percentage and the clinical characteristics of lung cancer patients, and to explore whether they could act as valuable predictors to improve the prediction of lung cancer prognosis. A total of 1312 patients were eligible to be recruited. Lymphocyte percentage and neutrophil percentage were classified based on their reference ranges. Survival curves were determined using the Kaplan–Meier method, and univariate and multivariate Cox regression analyses were performed to identify the significant predictors. Decision curve analysis was used to evaluate the clinical benefit. The results of both the training and validation cohorts indicated that lymphocyte percentage exhibited a high correlation with the clinical characteristics and metastasis of lung cancer patients. Both lymphocyte percentage and neutrophil percentage were closely associated with survival status (all p < 0.0001). Low lymphocyte percentage could act as an indicator of poor prognosis, and it offered a higher clinical benefit when combined with the clinical characteristic model. Our findings suggested that pretreatment lymphocyte percentage served as a reliable predictor of lung cancer prognosis, and it was also an accurate response indicator in lung adenocarcinoma and advanced lung cancer. Measurement of lymphocyte percentage improved the clinical utility of patient characteristics in predicting mortality of lung cancer patients.
However, the main limitation of LDCT screening is generation of false-positive results, since it is unclear whether all lesions detected in asymptomatic participants will develop significant symptoms, and affect long-term outcomes, suggesting that LDCT may be potentially harmful in large-scale screening programmes. 7,8 Thus, more effective and low-cost strategies need to be developed to consider patient acceptability, and assess the prognosis of lung cancer patients.
Circulating biomarkers in plasma and serum, which usually appear prior to imaging changes, can serve as indicators of tumour progression and predictors of prognosis. 9,10 Identifying reliable markers to better select patients for currently available and upcoming approaches, such as immunotherapy, will greatly assist clinical decision-making. A variety of biomarkers, including carcinoembryonic antigen (CEA), cytokeratin 19 fragments (CYFRA 21-1), carbohydrate antigen (CA)125, CA199 and lactate dehydrogenase (LDH), have been identified to be associated with lung cancer prognosis. [11][12][13][14][15] Inflammation is one of the hallmarks of cancer and plays a pivotal role in the modulation of the tumour microenvironment. It can also highly influence tumorigenesis and tumour progression. 16,17 Different cells are known to be involved in the immune response against cancer, making the process dynamic and balanced. 18 Lymphocytes and neutrophils are easy to measure and may provide a more convenient strategy for the study of cancer-related inflammation. The neutrophil-to-lymphocyte ratio (NLR) has been evaluated in a variety of cancers, but its prognostic role remains controversial, which may explain why it has not been incorporated into clinical practice. [19][20][21][22] Among other haematological parameters, a previous study demonstrated that preoperative lymphocyte count is associated with node-negative non-small-cell lung cancer (NSCLC) prognosis. 23 An elevated neutrophil count has been shown to be a predictor of poor survival in metastatic melanoma. 24 Overall changes in lymphocytes and neutrophils with regard to inflammation and the immune state may be expressed as the lymphocyte percentage (LY%) and neutrophil percentage (NEUT%).
Peripheral LY% reflects leukocytosis more directly than NLR does. A relative decrease in lymphocytes diminishes the immune response and increases the risk of cancer, and LY% has been reported to predict survival more accurately than the peripheral lymphocyte count in colorectal cancer. 25 However, the prognostic values of LY% and NEUT% have rarely been examined in large cohorts of patients with lung cancer.
In this study, a retrospective analysis was performed to investigate the relationship between LY%/NEUT% and clinical characteristics of lung cancer patients, and to evaluate whether LY% and NEUT% could be used for improving prediction of patient outcome.
| Ethics statement
This study was approved by the Medical Ethics Committee and Institutional Review Board of West China Hospital. Informed consent was obtained from all patients before the study. All methods used in this study were performed following the approved protocols.
| Patients
This study included 1312 patients diagnosed with lung cancer at West China Hospital from 2008 to 2014. On account of differences in the levels of haematological indicators, 270 patients were excluded in advance due to preoperative treatment or a history of cancer, and 38 patients were excluded because of insufficient survival data or because information on LY% or NEUT% in peripheral blood was not available (Figure 1). All clinical information was extracted from the medical records after lung cancer was confirmed by biopsy. Information regarding metastasis was obtained using whole-body CT scan, bone scan, lymph node biopsy and fibreoptic bronchoscopy. Survival status was determined on the last follow-up day, and the overall survival time was defined as the length of time between the lung cancer confirmation date and the date of death or last follow-up, which was ascertained by visits or telephone inquiries. LY% was defined as the percentage of lymphocyte count to white blood cell count, and NEUT% was defined as the percentage of neutrophil count to white blood cell count. In several analyses of cancer prognosis, the cut-off value of LY% was defined as 20%, 28,29 and its normal level was considered to be 20%-50% in a study of lung cancer, 30 which was in line with the clinical criteria of West China Hospital; based on the diagnostic criteria and clinical experience, the reference range of NEUT% was defined as 40%-75%.
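As a minimal sketch (not from the study itself), the LY% and NEUT% definitions and reference-range classification above could be expressed as follows; the function names and example counts are hypothetical.

```python
def percentages(lymphocytes, neutrophils, wbc):
    """Return LY% and NEUT% as percentages of the white blood cell count."""
    ly_pct = lymphocytes / wbc * 100.0
    neut_pct = neutrophils / wbc * 100.0
    return ly_pct, neut_pct

def classify(ly_pct, neut_pct):
    """Classify against the reference ranges used in the study:
    LY% 20-50%, NEUT% 40-75%."""
    ly_group = "low" if ly_pct < 20 else ("normal" if ly_pct <= 50 else "high")
    neut_group = "low" if neut_pct < 40 else ("normal" if neut_pct <= 75 else "high")
    return ly_group, neut_group

# Hypothetical absolute counts (10^9 cells/L)
ly_pct, neut_pct = percentages(lymphocytes=1.1, neutrophils=6.3, wbc=8.0)
print(classify(ly_pct, neut_pct))  # -> ('low', 'high')
```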
| Statistical analysis
Continuous variables were presented as median (range). Categorical variables were presented as percentage (%). The chi-square test was performed to determine the statistical significance of categorical data. For survival analysis, the log-rank test was used, and univariate and multivariate Cox regression analyses were performed. To measure the effects of variables on survival, statistical significance was expressed as the hazard ratio (HR) with a 95% confidence interval (CI). Survival curves were constructed using the Kaplan-Meier method. With regard to clinical utility, the net benefit was measured by decision curve analysis (DCA).
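The survival workflow described above can be sketched with the open-source lifelines package; this is an illustrative reconstruction, not the software used in the study, and the synthetic data, column names and net-benefit helper are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
# Synthetic cohort: ly_low flags LY% below the 20% cut-off
ly_low = rng.integers(0, 2, n)
time = rng.exponential(scale=np.where(ly_low == 1, 20, 35), size=n)  # months
event = rng.integers(0, 2, n)  # 1 = death observed, 0 = censored
df = pd.DataFrame({"time": time, "event": event, "ly_low": ly_low})

# Kaplan-Meier curves per LY% group, with a log-rank comparison
kmf = KaplanMeierFitter()
for flag, grp in df.groupby("ly_low"):
    kmf.fit(grp["time"], grp["event"], label=f"ly_low={flag}")
res = logrank_test(df.loc[df.ly_low == 1, "time"], df.loc[df.ly_low == 0, "time"],
                   df.loc[df.ly_low == 1, "event"], df.loc[df.ly_low == 0, "event"])
print("log-rank p =", res.p_value)

# Multivariate Cox regression: hazard ratios with 95% CIs
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()

def net_benefit(tp, fp, n, pt):
    """Decision-curve net benefit at threshold probability pt:
    TP/n - FP/n * pt / (1 - pt)."""
    return tp / n - fp / n * pt / (1 - pt)
```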
| Patient Characteristics
A total of 1312 lung cancer patients were randomly divided into two cohorts (Table S1).
| Correlation of LY% and NEUT% with clinical characteristics in all lung cancer patients
The association between LY% and the clinicopathologic characteristics of lung cancer patients was investigated (Table 1). In the training cohort, 365 cases were identified to be within the reference range (Table S2).
| Correlation of LY% and NEUT% with clinical characteristics in different histological subtypes
For the purpose of treatment, lung cancer was classified as SCLC and NSCLC; ADC and SCC accounted for more than 80% of NSCLC cases. 32,33 Hence, classification analyses of LY% and NEUT% in ADC, SCC and SCLC were performed (Tables S3 and S4).
Figure 2. Kaplan-Meier curves for overall survival according to LY% (A) and NEUT% (B) in training and validation cohorts of all lung cancer patients. ****p < 0.0001. LY%: lymphocyte percentage; NEUT%: neutrophil percentage.
| LY% and NEUT% were associated with overall survival of lung cancer
The overall survival time of patients in the training and validation cohorts was evaluated using Kaplan-Meier survival curves. As shown in Figure 2, low LY% was strongly correlated with poor survival status (both cohorts, p < 0.0001), and higher NEUT% was associated with worse overall survival (both cohorts, p < 0.0001).
After stratification based on histology, significant differences were found in each subtype (Figure 3). Patients with lower LY% ex-
| LY% could serve as a valuable predictor of prognosis in lung cancer
Univariate and multivariate Cox regression models were introduced to identify prognostic predictors of lung cancer patients. The univariate analysis revealed that aberrant levels of LY% and NEUT% conferred unfavourable prognosis (both p = 0.000). Sex, stage, smoking status, differentiation and metastasis were associated with prognosis in all lung cancer patients (Figure 5A). A multivariate regression analysis was conducted for the 8 variables with statistically significant differences (p < 0.1) in the univariate analysis. The HR increased to 1.550 (95% CI: 1.332-1.804, p = 0.000) in the low LY% group, compared with the reference, suggesting that low LY% could serve as an important predictor of poor prognosis for lung cancer patients.
Moreover, age older than 60 years (p = 0.018) and advanced stage were also significant in the multivariate model. To ensure the reliability of the results, patients in this study were randomly assigned to either the training or the validation cohort, and the relationship between LY% and clinicopathological factors was investigated first. Apart from some slight inconsistencies regarding age and metastatic sites, the results of the two cohorts illustrated that LY% was strongly correlated with the other clinical parameters, as well as with bone, liver and pleural metastasis, in all lung cancer patients.
| Clinical utility of LY% integration model in lung cancer prognosis
Survival curves showed that low LY% had an obvious correlation with unfavourable survival status, and based on these findings, a multivariate Cox regression analysis was performed to evaluate the prognostic value of LY%; it was then confirmed to be a significant predictor of prognosis in patients with lung cancer. In addition, the incorporation of LY% resulted in better predictive performance of the clinical characteristic model on patient outcome. Neutrophils play a complex role in inflammation within the tumour, and an elevated neutrophil count has been demonstrated to have prognostic value in NSCLC. 44 In this study, the role of NEUT% in lung cancer was examined as well. In spite of its relevance to clinical characteristics and survival outcomes, it could not predict lung cancer prognosis in the multivariate analysis.
Stratification analyses were also performed, and it was interesting to note that NEUT% could serve as a prognostic factor in both SCLC and non-metastatic patients. Regarding the ADC and SCC subtypes, as well as the adverse characteristics of advanced stage, undifferentiation and metastasis, LY% remained the more valuable predictor.
The current study showed that the peripheral lymphocyte percentage was significantly associated with the prognosis of lung cancer patients. The lymphocyte percentage has the potential to be a surrogate indicator of disease outcome and a stratification factor in clinical trials. Although the results were derived based on the reference range of our clinical criteria, using the appropriate cut-off value for the corresponding population is the best course of action. A ROC analysis might be useful to determine the optimal cut-off level.
Hence, further studies are required before it can be established as a validated prognostic marker. NLR is a well-established predictor for patients with malignancy, and it is largely determined by LY%. In a subsequent analysis, the prognostic prediction potential of NLR and LY% will be compared. In addition, we aim to determine whether these two indicators can be combined into a more effective index to reflect the prognosis of patients with lung cancer. This study, conducted in a relatively large number of patients, obtained detailed clinical information to allow for extensive adjustment for miscellaneous factors. However, data on the dynamic changes in lymphocyte and neutrophil percentages during tumour progression, and around the treatment period, were not available.
Furthermore, the mechanism underlying the complex association between inflammatory cells and the tumour microenvironment has not been established, but the imbalance of the lymphocyte and neutrophil ratio may provide insight into tumour progression and the prognosis of individuals with lung cancer. We believe that the interaction or regulation of lymphocytes and neutrophils in lung cancer is worth studying and considering.
Evidence from the study supported the idea that pretreatment lymphocyte percentage effectively predicted the prognosis of lung cancer patients, and it was also an accurate response indicator in ADC and advanced lung cancer. Based on the results obtained, the integration of lymphocyte percentage with clinical characteristic model benefited the prognostic prediction of the disease.
CONFLICT OF INTEREST
The authors confirm that there are no conflicts of interest. | 2022-02-06T06:23:08.192Z | 2022-02-05T00:00:00.000 | {
"year": 2022,
"sha1": "780b57e618b008cde8e781cdc0b5345584e7575c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.17214",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "abb7fafd928975d21ad873b9441f97ed18010050",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210988699 | pes2o/s2orc | v3-fos-license | Fluoroscopy-Guided Percutaneous Sacroplasty for Painful Metastases at the Sacral Ala
Objective Percutaneous sacroplasty (PSP) is widely used in the clinic for osteoporotic sacral insufficiency fractures; however, few reports have described the safety and effectiveness of PSP for painful sacral metastases at the sacral ala under fluoroscopy alone. We aimed to evaluate the safety and efficacy of fluoroscopy-guided PSP for painful metastases at the sacral ala. Patients and Methods Thirty-five consecutive patients (mean age, 60.74 ± 12.74 years), with a total of 41 metastatic lesions at the sacral ala, were treated with PSP. The patients were followed up for periods ranging from 1 month to 30 months (average, 8.23 ± 6.75 months). The visual analog scale (VAS), Oswestry Disability Index (ODI), and Karnofsky Performance Scale (KPS) were used to evaluate pain, mobility, and quality of life before the procedure and at 3 days and 1, 3, 6, 12, and 18 months after the procedure. Results Technical success was achieved in all patients. The minimum follow-up duration was 1 month. The mean VAS scores declined significantly from 7.20 ± 0.93 before the procedure to 3.43 ± 1.38 by day 3 after the procedure, and were 3.13 ± 1.07 at 1 month, 3.17 ± 1.15 at 3 months, 2.91 ± 1.38 at 6 months, and 2.57 ± 1.51 at 12 months after the procedure (P < 0.001). After PSP, analgesic drug administration had been discontinued in 31 of 35 patients (88.57%). The ODI and KPS also changed after PSP, with significant differences between the baseline scores and those at each follow-up examination (P < 0.001). Extraosseous cement leakage occurred in 12 cases without any major clinical complications. Conclusion PSP is a safe and effective technique for the palliative treatment of painful metastases involving the sacral ala under fluoroscopic guidance alone. It can relieve pain, reduce disability, and improve function, and is associated with minimal complications.
Introduction
Percutaneous sacroplasty (PSP), which involves the placement of bone needles and the injection of bone cement into the sacrum, is a minimally invasive, image-guided procedure for pain control and stability restoration in cases with sacral insufficiency fractures (SIF) resulting from both osteoporosis and metastases. [1][2][3] Although PSP has been widely used in cases with osteoporotic insufficiency, 3-6 only a few reports have described the safety and effectiveness of PSP in painful sacral metastases. [7][8][9][10] Moreover, studies have not compared the effect of multiple approaches for metastases involving the specific sacral ala and, because the follow-up periods of the available studies have generally been short, little is known regarding the medium- and long-term improvement in pain and function after PSP in cases with metastases. In this retrospective study, we aimed to evaluate the efficacy and safety of fluoroscopy-guided PSP in the treatment of painful metastases involving the sacral ala with a relatively large sample and medium-term follow-up.
Study Design
This retrospective study was approved by the institutional review board of Shanghai Sixth People's Hospital East Campus, and written informed consent was obtained from all participants included in the study. From March 2016 to August 2018, patients with painful metastatic lesions of the sacral ala were recruited from our department for treatment with PSP. All patients were referred to our institution due to the persistence of pain that had not responded to conventional treatments such as analgesic drugs, chemotherapy, and radiotherapy. All patients had pathologically confirmed primary cancer and had undergone Computed Tomography (CT) and/or magnetic resonance imaging (MRI) examinations before the procedure to determine the size of the lesion and the part of the sacral segment involved, and to rule out other causes of back pain (such as degenerative facet disease). All patients had severe pain, without any neurological deficit related to the metastatic lesions of the sacral ala. Patients were eligible for inclusion in the study if they had life expectancy ≥3 months, were 18 years of age or older, and were willing to sign the consent form. Patients with pathological fractures and lesions involving the area of the sacral foramina were excluded. The primary tumors in these patients were located in the lung (n=14), thyroid (n=8), liver (n=6), prostate (n=3), biliary (n=2), breast (n=1), and colon (n=1). The number of treated lesions per patient ranged from 1 to 2; in particular, 82.86% (29/ 35) patients had 1 lesion each and 17.14% (6/35) had 2 lesions each, thus resulting in a total of 41 metastases in the cohort. In addition, 28 patients were treated for other metastatic localizations, including spinal metastases in 19 (54.29%) and pelvic lesions in 9 (25.71%). The baseline characteristics of the 35 patients and the results are summarized in Table 1.
PSP Procedures
All procedures were performed under fluoroscopic guidance alone with a biplane machine (GE, Innova IGS630, USA). Under conscious sedation, the patient was placed in the prone position. Under anteroposterior and lateral fluoroscopic guidance, a 13-gauge bone puncture needle (Cook Inc., Bloomington, IN, USA) was slowly hammered into the metastatic lesions at the sacral ala through the posterior approach, transiliac approach, or anterior-oblique approach. As the bone needle was advanced into the S1 sacral ala, continuous anteroposterior and lateral fluoroscopic guidance was used to confirm needle access between the sacroiliac joints, sacral foramina, and anterior margin of the sacral ala, whereas the orientation of the bevel edge was adjusted to avoid penetration of structures such as the anterior surface of the sacral ala and the S1 foramen. One or more needles were used if necessary. Thereafter, mixed bone cement (polymethyl methacrylate [PMMA]; Palacos V; Heraeus Medical GmBH, Germany) was injected into the metastatic lesion at the sacral ala. The injection process was monitored continuously under fluoroscopy in the anterior and lateral planes. Injection was stopped when substantial resistance was met or when the PMMA cement reached the margin of the sacroiliac joint or the posterior portion of the sacrum. The technical success of the procedure and any complications that occurred were also recorded. Immediately after PSP, non-contrast CT examination was performed in all patients to assess if there was cement leakage (Figure 1).
Clinical Outcome Evaluation and Data Collection
All patients underwent clinical examination by two of the authors before the procedure; 1 week after the procedure; 1, 3, 6, and 12 months after the procedure; and every 6 months thereafter until patient death. Data on technical success, PMMA volume injected, pain relief, functional outcomes, and complications were evaluated during follow-up consultations or at patient death. Technical success was defined as successful puncture of the sacral lesion with any approach, followed by PMMA injection without any major complications.
Major complications included accidental nerve root injury, cauda equina syndrome, pulmonary embolism, intestinal rupture, or perioperative mortality, whereas minor complications included postoperative urinary retention, wound hematoma, and infection. Pain was measured using the visual analog scale (VAS), where a score of 0 indicated "no pain" and a score of 10 indicated "worst pain ever." Pain relief was defined as a decrease in the VAS score by ≥3 points from the baseline score. The functional status of patients for walking, standing, and sleeping was measured using the Oswestry Disability Index (ODI). The functional outcomes were measured on a 100-point Karnofsky Performance Scale (KPS) to assess any changes in the quality of life. Data on the use of pain medication (narcotics or nonsteroidal anti-inflammatory drugs) before and after the intervention were evaluated. A decrease in dose or a shift to a lower level of the World Health Organization classification of analgesia was considered to represent a reduction in analgesic use.
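A minimal sketch of the pain-relief criterion defined above (a VAS decrease of at least 3 points from baseline); the function name is illustrative.

```python
def pain_relief(vas_before, vas_after):
    """Pain relief: VAS decrease of >= 3 points from the baseline score."""
    return (vas_before - vas_after) >= 3

assert pain_relief(7, 3)      # a 4-point drop counts as relief
assert not pain_relief(6, 4)  # a 2-point drop does not
```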
Statistical Analysis
All statistical analyses were performed using commercially available software (SPSS Version 16, SPSS Inc., Chicago, IL, USA). Data are expressed as means ± standard deviation. The paired t-test was used to compare the mean VAS, ODI, and KPS scores between the different study time points. A P value of ≤0.05 was considered statistically significant.
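As an illustration of the paired t-test used to compare scores between time points, a sketch with hypothetical VAS values and SciPy (an assumption, since the study used SPSS):

```python
from scipy import stats

# Hypothetical paired VAS scores for the same patients: pre-procedure vs. day 3
vas_pre = [7, 8, 6, 7, 9, 8, 7, 6, 8, 7]
vas_post = [3, 4, 2, 3, 5, 4, 3, 2, 4, 3]

t_stat, p_value = stats.ttest_rel(vas_pre, vas_post)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # P <= 0.05 -> statistically significant
```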
Results
Thirty-five consecutive patients (mean age: 60.74 ± 12.74 years, including 24 men and 11 women) with painful metastatic lesions at the sacral ala underwent PSP in our medical center. PSP was technically successful and well tolerated in all patients through three different approaches: the posterior approach (n=18), transiliac approach (n=7), or anterior-oblique approach (n=10). The mean amount of cement injected per lesion was 5.20 ± 1.55 mL (range, 2-8 mL). The mean number of puncture needles used during PSP was 1.49 ± 0.84 (range, 1-4) per lesion. No major complications were observed during the procedure. The only minor complication encountered was PMMA leakage, which was noted in 34.29% (12/35) of the patients. Leakages occurred into the sacral venous plexus (n=5), puncture path (n=2), anterior epidural space (n=2), or anterior sacral space (n=3), but were all asymptomatic and did not require any special treatment. In addition, there was no significant difference in PMMA leakage among the three approaches (P > 0.05).
The average follow-up duration was 8.23 ± 6.75 months (range, 1-30 months). The changes in the VAS, ODI, and KPS values following PSP are shown in Figure 2. Of the 35 patients, 31 (88.57%) reported pain relief and the other 4 (11.43%) experienced no obvious regression of pain immediately after the intervention. These four patients did not receive regular treatment, such as systematic chemotherapy or radiotherapy. The mean VAS scores declined significantly from 7.20 ± 0.93 before the procedure to 3.43 ± 1.38 by day 3 after the procedure, and were 3.13 ± 1.07 at 1 month, 3.17 ± 1.15 at 3 months, 2.91 ± 1.38 at 6 months, and 2.57 ± 1.51 at 12 months after the procedure; the scores remained low throughout the follow-up period. There was a significant difference between the pre-procedure VAS score and that at each study time point after the procedure (P < 0.001). As shown in Figure 2, the average ODI and KPS scores also changed after the procedure, with significant differences between the baseline scores and those at each follow-up examination (P < 0.001). In addition, no significant differences were observed in the VAS, ODI, and KPS values among the three approaches or among the three types of bone destruction (P > 0.05). Prior to the PSP procedure, all patients were prescribed analgesic drugs, such as strong opioid analgesics (n=12), weak opioid analgesics (n=15), and nonsteroidal anti-inflammatory drugs (NSAIDs; n=8). After the PSP procedure, the administration of analgesic drugs was discontinued or reduced in 31 of 35 patients (88.57%). Among these 31 patients, post-procedural pain was controlled with strong opioid analgesics (n=1), weak opioid analgesics (n=4) or NSAIDs (n=12) in 17 patients; in the remaining 14 patients, no analgesic therapy was necessary after the procedure.
Discussion
Metastatic tumors are the most common malignant lesions occurring in the sacrum, especially in the sacral ala, and account for 1-7% of all spinal tumors. 11,12 The sacrum is a weight-bearing structure, and the sacral ala dissipates vertical axial forces from the lumbar to iliac region, thus aiding in spinal stability. Symptomatic sacral metastases usually manifest as debilitating local pain that can radiate into the buttocks, perineum, and posterior thigh. In addition to severe pain, sacral metastasis can also lead to pathological fractures and neurological deficits, which often limit the mobility, quality of life, and tolerance for further necessary cancer treatment in patients.
The currently available treatments for sacral metastases combine systemic and local therapies, including pain medication, radiotherapy, chemotherapy, and endovascular embolization in cases with highly vascular tumors. 13,14 When a localized painful lesion is identified, open surgery has limited application because it is often too invasive for a fragile patient. Radiotherapy is a good option but has certain limitations, such as a delayed effect and tissue tolerance. 15 It is estimated that at least 45% of the patients with bone metastases develop intractable pain due to the lack of sufficient treatment. 16 Therefore, researchers have sought to develop novel therapies to relieve pain and improve mobility.
PSP, analogous to percutaneous vertebroplasty (PVP), has been widely reported as a safe and effective option for managing osteoporotic sacral insufficiency fractures. [4][5][6] Moreover, case and case-series reports have indicated that PSP provides pain relief and mobility improvement in patients with sacral metastases, although the number of cases described is small. [7][8][9][10]13,17 In a multicenter study with 243 patients, including 204 with painful sacral insufficiency fractures and 39 with symptomatic sacral lesions, 24 patients with sacral metastases experienced remarkable and prompt pain relief. 1 Pereira et al described promising outcomes and safety data in the largest published sample of 42 patients with sacral metastases undergoing PSP. 2 In the present retrospective study with a relatively large sample, most patients experienced significant changes in the VAS, ODI, and KPS immediately and longitudinally after PSP, consistent with previously published reports. Collectively, these studies with relatively large samples appear to strongly indicate that PSP is an effective procedure for providing pain relief and function recovery in patients who are unresponsive to conservative management and those who are not candidates for surgery. In our study, four patients did not experience any apparent pain reduction. This finding may be related to the fact that these four patients were not under regular treatment, such as systematic chemotherapy or radiotherapy, which might have helped control cancer progression. Moreover, this finding suggests that successful immediate and long-term pain relief is achievable in patients with sacral metastasis who require regular anticancer therapy in addition to PSP.
Although PSP is a variation of PVP, the puncture and injection techniques of PSP completely differ from those of PVP due to the complex anatomy of the sacrum. The sacral ala, classified as zone I by Denis et al, 18 is adjacent to the sacral foramina at its internal side and to the sacroiliac joint at its external side. Therefore, the adjacent structures of the sacral ala and its convex course complicate needle positioning and cement injection. Moreover, the iliac bones of the pelvis prevent adequate visualization of the sacral ala and require lateral fluoroscopic evaluation. In addition, it can be difficult to identify anatomic landmarks in destructive bone under fluoroscopy.
However, there are still three primary approaches for PSP at the sacral ala: the posterior approach, the transiliac approach, and the anterior-oblique approach. 1,13,19 In particular, the posterior approach, including a long-axis approach and a short-axis approach, is the most commonly used under fluoroscopic or CT guidance. In the present study, all three approaches were used with 100% technical success. We also compared the safety and efficacy of the different approaches for metastases of the sacral ala and did not observe any differences, consistent with the findings of another study. 20 Although a combination of CT and fluoroscopic guidance may be the best option at present, our procedures were all performed under continuous fluoroscopy alone without any major complications. We primarily preferred fluoroscopic guidance due to the potential for real-time imaging during needle placement and cement injection. We believe that an experienced doctor with a thorough understanding of the radiologic anatomy of the sacrum can achieve both precise needle placement and real-time visualization of cement delivery under continuous anteroposterior and lateral fluoroscopy alone. Furthermore, sacrokyphoplasty has been described as an effective treatment in patients with sacral metastasis to reduce pain and the rate of cement leakage. 8 However, in the present study, we used sacro-vertebroplasty due to its lower cost and relatively similar clinical efficacy.
This study has certain limitations. First, it was conducted at a single center and was retrospective in nature. Second, the sample size was not sufficiently large. Third, we did not compare PSP with other therapeutic options such as surgical treatment or radiotherapy. Moreover, 28 patients were treated for other metastatic localizations, which may bias the effect of PSP.
In conclusion, PSP is an effective, safe, and minimally invasive procedure for the treatment of painful metastases of the sacral ala that are refractory to conservative treatment. This method can achieve a marked reduction in pain, as well as improvement in function and the quality of life. Nevertheless, large-scale prospective research is required to confirm our findings. | 2020-01-23T09:21:17.224Z | 2020-01-16T00:00:00.000 | {
"year": 2020,
"sha1": "38b4ee01a0731678a55f21a853df70404165e307",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=55452",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f240f051cb589d74e4618ea305673d1e247b872a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
103796612 | pes2o/s2orc | v3-fos-license | Hydrophilic cell-derived extracellular matrix as a niche to promote adhesion and differentiation of neural progenitor cells †
The natural extracellular matrix (ECM) offers a dynamic and intricate microenvironment, which serves as a structural support and regulates cell phenotype and function. Recently, ECM has been revealed as a favorable and biocompatible architecture for stem cell adhesion and growth in cellular therapy to treat various diseases. However, cell-derived ECM is rarely used as a culture substrate for anchorage-dependent cells, such as neural progenitor cells (NPCs), which have the potential to differentiate into basal forebrain cholinergic neurons (BFCNs) for Alzheimer's disease. Here, we report mouse embryonic fibroblast (MEF)-derived ECM, with an appropriately hydrophilic property (water contact angle of 66.8°), to mimic the neural niche for NPCs. In addition, MEF-derived ECM possesses a nanotopological surface and plentiful kinds of components, and exhibits excellent adhesion properties for anchoring NPCs. Compared with a laminin-coated plate, MEF-derived ECM promotes NPC proliferation and differentiation into BFCNs by ~1.6 fold and ~3.1 fold, respectively, consequently enhancing the production of acetylcholine by ~2.0 fold. This MEF-derived ECM could be a favorable cell culture carrier for NPC attachment, with great potential for applications in stem cell therapy for Alzheimer's disease.
Introduction
Alzheimer's disease is a very serious neurodegenerative disorder and is primarily characterized by progressive deficits in spatial learning and memory, which are related to the early substantial loss of basal forebrain cholinergic neurons (BFCNs) and the absence of cortical cholinergic input. 1 Developed primate and rodent models have demonstrated the roles of BFCNs in memory function, hippocampal neurogenesis, and functional plasticity of the cortex. 2 The transplantation of exogenous BFCNs to ameliorate memory deficits through stable engraftment of cells in the adult cortex has become a potential therapeutic strategy. 3 However, due to the limitations in acquiring sufficient BFCNs from precursor cells, the ability to promote the differentiation of stem cells into BFCNs would be a significant step toward a cell replacement therapy. 4 Recently, neural progenitor cell (NPC)-based therapy has emerged as a promising approach to restore the original BFCN function in the cortical and hippocampal regions for the treatment of Alzheimer's disease. 5 NPCs can proliferate and differentiate into BFCNs when they attach onto a matrix under the appropriate conditions, and thus, they can be considered anchorage-dependent cells. 6 Anchorage-dependent cells require a good cell carrier to spread, reside, proliferate, differentiate and maintain other cellular functions. In particular, stem cell functions require activation of intrinsic transcription and interaction with a specific extracellular microenvironment niche.
The extracellular matrix (ECM) is a natural cell-growth microenvironment, containing a variety of biological components secreted by the resident cells in tissues and organs. 7 It has been proved that ECM provides mechanical support and signaling cues to regulate cell survival, proliferation, differentiation and metastasis via the interaction of cells and the ECM. [8][9][10][11] Given these advantageous features, native tissue-derived ECM, mainly derived from allogeneic and xenogeneic tissues treated with chemical or thermal decellularization, has been utilized extensively toward promoting tissue engineering and regenerative medicine. 12 However, tissue-derived ECM has several limitations, such as limited donors, inflammatory reactions, pathogen-transferred disease and uncontrollable degradation. 13 Recently, electrostatic spinning, hydrogels and three-dimensional bio-printing technologies have been used to fabricate scaffolds with synthetic materials or a limited set of natural polymer blends, but these fail to adequately mimic the complex morphology and composition of natural ECM. 14 In addition, purified ECM components, including fibrin, laminin, fibronectin, collagen and hyaluronan etc., serve important roles to mimic the native microenvironment for the engraftment of anchorage-dependent cells and have been employed in various types of models in vitro and in vivo. 6 Moreover, natural ECM can also promote tissue-specific stem cell differentiation, whereas individual components, such as laminin and collagen, have no significant effects on cell proliferation and differentiation. 15 Therefore, individual ECM components are not enough to build an ideal niche because different components act for different functions. 16 To more fully replicate the biological molecules and surface characteristics of ECM in natural tissues, a great deal of research has attempted to develop an approach that involves the fabrication of cell-derived ECM. Similar to tissue-derived ECM, cell-derived ECM also has highly advantageous biophysical and biochemical properties. More importantly, cell-derived ECM can eliminate the possibility of inflammatory reactions and pathogen transfer. 17 Therefore, cell-derived ECM has been extensively applied to tissue engineering and regenerative medicine, instead of tissue-derived ECM. 13,[18][19][20] Meanwhile, cell-derived ECM as a cell attachment carrier has gained increasing interest in in vitro applications. In previous studies, cell-derived ECM was mainly used to culture dorsal root ganglion neurons, 20 mesenchymal stem cells 21 and Schwann cells, 16 which are anchorage-independent cells with less strict matrix requirements, since those cells can adhere and grow on commercialized tissue culture polystyrenes without any modifications. To date, there are scarcely any reports regarding the use of cell-derived ECM as a niche for anchorage-dependent cells, such as neural stem/progenitor cells.
Here, MEF (mouse embryonic fibroblast) cells were selected to fabricate a cell-derived ECM model, which was evaluated as a neural niche for cultured NPCs. Importantly, this MEF-derived ECM exhibits excellent adhesion performance for NPC anchoring and residency, while promoting NPC proliferation and differentiation into BFCNs. Our results reveal that MEF-derived ECM could be a promising candidate for a neural niche, providing a novel technique for developing cell-derived ECM for NPC application in the clinical therapy of Alzheimer's disease.
Materials
Gelatin, L-ascorbic acid, sodium ascorbate, deoxycholate, DNase, Triton X-100 and DAPI were obtained from Sigma-Aldrich, USA. Glutaraldehyde and tertiary butanol were ordered from Alfa Aesar, USA. DMEM, FBS, penicillin/streptomycin, PBS, TRIzol reagent, the AmplexRed Acetylcholine/Acetylcholinesterase Detection Kit, the EdU Labeling/Detection Kit and the ECL Western blot Substrate Kit were obtained from Thermo Fisher, USA. Mouse anti-fibronectin antibody, rabbit anti-laminin antibody, mouse anti-nestin antibody, rabbit anti-vimentin antibody, mouse anti-Map-2 antibody, goat anti-ChAT antibody, rabbit anti-p75 antibody, rabbit anti-VAChT antibody and secondary antibodies were supplied by Abcam, USA. The First Strand cDNA Synthesis Kit and SYBR Green Real-time PCR Master Mix were purchased from Applied Biosystems, USA.
Formation of MEF-derived ECM
Tissue culture plates were pre-coated with 0.5% gelatin overnight at 37 °C and exposed to ultraviolet rays for 2 h to enhance crosslinking. MEFs were first cultured in DMEM with 10% FBS for 3-5 d until 100% confluence. The cells were then cultured in DMEM with 20% FBS, 50 µg mL−1 L-ascorbic acid and 100 µg mL−1 sodium ascorbate for 14 d to stimulate ECM formation, and the medium was routinely changed every 48 h. 22 For decellularization, cells were washed with deionized water at 37 °C for 20 min and then dried, followed by treatments with 0.5% Triton X-100 plus 1% deoxycholate for 10 min and with 100 U mL−1 DNase for 30 min at 37 °C. After washing three times with PBS, the MEF-derived ECM was stored in PBS containing 100 U mL−1 penicillin/streptomycin at 4 °C.
Scanning electron microscopy imaging
The samples were washed three times with PBS and fixed with 2.5% glutaraldehyde for 30 min at 4 °C, followed by post-fixation in 1% osmium tetroxide. The samples were then dehydrated in gradient concentrations of ethanol (50% to 100%) for 10 min, followed by replacement with 100% tertiary butanol, and finally lyophilized using a vacuum drier. After the samples were coated with gold, they were observed using a scanning electron microscope (Quanta 400 FEG, FEI, USA).
Immunofluorescence staining
The samples were fixed with 4% paraformaldehyde for 20 min at room temperature, washed with PBS, and then blocked with 5% normal goat serum for 1 h. The samples were incubated with the relevant primary antibodies overnight at 4 °C. After washing with PBS, the samples were incubated with the appropriate fluorescently labelled secondary antibodies for 2 h in the dark at room temperature, followed by nuclear staining with 5 µg mL−1 DAPI. Fluorescence images were collected using a fluorescence microscope (Nikon, Japan).
Wettability assessment
The interaction force between a water droplet and the MEF-derived ECM interface was assessed with a high-sensitivity microelectromechanical balance system (Dataphysics DCAT11, Germany), and the droplet contact angle was measured using the captive droplet method (Dataphysics OCA20). 23 The volume of the droplet was approximately 3 µL for each test. Glass without any modifications and a laminin-coated surface were used as controls. All the experiments were repeated more than five times.
EdU/Hoechst 33342 double staining
Based on the instructions of the EdU Labelling/Detection Kit, NPCs were plated into a 24-well plate containing MEF-derived ECM or laminin-coated wells and cultured for 7 d. The cells were then incubated in medium containing 10 µM EdU for an additional 24 h at 37 °C in 5% CO2. Subsequently, the NPCs were fixed with 4% paraformaldehyde for 20 min. After rinsing with PBS, the cells were incubated with 1× Apollo reaction buffer for 30 min in the dark, permeabilized with 0.5% Triton X-100 in PBS, and stained with 5 µg mL−1 Hoechst 33342 dye for 30 min. Images were obtained with a fluorescence microscope (Nikon, Japan). The percentage of EdU-positive cells was calculated from five random fields in each well (five wells per group).
Real-time polymerase chain reaction (RT-PCR)
After the NPCs were cultured on MEF-derived ECM or a laminin pre-coated plate for 24 h (for adhesion), 7 d (for proliferation) or 21 d (for differentiation), total RNA was extracted using TRIzol reagent. cDNA was synthesized from total RNA using a First Strand cDNA Synthesis Kit according to the manufacturer's instructions. The primer sequences for the genes are shown in Table S1.† SYBR Green Real-time PCR Master Mix was used to quantify mRNA expression according to the manufacturer's instructions. Reaction mixtures were incubated at 95 °C for 15 sec and 60 °C for 45 sec for 40 cycles. The 2^−ΔΔCt method was used to analyze the relative mRNA expression. β-Actin was used as an endogenous control.
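A compact sketch of the 2^−ΔΔCt calculation described above; the Ct values are hypothetical, with β-actin as the endogenous control.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^-ddCt (Livak) method."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to beta-actin
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene vs. beta-actin, ECM group vs. laminin group
fold = relative_expression(24.1, 17.0, 25.6, 17.2)
print(f"fold change = {fold:.2f}")  # > 1 means higher expression on the ECM
```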
Western blot assay
After NPCs were cultured on MEF-derived ECM or laminin for 24 h (for adhesion), 7 d (for proliferation) or 21 d (for differentiation), the cells were harvested with 0.25% trypsin in 0.03% EDTA and lysed in RIPA buffer. The collected proteins were loaded onto a 10% SDS-PAGE gel, separated by gel electrophoresis, and transferred onto a PVDF membrane. The membranes were blocked for 1 h and incubated with primary antibodies overnight. Subsequently, the membranes were incubated with HRP-conjugated secondary antibodies for 2 h. After washing with TBS-T, the membranes were reacted with the ECL western blot substrate before exposure. β-Actin was used as an endogenous control.
Residency and motion assay
The residency and motion of NPCs cultured on ECM or laminin pre-coated plates were assessed by monitoring cells for 24 h and taking pictures every 30 min using a live cell imaging system (Cytation 3, BioTek, USA). For the cell tracking analysis, each individual cell position (x, y) was tracked, and the cell tracking plots, accumulated distance, Euclidean distance and velocity were generated using Chemotaxis Tool freeware (ibidi, Germany). 24
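A rough sketch of how the motility metrics named above (accumulated distance, Euclidean distance and velocity) can be derived from tracked (x, y) positions; the 30-min sampling interval follows the protocol, while the track data and units are assumptions.

```python
import math

def track_metrics(track, dt_min=30.0):
    """Accumulated distance, Euclidean distance and mean velocity
    from a list of (x, y) positions sampled every dt_min minutes."""
    steps = [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]
    accumulated = sum(steps)                        # total path length
    euclidean = math.dist(track[0], track[-1])      # start-to-end displacement
    velocity = accumulated / (len(steps) * dt_min)  # e.g. um per minute
    return accumulated, euclidean, velocity

# Hypothetical track (um), one position every 30 min
track = [(0, 0), (3, 4), (6, 4), (6, 9)]
print(track_metrics(track))  # (13.0, 10.816..., 0.144...)
```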
Quantification of acetylcholine
After the NPCs were cultured on MEF-derived ECM or laminin with differentiation medium for 21 d, quantification of the ACh levels in the cultured neurons was performed with an AmplexRed Acetylcholine/Acetylcholinesterase Detection Kit and correlated with protein levels determined using the Protein Quantification Kit according to the manufacturer's manual.
Statistical analysis
All quantitative data are expressed as the mean ± SD. Statistical analysis was performed using a two-sample t-test with Origin Pro 8 software. *p < 0.05 was considered statistically significant; **p < 0.01 was considered highly significant. All experimental data shown represent experiments performed in triplicate.
Results and discussion
Morphology and composition of MEF-derived ECM

The detailed process of our experiment is presented in Scheme 1. Fibroblasts are able to synthesize large quantities of ECM biomolecules, including collagenous matrix and elastin networks. 25 MEFs were applied to fabricate the cell-derived ECM in this study (ESI, Fig. S1†), as they can be easily isolated and cultured up to passage 14 without a decrease in collagen synthesis or a reduction in the rate of growth. Stimulated with L-ascorbic acid and sodium ascorbate for 14 d, MEFs became embedded in the secreted ECM deposition and interweaved together (Fig. 1a and c). After decellularization, the cells were lysed, and most of the adhesive molecules and proteins were removed, thus clearly exposing the protein fibers. The remaining ECM consisted of a large number of interconnected filaments distributed into a network (Fig. 1b), and high-resolution observation showed that the post-decellularized ECM exhibited a filamentous, nanoscale (~80 nm in diameter), porous network with a fabric-style ultrastructural appearance (Fig. 1d).
The composition of the MEF-derived ECM was first examined by immunofluorescence staining, which showed that the ECM was abundant in structural proteins such as laminin (Fig. 2a and d) and fibronectin (Fig. 2b and e) before and after decellularization. Western blot analysis with β-actin as an internal control revealed that the amounts of laminin and fibronectin were maintained. However, β-actin, as a cytoskeletal protein, was absent after decellularization (Fig. 2g). To examine the presence of any nuclear contamination following decellularization, the DNA content was quantified with an assay kit as described previously. 26 The results of the DNA quantification indicated that almost all cellular DNA (>98%) was removed after decellularization (Fig. 2h). To further determine the composition of the ECM, the proteins were analyzed using nano-liquid chromatography tandem mass spectrometry (nLC-MS/MS) after the efficient decellularization. 10 The results indicated the reliable detection and relative quantification of 809 different proteins, and the 10 most abundant proteins are listed according to peptide counts (ESI, Table S2†). Among those 809 components, many of the proteins constituting the matrix may play key roles in maintaining the superior biocompatibility. For instance, collagen is the major structural protein in the ECM and imparts mechanical properties to natural tissues. 27 Meanwhile, collagen is a critical component for cell anchorage in the NPC niche, and the interaction of cells with collagen influences adhesion, residency, cell-cell communication, and cell survival. Another important component, Tenascin C (TNC), is an extracellular matrix glycoprotein that is highly expressed by NPCs located in the brain and spinal cord during development and in adults. Consistent with the dynamic interplay of factors within the NPC niche, the TNC-mediated alterations in growth factor responsiveness may be due, in part, to secondary alterations in heparan sulfate proteoglycans. 28
Wettability of MEF-derived ECM
Previous investigations have shown that materials with the best hydrophilicity are favorable for cell adhesion. 29 However, recent research has proposed that adhesion activity is related to the degree of hydrophilicity, primarily because surfaces with different hydrophilicities adsorb different types of proteins and molecules. It is generally accepted that hydrophobic surfaces adsorb more protein, whereas hydrophilic surfaces do not facilitate protein adsorption. The adsorption of proteins onto scaffold surfaces has been reported to be highest at a water contact angle of 60-80°. 30 In this work, the hydrophilicity of MEF-derived ECM was determined by the water contact angle measured using the captive droplet method. The contact angles of the droplet on the ECM, laminin and glass were approximately 66.8° ± 4.55°, 15.4° ± 3.85° and 120.4° ± 4.16°, respectively (Fig. 3). The appropriate hydrophilicity is determined by the components, nanotopology and porous surface. More importantly, the different hydrophilicities lead to different adhesive properties. The adsorbed proteins can transmit signals to the cells through cell adhesion receptors and thereby affect cell survival, growth and differentiation.
Adhesion and residency performance of MEF-derived ECM
In many previous studies, to facilitate anchorage-dependent cell adhesion, individual ECM components, including laminin, fibronectin, collagen and other bioactive molecules, have been used to modify plates or glass surfaces. 6 In this work, laminin pre-coated plates were adopted as the control group, and no obvious difference in micromorphology was observed between the two groups. Pre-coating with laminin is a standard protocol for surface treatment, and laminin is a widely used biomolecule for culturing neural stem cells. As investigated using a live/dead assay kit and immunofluorescence staining with anti-nestin antibody (ESI, Fig. S3 and S4†), MEF-derived ECM had excellent biocompatibility and maintained the stemness properties. NPCs were then cultured for 24 h, and the cell adhesion performance on the ECM was observed. SEM images revealed that NPCs anchored on the surface of the ECM had an extensively spread morphology and formed a strong interaction with the ECM, exhibiting excellent cell adhesion performance (Fig. 4a-d). Additionally, vinculin is a key component of cellular adhesion plaques and adherens junctions. Our real-time PCR and western blot results indicated significantly increased mRNA and protein expression levels of vinculin in NPCs cultured on the ECM (Fig. 4e and f). Further experiments were performed to re-validate cell anchorage efficiency on the ECM. Cells were fixed after culturing for 2 or 24 h and then stained with DAPI for fluorescence microscopy. As shown in Fig. 5, after 2 h of culture most of the NPCs had adhered to the ECM, whereas only a few cells had adhered to the laminin. After 24 h of culture, the number of cells adhered to the ECM was still greater than that of the laminin group. This indicated that anchorage was more rapid and efficient on the ECM than on laminin.
Cell residency and motion constitute a complex process involving cell adhesion, polarization and forward movement. 31 Regarding the residency of NPCs on the neural niche, we observed cell adhesion and motion continuously for 24 h using a live cell imaging system (ESI, Movies S1 and S2†). The individual cell tracks were plotted from the cells' initial positions (point 0, 0), and the results showed that cells cultured on laminin (Fig. 6a) moved faster than those cultured on MEF-derived ECM (Fig. 6b). The statistical histograms of the accumulated distance, Euclidean distance, and velocity calculated from individual cell tracks (Fig. 6c-e) confirmed that cells anchored on the ECM had a low movement ratio. In addition to its diverse composition, the appropriately hydrophilic surface of the ECM is believed to adsorb sufficient nutrients to meet the metabolic demands of NPCs, which can facilitate cell adhesion and residency. More importantly, NPCs moved randomly without any preferential direction on both the ECM and laminin (Fig. 6a and b and Movies in ESI†), from which it can be inferred that the ECM was a relatively homogeneous material.
Cell proliferation performance of MEF-derived ECM
Cell proliferation is a process that results in an increase in the number of cells and is defined by the balance between cell division and cell loss through cell death or differentiation. The proliferation of NPCs on MEF-derived ECM was examined by measuring the ratio of EdU-positive cells after NPCs were maintained in proliferation medium for 7 d. The EdU/Hoechst 33342 immunofluorescence results showed that the ECM promoted NPC proliferation (Fig. 7a-f). The percentage of EdU-positive cells in the ECM group was significantly higher than that of the laminin group (53.25% ± 4.54% vs. 32.83% ± 2.86%, Fig. 7g). NPC proliferation was also verified by measuring the expression of Ki67 mRNA and protein. As expected, the mRNA and protein expression levels of Ki67 were significantly higher in NPCs cultured on the ECM than in those cultured on laminin pre-coated plates (Fig. 7h and i). According to the movies recorded with the live cell imaging system, cell division was observed within 24 h after NPC anchorage on the ECM, whereas it was rarely observed in the laminin group. This indicated that cells cultured on the ECM entered a proliferative state as soon as they adhered. How does the ECM facilitate this process? First, the appropriate hydrophilicity and nanotopography of the ECM surface can lead to the adsorption of more proteins and other growth cytokines from the proliferation medium and nearby cells, which provides more nutrients for cell growth and division. In addition, our MEF-derived ECM contains many glycan-binding proteins and basal lamina components, for instance, laminin and galectins, which can promote the proliferation of NPCs in vitro and in vivo. 32
Cell differentiation performance of MEF-derived ECM
The successful differentiation of NPCs is a significant step for effective stem cell-mediated treatment of Alzheimer's disease.
After 21 d of differentiation, immunofluorescence staining showed a vast increase in the number of cells positive for choline acetyltransferase (ChAT), from 9.23% ± 1.25% on the laminin to 28.75% ± 1.82% on the ECM (Fig. 8a-c). ChAT catalyzes the formation of ACh and is expressed by cholinergic neurons of both the basal forebrain and the motor system. In previous studies, the differentiation of ChAT-positive neurons from human pluripotent stem cells on laminin-coated surfaces was 15% at 32 d. 2 In addition, double-labeling immunofluorescence staining confirmed that the ChAT-immunopositive cells were almost all microtubule-associated protein 2 (MAP2)-, vesicular acetylcholine transporter (VAChT)- and neurotrophin receptor (p75NTR, p75)-positive (Fig. 8d-f). For further quantitative analysis, cells were harvested and subjected to RT-PCR and western blot assays. As shown in Fig. 8g and h, compared with the cells on laminin pre-coated plates, there was a large and significant increase in the expression of markers for the BFCN lineage, including ChAT, MAP2, VAChT and p75. The functionality of the BFCNs was confirmed through direct detection of ACh, which plays a significant role in synaptic transmission, mediating fast excitatory neurotransmission by binding to ACh receptors. As a neural niche for cholinergic neurons, MEF-derived ECM markedly increased ACh levels: 8.15 ± 0.57 ng ACh per microgram of protein, versus 3.89 ± 0.45 ng ACh per microgram of protein for the laminin group at day 21 (Fig. 8i). This differentiation effect was almost certainly due to a complex interplay between the chemical and physical properties of the ECM and the cells. The ECM was composed of a rich variety of proteins, such as proteoglycans and glycoproteins. Heparan sulfate proteoglycans (HSPGs) are one of the main components of MEF-derived ECM according to our proteomic analysis (ESI, Table S2†); it has been noted that HSPGs could help drive the differentiation of neural cells by promoting FGF and BMP signaling. 33 BMP-9 is a critical exogenous cytokine for the differentiation of NPCs into BFCNs in our method. Perlecan (HSPG2) is an HSPG highly expressed in the basal lamina of the developing neuroepithelium. The growth cytokine binding ability of HSPG2, including its binding to FGF8 and SHH, is considered to be the primary mechanism by which HSPG2 regulates NPC proliferation and differentiation. 34 In summary, although the microenvironment that determines the fate of stem cells has been extensively reported, there have been scarcely any studies using cell-derived ECM as a neural niche for neural stem/progenitor cells, which are anchorage-dependent cells, and in particular there were no data on cell-derived ECM affecting the differentiation of neural stem/progenitor cells. Therefore, we utilized MEF-derived ECM as a cell culture carrier for NPCs in vitro. It was found that the ECM not only maintains the viability and stemness of NPCs but also exhibits an excellent adhesion performance. Meanwhile, the ECM promoted efficient proliferation and differentiation of cells into BFCNs. Three aspects may account for this. (i) Nanotopographical surface: biomaterials with nanoscale topography effectively mimic the surface characteristics of natural tissues and may significantly affect protein adsorption and cell adhesion. (ii) Diverse composition: MEF-derived ECM has a rich variety of components, including proteoglycans, glycoproteins and HSPGs.
(iii) Appropriate hydrophilicity: the appropriate hydrophilicity of MEF-derived ECM ensures the enhanced anchorage of NPCs and promotes efficient adsorption of nutrient proteins to meet the metabolic demands of NPCs, facilitating cell adhesion, residence and proliferation within the ECM.
Conclusions
Our findings revealed that MEF-derived ECM, as a neural niche for NPCs with excellent biocompatibility, possesses the abilities to maintain stemness, adhesion, residence, proliferation and differentiation. The nanotopographical structure, variety of components, and appropriate hydrophilicity of MEF-derived ECM were proposed as the underlying reasons. Importantly, MEF-derived ECM has been confirmed as a neural niche with great potential applications in neural stem cell therapy for Alzheimer's disease.
Conflicts of interest
There are no conflicts to declare. | 2019-04-09T13:05:36.018Z | 2017-09-22T00:00:00.000 | {
"year": 2017,
"sha1": "66221562f80d6f8dc7930ca1e760982085d94dba",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra08273h",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2100bd17f8382c382abaa7830a9a678546810ecb",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
246055823 | pes2o/s2orc | v3-fos-license | Aberrant DNA Methylation Marker for Predicting Metachronous Recurrence After Endoscopic Resection of Gastric Neoplasms
Purpose This study aimed to investigate whether MOS methylation can be useful for the prediction of metachronous recurrence after endoscopic resection of gastric neoplasms. Materials and Methods From 2012 to 2017, 294 patients were prospectively enrolled after endoscopic resection of gastric dysplasia (n=171) or early gastric cancer (n=123). When Helicobacter pylori was positive, eradication therapy was performed. Among them, 124 patients completed the study protocol (follow-up duration > 3 years or development of metachronous recurrence during the follow-up). Methylation levels of MOS were measured at baseline using quantitative MethyLight assay from the antrum. Results Median follow-up duration was 49.9 months. MOS methylation levels at baseline were not different by age, sex, and current H. pylori infection, but they showed a weak correlation with operative link on gastritis assessment (OLGA) or operative link on gastric intestinal metaplasia assessment (OLGIM) stages (Spearman’s ρ=0.240 and 0.174, respectively; p < 0.05). During the follow-up, a total of 20 metachronous gastric neoplasms (13 adenomas and 7 adenocarcinomas) were developed. Either OLGA or OLGIM stage was not useful in predicting the risk for metachronous recurrence. In contrast, MOS methylation high group (≥ 34.82%) had a significantly increased risk for metachronous recurrence compared to MOS methylation low group (adjusted hazard ratio, 4.76; 95% confidence interval, 1.54 to 14.79; p=0.007). Conclusion MOS methylation can be a promising marker for predicting metachronous recurrence after endoscopic resection of gastric neoplasms. To confirm the usefulness of MOS methylation, validation studies are warranted in the future (ClinicalTrials No. NCT04830618).
Introduction
Gastric cancer (GC) is the sixth most diagnosed cancer and the third leading cause of cancer mortality, with 1,090,103 incident cases and more than 768,793 deaths in 2020 [1]. Helicobacter pylori infection is associated with peptic ulcer disease, mucosa-associated lymphoid tissue lymphoma, and GC.
H. pylori infection induces chronic inflammation, increased secretion of inflammatory cytokines, and aberrant DNA methylation, including promoter CpG island hypermethylation and global DNA hypomethylation [2,3]. As a result, prolonged H. pylori infection produces an epigenetic field defect [4,5], suggesting that methylation could be a surrogate marker for GC [6,7]. Previously, we performed a genome-wide DNA methylation chip study in H. pylori-induced gastric carcinogenesis and identified several methylation markers [8]. We then validated these methylation markers in a case-control study, and among the candidate genes, methylation of MOS, a proto-oncogene, was associated with the duration of H. pylori exposure and the risk of GC [9]. Interestingly, MOS methylation decreased after H. pylori eradication in controls, but it remained significantly increased in patients with gastric dysplasia or GC even after H. pylori eradication [10].
In Korea, biannual upper gastrointestinal endoscopy is covered by national insurance for adults over 40 years of age to detect early gastric cancer (EGC) before progression to advanced GC. This has led to an increase in both the diagnosis and endoscopic resection (ER) of EGC [11]. H. pylori eradication after ER of EGC reduced the risk for metachronous recurrence [12]. However, many patients still develop metachronous gastric cancers or gastric dysplasia even after H. pylori eradication treatment [13,14]. Thus, there is a need for a surrogate marker that can predict the risk of GC after H. pylori eradication [15].
From this background, we performed a prospective cohort study to investigate whether MOS methylation can be useful for the prediction of metachronous recurrence after ER of gastric neoplasms.
Study subjects
The study was designed as a prospective cohort study. From 2012 to 2017, 294 patients were prospectively enrolled after ER of gastric dysplasia (n=171) or EGC (n=123). All lesions were assessed by endoscopy with biopsy before ER. Endoscopic mucosal resection or endoscopic submucosal dissection (ESD) was performed for gastric dysplasia and early gastric cancers which met the absolute indication (differentiated adenocarcinoma, intramucosal cancer, lesions < 20 mm, and no endoscopic evidence of ulceration). All lesions were curatively resected; if non-curatively resected, then the patients were not enrolled in the study. All subjects, who provided informed consent at the time of initial endoscopic treatment, were asked to complete a questionnaire under the supervision of a well-trained interviewer. The questionnaire included questions regarding demographic data (age, sex), socioeconomic data (smoking, alcohol, and education), their family history of GC in first-degree relatives, and history of H. pylori eradication therapy.
Among the 294 subjects, the MOS methylation level at baseline could be determined in 261 patients from noncancerous gastric mucosae at the antrum. When H. pylori was positive by CLOtest or histology at baseline or during the follow-up, eradication therapy was performed. To evaluate whether H. pylori was eradicated, 13C-urea breath testing was performed at least 4 weeks after completion of eradication therapy. Completion of the study protocol was defined as (1) endoscopic and/or radiologic follow-up for more than 3 years, or (2) development of a metachronous gastric neoplasm (gastric dysplasia or cancer) during the follow-up. Metachronous recurrence was defined as secondary dysplasia or cancers detected > 1 year after initial diagnosis. Finally, 124 of 261 subjects completed the study protocol and were included in the survival analysis.
Follow-up after endoscopic resection
All study subjects were closely followed up since recurrent tumors at previous ER sites can be easily detected on endoscopy with biopsy and treated during follow-up. Patients with local recurrence underwent further treatments, including repeated ESD, argon plasma coagulation, and gastrectomy based on pathology, and patients who refused treatment received supportive care.
All patients underwent endoscopy with biopsy within 6 months, then at 12 months after ESD to check for metachronous lesions or local recurrences. After 12 months, endoscopy with biopsy was performed annually. In case of EGCs, abdominal computed tomography scan was performed in the first year and biennially thereafter to detect lymph node or distant metastases.
H. pylori testing and histologic assessment
At each endoscopy, 12 biopsy specimens were obtained for histological analysis and the Campylobacter-like organism (CLO) test to determine the presence of current H. pylori infection. This methodology has been presented previously [10,16]. In brief, two biopsy specimens from the antrum and two from the corpus (one from the lesser curvature, one from the greater curvature) were fixed in formalin to assess the presence of H. pylori by modified Giemsa staining and the degree of inflammatory cell infiltration, atrophy and intestinal metaplasia (all by hematoxylin and eosin staining). These histologic features of the gastric mucosa were recorded using the updated Sydney scoring system (0, none; 1, mild; 2, moderate; and 3, marked) [17]. One specimen from each of the lesser curvatures of the antrum and the body was used for rapid urease testing (CLOtest, Delta West, Bentley, Australia). The remaining six noncancerous mucosal biopsy specimens (3 antrum and 3 body) were immediately frozen at -70°C until DNA extraction.
Operative link on gastritis assessment and operative link on gastric intestinal metaplasia assessment staging
Operative link on gastritis assessment (OLGA) and operative link on gastric intestinal metaplasia assessment (OLGIM) stages were determined by histological examination of gastric biopsy samples (antrum and corpus) following the updated Sydney System [18]. Two independent gastrointestinal pathologists, who were blinded to clinical information, assessed the biopsies. If there was a disagreement, the biopsies were assessed again by a third pathologist.
DNA extraction, bisulfite modification, and MethyLight assay
Genomic DNA was extracted directly from noncancerous antral biopsy specimens and then modified with sodium bisulfite. The methodology was reported previously [19]. Briefly, specimens were homogenized in proteinase K solution (20 mmol/L Tris-HCl [pH 8.0], 10 mmol/L ethylenediaminetetraacetic acid, 0.5% sodium dodecyl sulfate, and 10 mg/mL proteinase K) using a sterile micropestle, followed by incubation for 3 hours at 52°C. DNA was isolated from homogenates using phenol/chloroform extraction and ethanol precipitation. Genomic DNA (1 µg) was bisulfite modified using the EZ DNA Methylation Kit (Zymo Research, Irvine, CA) following the manufacturer's instructions. The methylation status of MOS from bisulfite-modified DNA samples was quantified using real-time polymerase chain reaction-based MethyLight technology. MethyLight, a sensitive, high-throughput methylation assay, allows the highly specific detection of methylation using probes that cover methylation sites, as well as methylation-specific primers [20]. The primer and probe sequences used in the reaction are as follows: forward primer sequence, TTCACTCCAACGACCCTAATATCC; backward primer sequence, GGGAAAATTCGTTTCGGAGGTAG; probe oligo sequence, 6FAM-AATACGATACCCTCGCCCCTA-ACCCTACG-BHQ-1 [19]. The quantified level of MOS was reported as a percentage of methylated reference (PMR), which is the relative methylation ratio of the target gene to the ALU gene of a sample, divided by the ratio of the target gene to the ALU gene of sodium bisulfite- and CpG methyltransferase (M.SssI)-treated sperm DNA, multiplied by 100.
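The PMR definition above reduces to a simple ratio-of-ratios calculation. The following Python sketch makes it explicit; all numeric inputs are hypothetical illustrations, not values from the study:

```python
def pmr(target_sample, alu_sample, target_ref, alu_ref):
    """Percentage of methylated reference (PMR), as defined in the text:
    (target/ALU ratio of the sample) divided by (target/ALU ratio of fully
    methylated M.SssI-treated reference DNA), multiplied by 100."""
    return (target_sample / alu_sample) / (target_ref / alu_ref) * 100.0

# Hypothetical MethyLight quantities (arbitrary units), for illustration only.
mos_pmr = pmr(target_sample=120.0, alu_sample=1000.0,
              target_ref=400.0, alu_ref=1150.0)
print(f"MOS methylation level: {mos_pmr:.2f}%")   # ~34.5% with these inputs
```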
Statistical analysis
For the sample size calculation, the expected incidence of metachronous recurrence in the low-risk group (low methylation group) was presumed to be 0.01 per year, and that in the high-risk group was assumed to increase 4-fold. Assuming that the ratio of the numbers of low-risk and high-risk individuals is 1:1, the number of patients in each group was calculated as 131 at a statistical power of 0.80 with a two-sided significance level of 0.05. Considering a dropout rate of ~10%, the sample size was determined as 290 (145 in each group).
Continuous variables are presented as mean±standard deviation. Categorical variables are presented as numbers with proportions. To compare continuous variables, Student's t test was used. For categorical variables, the chi-square test was used. For determining the optimal diagnostic cutoff value for predicting metachronous recurrence, a receiver operating characteristic curve was used. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. For survival analysis, Kaplan-Meier curves for cumulative incidences were used with the log-rank test. A Cox proportional hazards model was adopted with adjustment for clinically important variables. All statistical analyses were performed using R ver. 3.2.3 (The R Foundation for Statistical Computing, Vienna, Austria; http://www.r-project.org). All tests were two-sided, and p < 0.05 was considered statistically significant.
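As a hedged illustration of this workflow, the sketch below reproduces the Kaplan-Meier/log-rank and Cox steps in Python with the lifelines package (the study itself used R); the toy data frame, its column names, and all values are invented for demonstration:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy data: follow-up time (months), recurrence event (1=yes), dichotomized
# MOS methylation group, and one adjustment covariate. Values are invented.
df = pd.DataFrame({
    "months":     [49.9, 36.2, 12.5, 60.1, 24.0, 55.3, 18.7, 40.2],
    "recurrence": [0,    1,    1,    0,    1,    0,    0,    1],
    "mos_high":   [0,    1,    1,    0,    0,    1,    0,    1],
    "age":        [62,   70,   55,   66,   59,   72,   64,   68],
})

# Kaplan-Meier cumulative-incidence comparison between methylation groups
km = KaplanMeierFitter()
for grp, sub in df.groupby("mos_high"):
    km.fit(sub["months"], sub["recurrence"], label=f"MOS high = {grp}")
high, low = df[df.mos_high == 1], df[df.mos_high == 0]
lr = logrank_test(high["months"], low["months"],
                  high["recurrence"], low["recurrence"])
print(lr.p_value)

# Cox proportional hazards model adjusted for clinically important variables
cph = CoxPHFitter().fit(df, duration_col="months", event_col="recurrence")
print(cph.hazard_ratios_)  # adjusted hazard ratio for mos_high
```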
Characteristics of the study subjects at baseline
The clinical and pathological characteristics of the study subjects at baseline were summarized in Table 1. There was no significant difference between the methylation high group (MOS methylation level ≥ 35.82%) and the methylation low group (methylation level < 35.82%) except for follow-up duration and follow-up visits (p < 0.001), which was attributed to a higher metachronous recurrence in the methylation high group.
Also, the clinicopathological characteristics of the 124 patients who completed the study protocol, according to metachronous recurrence, are presented in S1 Table. In patients with metachronous recurrence, the initial pathology was low- or high-grade dysplasia rather than adenocarcinoma (p < 0.001), and synchronous lesions (dysplasia or EGCs) were more prevalent (p=0.053). OLGA and OLGIM stages were not different between the two groups (p > 0.05), but the MOS methylation level was higher in patients with metachronous recurrence (p=0.009).
Association between MOS methylation level and clinical and histologic variables
Next, we evaluated whether MOS methylation levels differed by age, family history of GC, synchronous gastric lesions, current H. pylori infection, and OLGA and OLGIM stages. There was no correlation between age and MOS methylation level (Pearson's correlation coefficient=0.063, p=0.312) (Fig. 2A). Family history of GC in first-degree relatives, synchronous gastric neoplasms, and current H. pylori infection did not affect MOS methylation levels (p > 0.05) (Fig. 2B-D). In contrast, MOS methylation levels correlated with OLGA and OLGIM stages (Spearman's ρ=0.240 and 0.174, respectively, both p < 0.05) (Fig. 2E and F).
Clinical implication of mucosal atrophy, intestinal metaplasia, and MOS methylation in the prediction of metachronous gastric recurrence after endoscopic resection
Then, we evaluated whether atrophic gastritis, intestinal metaplasia, or MOS methylation level could predict metachronous recurrence after ER of gastric neoplasms (Table 2, Fig. 3). Kaplan-Meier curves for cumulative incidences of metachronous recurrence showed that the presence or absence of atrophic gastritis and intestinal metaplasia did not predict the risk for metachronous recurrence in this high-risk population (Fig. 3A and C). Also, OLGA and OLGIM stages were not useful in predicting the risk (Fig. 3B and D); when the analysis was performed comparing low-risk (grade 0 to 2) and high-risk (grade 3 and 4) groups, it was not statistically significant (S2 Fig.).
In contrast, MOS methylation could be useful to determine the high-risk group for metachronous recurrence. That is, the MOS methylation high group (≥ 34.82%) had a significantly increased risk for metachronous recurrence compared to the MOS methylation low group (adjusted hazard ratio, 4.76; 95% confidence interval, 1.54 to 14.79; p=0.007) (Table 2). Nevertheless, a significant increasing linear trend was observed between MOS methylation and the risk of metachronous recurrence (adjusted p for trend=0.034). When the same analyses were performed in the entire cohort (n=261), the results were not different (S3 Table, S4 Fig.).
Discussion
This study showed that MOS methylation could be useful in predicting metachronous recurrence after H. pylori eradication in the high-risk patients who had undergone ER of gastric neoplasm. The patients who underwent ER of EGC or gastric dysplasia are regarded as a high-risk population of metachronous gastric neoplasms [15]. In the previous studies, the incidence of metachronous GC was reported to be 1.9%-25.3% when observed up to 4-7 years [21], and H. pylori eradication reduced the incidence of metachronous GC by ~50% [12]. However, metachronous recurrence still develops even after H. pylori eradication; thus, we need a surrogate marker for the risk of metachronous GC after H. pylori eradication [15].
Differentiated GCs are frequently found after H. pylori eradication, showing characteristic endoscopic features such as reddish depression; a benign reddish depression is difficult to distinguish from GC because of the histological alterations in the surface structures (non-neoplastic epithelium or epithelium with low-grade atypia) as well as the multiple appearances of benign reddish depressions [22]. Furthermore, submucosal invasive cancers were not infrequently found after H. pylori eradication despite annual endoscopic surveillance [22]. In this study, all cases of metachronous recurrence (n=20) were either gastric dysplasia or EGC; six of seven metachronous gastric cancers (85.7%) were differentiated gastric cancers, but three cases (42.9%) invaded the submucosa.
Several studies have suggested that aberrant DNA methylation could be a surrogate marker for the risk of metachronous GC [6,23]. Previously, a Japanese group published the impact of aberrant DNA methylation accumulation on metachronous GC in a 5-year follow-up of a multicenter prospective cohort study [24,25]. They showed that the higher quartiles of methylation levels in miR-124a-3, EMX1, and NKX6-1 were associated with an increased risk for metachronous GCs. Another study has shown that aberrant methylation of microRNA-34b/c is a predictive marker of metachronous GC risk [23].
In the present study, the rationale for choosing MOS methylation as a marker is based on the results of previous studies. Previously, we evaluated the usefulness of several candidate methylation markers to define a high-risk group for GC [8].
Among them, methylation of MOS was associated with the duration of H. pylori exposure. MOS methylation was also increased in remote past infection, in which H. pylori has disappeared from the gastric mucosa, and it was significantly increased in patients with GC regardless of H. pylori infection [9]. Interestingly, MOS methylation decreased after H. pylori eradication in controls, but it remained significantly increased in patients with gastric dysplasia or GC even after H. pylori eradication [10]. In a retrospective study, we showed that MOS methylation levels at baseline were significantly higher among patients with metachronous gastric neoplasms [26]. We paid attention to the results of previous studies indicating that there are two types of methylation occurring in the gastric mucosa. One is a temporary component of methylation (induced in progenitor or differentiated cells) and the other is a permanent component (induced in stem cells) [2,4]. During active H. pylori infection, both temporary and permanent components of methylation increase as the duration of infection increases. When H. pylori infection ends, the temporary component disappears, leaving only the permanent component. The remaining permanent component correlates with the risk of developing gastric cancers.
From this point of view, MOS methylation could be an ideal marker for predicting the risk of GC. The MOS methylation we analyzed in this study does not originate from the promoter region (promoter CpG island), but from the exon region [8]. Although methylation of some marker genes is not directly involved in carcinogenesis, their methylation levels correlate with those of tumor-suppressor genes and thus with GC risk. Methylation of a marker gene is not requisite for gastric carcinogenesis [4]. Methylation levels of MOS in GC tissues did not correlate with those in their background gastric mucosa. Rather, we found that hypomethylation of MOS in GC tissues was associated with tumor invasion, nodal metastasis, and undifferentiated histology, suggesting that MOS methylation occurs in a complex manner depending on the stage of gastric carcinogenesis [9].
In the present study, MOS methylation was not affected by age (Table 1, Fig. 2). Therefore, MOS methylation might not reflect an aging process. There was no significant difference in MOS methylation level between H. pylori-positive and -negative patients. This is because most of the subjects in this study were high-risk patients. Even if some of them had no evidence of active H. pylori infection at present, most of them might have a remote past infection [27]. Likewise, MOS methylation levels did not differ according to the presence or absence of synchronous gastric neoplasms. In contrast, the MOS methylation level positively correlated with OLGA and OLGIM staging (Fig. 2). Atrophic gastritis and intestinal metaplasia are not only important precancerous lesions of GC but have also been reported to be significantly associated with the occurrence of metachronous GC [13,28]. In this study, however, OLGA and OLGIM stages failed to show a relation to metachronous recurrence. This might be attributed to the fact that the frequencies of patients with high OLGA and OLGIM stages (stage 3-4) at baseline were much lower than those reported in GC patients (Table 1). In contrast, we found that MOS methylation may predict the risk for metachronous recurrence (Fig. 3). Unlike in the previous studies, the insignificant results for atrophic gastritis and intestinal metaplasia might be attributed to the relatively small sample size; if the sample size were sufficiently large, significant results could be shown for atrophic gastritis and metaplasia as well. However, the fact that MOS methylation was found to be significantly related to the risk for metachronous recurrence despite the relatively small sample size indicates that MOS methylation can be a more powerful marker for predicting the recurrence of metachronous gastric neoplasms after endoscopic resection. Recently, we found that metachronous GC occurred in 35 of 3,044 patients (1.1%) in the remaining stomach after curative gastric partial resection for GC [29]. In this population, metachronous GC was related only to older age and the surgical method used. Thus, it might be valuable to study further whether MOS methylation can be beneficial in predicting metachronous recurrence after gastrectomy.
Our study has several limitations. First, the sample size was relatively small. In addition, the dropout rate (follow-up loss within 3 years after initial endoscopic treatment) was much higher than expected (137/261, 52.5%). In South Korea, it is recommended that patients return to a local clinic for screening endoscopy if there are no problems after endoscopic treatment. As a result, many subjects dropped out, and only 124 subjects were followed up for more than 3 years. Thus, this study might be underpowered. Nevertheless, MOS methylation showed statistically significant results. In addition, the results were not different when the survival analyses were performed in the entire cohort (n=261) (S3 Table, S4 Fig.). However, the results of our study should be verified through a large prospective study. Second, serum gastrin-17, anti-H. pylori IgG antibody, and pepsinogen I/II levels were not measured in this study. They have been shown to be surrogate markers of metachronous recurrence after ER of EGC [30,31]. Third, the H. pylori-positive rate was relatively low (~37%) for the study population, which consisted of EGC or dysplasia patients. This might be because most of the H. pylori-negative patients in this study had a remote past infection. However, since OLGA and OLGIM stages were not high at baseline, there is a possibility that the H. pylori infection rate was actually low. Fourth, the interpretation of OLGA and OLGIM staging should be cautious because gastric mucosae were not obtained at the gastric angle. Furthermore, OLGA staging was possible in only 110 of 261 (42.1%) patients, because in many cases either the antrum or corpus biopsy specimen was inadequate to assess the degree of atrophy. Despite these limitations, the results of this study show the potential of MOS methylation as a surrogate marker for metachronous gastric neoplasms, and also demonstrate the importance of aberrant DNA methylation in gastric carcinogenesis.
In conclusion, MOS methylation can be a promising marker for predicting metachronous gastric neoplasms after ER of gastric neoplasms. To confirm the usefulness of MOS methylation, large prospective studies (validation studies) are warranted in the future.
Electronic Supplementary Material
Supplementary materials are available at Cancer Research and Treatment website (https://www.e-crt.org).
Ethical Statement
The study protocol was approved by the Ethical Committee at Seoul National University Bundang Hospital (IRB No. B1204/152-005). All study participants signed a consent form before enrolling in the study. | 2022-01-19T06:23:39.050Z | 2022-01-18T00:00:00.000 | {
"year": 2022,
"sha1": "bb7c556b15b74e68e2de7002090128faf650b757",
"oa_license": "CCBYNC",
"oa_url": "http://www.e-crt.org/upload/pdf/crt-2021-997.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "54aea73bcdfa0bf1c1ce9c272eed446bced9452c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117965911 | pes2o/s2orc | v3-fos-license | A Clinical Application of Fuzzy Logic
In fuzzy logic, linguistic variables are used to represent operating parameters in order to apply a more human-like way of thinking [Zadeh, 1965, 1968, 1973, 1988, 1989]. Fuzzy logic incorporates a simple, IF-THEN rule-based approach to solve a problem rather than attempting to model a system mathematically and this property plays a central role in most of fuzzy logic applications [Kang et al., 2000; Lin & Wang, 1999; Shi et al., 1999]. Recently, the main features of fuzzy logic theory make it highly applicable in many systematic designs in order to obtain a better performance when data analysis is too complex or impractical for conventional mathematical models. This chapter represents how fuzzy logic, as explained theoretically in the previous chapters, can practically be applied on a real case. For this aim, a clinical application of fuzzy logic was taken into account for cancer treatment by developing a fuzzy correlation model.
Introduction
In fuzzy logic, linguistic variables are used to represent operating parameters in order to apply a more human-like way of thinking [Zadeh, 1965, 1968, 1973, 1988, 1989]. Fuzzy logic incorporates a simple, IF-THEN rule-based approach to solve a problem rather than attempting to model a system mathematically and this property plays a central role in most of fuzzy logic applications [Kang et al., 2000; Lin & Wang, 1999; Shi et al., 1999]. Recently, the main features of fuzzy logic theory make it highly applicable in many systematic designs in order to obtain a better performance when data analysis is too complex or impractical for conventional mathematical models. This chapter represents how fuzzy logic, as explained theoretically in the previous chapters, can practically be applied on a real case. For this aim, a clinical application of fuzzy logic was taken into account for cancer treatment by developing a fuzzy correlation model.
Cancer is an inclusive term representing a large number of diseases in which cells divide and grow in an uncontrolled manner and are able to invade other healthy tissues. Cancer can usually be treated using surgery, chemotherapy or radiotherapy [Cassileth & Deng, 2004; Smith, 2006; Vickers, 2004]. In the radiotherapy method, the cancerous cells are bombarded by high-energy ionizing radiation such as gamma rays or charged particle beams. The radiation ionizes the water molecules located in the cell environment and causes the release of hydroxyl free radicals that damage DNA. In external radiotherapy, the first and most important step is tumor localization for obtaining maximum targeting accuracy. The tumor volume is visualized using 3D imaging systems [Balter & Kessler, 2007; Evans, 2008] such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), and then the contoured treatment region delineated by medical physicists is irradiated by means of an external beam extracted from the accelerator system. In radiotherapy, correct and accurate information about the tumor position during treatment determines the degree of treatment success. Among different tumors, some typical tumors located in the lung region of the patient body move due to the breathing cycle, and this non-regular motion makes it difficult to achieve accurate knowledge of the tumor location during treatment [Ramrath et al., 2007; Vedam et al., 2004]. In order to address this issue, one strategy is tracking the tumor motion by continuous monitoring systems such as fluoroscopy, which is unsafe for the patient due to its additional exposed dose [Dieterich et al., 2008; Keall et al., 2006]. Another alternative, which is effective and acceptable, is finding real-time tumor position information over time from the external rib cage motion [Torshabi et al., 2010]. For this aim, the external breathing motion is synchronized and correlated with the internal tumor motion by developing a correlation model in a training step before the treatment. It should be mentioned that the external breathing motion is traced by means of specific external markers placed on the thorax region (rib cage and abdomen) of the patient and recorded by an infrared tracking system. In contrast, the internal tumor motion is tracked using internal clips implanted inside or near the tumor volume and visualized using orthogonal X-ray imaging in snapshot mode. A correlation model based on the fuzzy logic concept is proposed here to estimate the tumor motion from external marker data as input when the internal marker data are out of access. In order to investigate the clinical application of fuzzy logic, data from real patients were utilized for model testing and verification (Table 1). The end result is a nonlinear mapping from the motion data of the external markers as input to an output that is the estimate of the tumor motion. Once the tumor position is predicted by the fuzzy model, respiratory-gated radiotherapy can be applied to treat the tumor [Kubo & Hill, 1996; Minohara et al., 2000; Ohara et al., 1989]. In this method, the therapeutic beam is ON only in a pre-defined gating window in which the tumor volume exists; otherwise, the beam is set to turn OFF to protect healthy tissues against additional exposure.
Therefore, based on the above description, the specific clinical application of the fuzzy model in this chapter covers all moving targets located in the thorax region of the patient body, such as lung, chest wall and pancreas cancers.
Recently, several respiratory motion prediction models have been developed using different mathematical approaches [Kakar et al., 2005; Murphy et al., 2006; Ramarth et al., 2007; Riaz et al., 2009; Ruan et al., 2008; Vedam et al., 2004]. Since the breathing phenomenon has inherently high uncertainty and therefore causes significant variability in the input/output dataset, fuzzy logic seems to provide a suitable environment for correlating the input data with the tumor motion estimate with less error [Kakar et al., 2005; Torshabi et al., 2010].
Our patient database is a real database obtained from 130 patients who received hypo-fractionated stereotactic body radiotherapy with CyberKnife® (Accuray Inc., Sunnyvale, CA) between 2005 and 2007 [Brown et al., 2007; Hoogeman et al., 2009; Seppenwoolde et al., 2007]. The patient database was made available by the Georgetown University Medical Center (Washington, DC). This database includes patients treated with real-time compensation of tumor motion by means of the Synchrony® respiratory tracking module, as available in the CyberKnife® system. This system provides tumor tracking relying on an external/internal correlation model between the motion of external infrared markers and of clips implanted near the tumor. The model is built at the beginning of each irradiation session and updated as needed over the course of treatment. Twenty patients were selected randomly from the population, as shown in Table 1. The chosen patients were divided into control and worst groups, and the 3D targeting errors of the two groups were analyzed separately. The worst group consists of tumor motions with large tracking errors. One of the main factors affecting fuzzy model performance is data clustering for membership function generation [Jain et al., 1999]. The two most practical data clustering approaches considered in this chapter are Subtractive and Fuzzy C-Means (FCM) clustering [Bezdek, 1981; Chiu, 1994; Dunn, 1973; Jang et al., 1997].
In this chapter, the fuzzy model structure and the different steps of model operation are explained graphically, and finally the fuzzy model performance is compared with two different correlation models based on an Artificial Neural Network and a state model [Procházka & Pavelka, 2007; Robert et al., 2002; Ruan et al., 2008; Seppenwoolde et al., 2007; Sharp et al., 2004; Su et al., 2005]. The state model was implemented as a linear/quadratic correlation between the external marker motion and the internal tumor motion. In this model, the 3D movement of the external markers was transformed into a mono-dimensional signal by projecting the three-dimensional coordinates into the principal component space [Ruan et al., 2008]. Artificial Neural Networks (ANNs) are a mathematical method that simulates the behavior of a natural neural network, where several inputs are integrated to obtain outputs according to predefined rules. The nodes (synapses) are interconnected with specific weight values, defined during the training phase and representing the significance of each connection. ANNs are widely used to predict signals that may be difficult to model.
The results of the 3D targeting error assessment on the control and worst groups show that the implemented fuzzy logic-based correlation model performs better than the two alternative modelers. In general, fuzzy logic theory appears very useful when the process to be modeled is too complex for conventional techniques, or when the available dataset can be interpreted only qualitatively or with a large degree of uncertainty. Final verification shows that this model is potentially applicable to moving tumors located in the lung and abdominal regions of the patient body, as illustrated by the typical cases listed in Table 1.
Development of fuzzy correlation model
In fuzzy logic-based systems, membership functions graphically represent the magnitude of participation of each input. The proposed fuzzy correlation model involves data clustering [Jain et al., 1999] for membership function generation, as inputs for the fuzzy inference system section (Figure 1, upper solid rectangle). Data clustering is an approach for finding similar data in a big dataset and putting them into groups. In other words, data clustering analysis is the organization of a collection of data into clusters based on similarity. Therefore, clustering divides a dataset into several groups such that each group consists of a set of data points of the same nature. The main purpose of data clustering is to break a huge dataset into small groups in order to simplify further data analysis. Clustering algorithms are utilized not only to categorize the data but are also helpful for data compression and model construction. In some cases, data clustering can discover relevant knowledge among data points of the same nature [Azuaje et al., 2000]. In the implemented fuzzy logic algorithm, the data from all three external markers, arranged in an input matrix with 9 columns, and the data from the internal marker, set in an output matrix with 1 column, are clustered initially. Sugeno and Mamdani types of Fuzzy Inference Systems, configured by 1) data fuzzification, 2) if-then rule induction, 3) application of the implication method, 4) output aggregation and 5) defuzzification, were utilized due to their specific effects on model performance (Figure 1, upper solid rectangles).
The fuzzy correlation model was developed in MatLab (The MathWorks Inc., Natick, MA, USA) using the Fuzzy Logic Toolbox. The model is built before the treatment using training data. The training data are the 3D external marker motions as model input and the internal implanted marker motion as model output. Once the model is developed, it can be applied to estimate the tumor motion as a function of time during the treatment (Figure 2, solid blocks). The model can also be updated and re-built as needed during the treatment with X-ray imaging representing the internal marker location. Figure 2 shows a block diagram of the model operation. The dashed rectangles (right side) in this figure represent the training and updating steps. Among several techniques for data clustering, the two most representative techniques utilized in our model are: 1) Subtractive clustering and 2) Fuzzy C-Means clustering. In the training step, two fuzzy inference systems based on the above clustering approaches are configured for motion prediction during the treatment. The properties and implementations of these inference systems are described in the following paragraphs.
Membership function generation via subtractive clustering
The first clustering algorithm employed for data grouping in this work is based on the subtractive technique. In this algorithm, each data point of the dataset is assumed to be a potential cluster center, and therefore a density measure at data point $a_i$ is calculated with the following equation:

$$D_i = \sum_{j=1}^{n} \exp\left(-\frac{\left\| a_i - c_j \right\|^2}{(r/2)^2}\right)$$

where $a_i$ is the $i$-th measured data point, $c_j$ is the center of a candidate cluster (each data point being a potential center), and $r$ is the neighborhood radius or influence range. In this way, when the density value of a data point is high, that data point is surrounded by a large number of neighboring data points.
The subtractive clustering algorithm first nominates as the first cluster center the data point whose density value, calculated by the above formula, is the largest. In the second step, the algorithm removes all data points belonging to the first cluster, defined by a predefined neighborhood radius, before determining the next cluster and its center location. In the third and last step, the algorithm continues the density measurements on the remaining data points until all data points are covered by a sufficient number of clusters. Once these steps are completed and all data have been categorized, a set of fuzzy rules and membership functions is obtained.
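A compact sketch of this three-step loop is given below: compute a density for every point, pick the densest point as a center, suppress the density around it, and repeat. The revision radius (1.5r) and the stopping ratio are common conventions assumed here, not values taken from the chapter:

```python
import numpy as np

def subtractive_clustering(points, r=0.5, stop_ratio=0.15):
    """Select cluster centers by the subtractive method.

    points: (n, d) array; r: neighborhood radius (influence range).
    Illustrative sketch; the revision radius (1.5*r) and the stopping
    ratio are conventional assumptions, not values from the chapter.
    """
    alpha = 4.0 / r ** 2              # 1 / (r/2)^2, density kernel width
    beta = 4.0 / (1.5 * r) ** 2       # wider kernel for density revision
    sq_d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    density = np.exp(-alpha * sq_d).sum(axis=1)

    centers, first_peak = [], density.max()
    while True:
        i = int(np.argmax(density))
        if density[i] < stop_ratio * first_peak:
            break                     # leftover points are too sparse
        centers.append(points[i])
        # subtract the chosen center's influence before the next pick
        density = density - density[i] * np.exp(-beta * sq_d[i])
    return np.array(centers)
```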
Membership function generation via Fuzzy C-Means clustering
In the Fuzzy C-Means clustering algorithm, each data point in the dataset belongs to every cluster with a specific membership degree. The magnitude of this membership degree is determined by the distance of the data point from the cluster center. In other words, a data point close to the cluster center has a high membership degree, whereas a data point that lies far from the cluster center has a low membership degree. It should be noted that, before applying the FCM technique, our training dataset is clustered into n groups using the subtractive clustering algorithm, as mentioned previously.
From a mathematical point of view, the membership functions in the FCM clustering algorithm are obtained by minimization of the following objective function, which represents the distance from any given data point to a cluster center weighted by its membership degree:

$$J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \left\| x_i - c_j \right\|^2$$

where $m$ is any real number greater than 1, $u_{ij}$ is the degree of membership of $x_i$ in cluster $j$, $x_i$ is the $i$-th measured data point, and $c_j$ is the center of the cluster. The value of $m$ was set to 2 in our objective function [Bezdek & Pal, 1998; Yu, 2004]. At first, FCM places the cluster centers at the mean location of each cluster. Next, the FCM algorithm sets a membership degree for each data point at each cluster, and then iteratively moves the cluster centers $c_j$ and updates the membership degrees $u_{ij}$:

$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\left\| x_i - c_j \right\|}{\left\| x_i - c_k \right\|} \right)^{\frac{2}{m-1}}}, \qquad c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} \, x_i}{\sum_{i=1}^{N} u_{ij}^{m}}$$

This iteration process continues until $\left| U^{(k+1)} - U^{(k)} \right| < \varepsilon$, where $\varepsilon$ is a termination criterion between 0 and 1, $U = [u_{ij}]$ is the membership matrix, and $k$ is the number of iterations.
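The alternating updates above translate directly into a few lines of NumPy. The sketch below is a minimal FCM implementation under one stated assumption: memberships are initialized randomly, whereas the model described here seeds FCM with the groups found by subtractive clustering:

```python
import numpy as np

def fcm(X, c=3, m=2.0, eps=1e-5, max_iter=200, seed=0):
    """Fuzzy C-Means on X of shape (n_samples, n_features).

    Returns (centers, U) where U[i, j] is the membership degree u_ij.
    Random initialization of U is an assumption; in the chapter, FCM is
    seeded with the groups found by subtractive clustering instead.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # c_j update
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)          # u_ij update
        if np.abs(U_new - U).max() < eps:        # |U^(k+1) - U^(k)| < eps
            return centers, U_new
        U = U_new
    return centers, U
```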
It should be noted that the structure of the fuzzy inference systems is based on the Sugeno (or Takagi-Sugeno-Kang) model [Sugeno & Takagi, 1985]. This model is computationally more efficient and thus gives a faster response where quick decisions must be taken.
For better description, a typical fuzzy inference system based on the FCM clustering algorithm was built as an example using the data of one patient from Table 1 with Right Lower Lung (RLL) cancer. Figure 3a shows a set of Gaussian membership functions generated by this fuzzy inference system on the input data given by the three external markers that move along the X, Y and Z directions (9 inputs in total), and Figure 3b illustrates the membership functions generated by the same algorithm on the output data given by the implanted internal marker along the X direction only. In this inference system, three clusters, and hence three if-then rules connected with the AND operator, have been utilized.
Fig. 3. Gaussian membership functions generated by the fuzzy inference system on the basis of the FCM clustering algorithm for the 9-input dataset (panel a) and the single-output dataset (panel b)
Operation of fuzzy correlation model
Once the fuzzy model has been built from the training dataset, each external marker datum is applied as input, and the following steps are carried out by the fuzzy model to estimate the tumor motion as output.
Fuzzification: This step takes the inputs and determines their degrees of membership in each cluster via the generated membership functions (similar to the membership functions visualized in the previous section).
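As a minimal sketch of this step, the snippet below evaluates a crisp input against the Gaussian membership function of each cluster; the cluster centers and widths are invented for illustration rather than fitted from patient data:

```python
import numpy as np

def gauss_mf(x, center, sigma):
    """Gaussian membership degree of a crisp input x."""
    return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

# Fuzzify one external-marker coordinate against three clusters
# (hypothetical centers/widths, not values fitted from patient data).
x = 1.8
clusters = [(0.5, 0.6), (1.5, 0.6), (2.5, 0.6)]
degrees = [gauss_mf(x, c, s) for c, s in clusters]
```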
Applying the AND/OR operator: Once the inputs have been fuzzified, if the antecedent of a given rule has more than one part, the fuzzy operator is applied to obtain one number that represents the result of the antecedent for that rule. In our typical example, three rules connected with the AND operator were used. Figure 4 represents the contribution of each input membership function (filled in yellow) and of each output membership function (filled in blue) associated with the applied input value.
Fig. 4. Three rules connected with the AND operator in the antecedent (yellow) and consequent (blue) parts of the FIS
Applying implication: The implication step in the consequent part of the FIS uses the single number given by the antecedent part, and the output is a truncated fuzzy set. In other words, the consequent is reshaped using a function associated with the antecedent. The implication step is applied for each rule. In Figure 5, the truncated output fuzzy set is shown in blue for the second rule of our FIS example. As shown in this example, the built-in function of the implication step is based on the AND (minimum selection criterion) operation.
Applying aggregation: This step receives all the truncated output fuzzy sets of each rule and accumulates them into one fuzzy set. Figure 6 shows the aggregation step applied to our example. As shown, the lowest square represents the accumulation of all available truncated fuzzy sets.
Fig. 6. Accumulation of all truncated fuzzy sets in aggregation step
Defuzzification: This is the final step; its input is the aggregated fuzzy set, and its output is a single number corresponding to the center of the accumulated area under the curve. Defuzzification can be performed using five built-in methods. In our example, the single output was obtained by the Centroid Calculation method.
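A minimal sketch of the Centroid Calculation method follows; the aggregated fuzzy set below is invented from two min-truncated Gaussian rule outputs purely for illustration:

```python
import numpy as np

def centroid_defuzzify(y, mu):
    """Center of gravity of the aggregated output set: sum(y*mu)/sum(mu)."""
    return float(np.sum(y * mu) / np.sum(mu))

y = np.linspace(-5.0, 5.0, 501)              # candidate tumor positions (mm)
# Aggregation: max over rule outputs, each truncated (min) at its firing level.
mu = np.maximum(
    np.minimum(0.7, np.exp(-((y - 1.0) ** 2) / 0.5)),   # rule 1, fired at 0.7
    np.minimum(0.3, np.exp(-((y + 0.5) ** 2) / 0.5)),   # rule 2, fired at 0.3
)
estimate = centroid_defuzzify(y, mu)         # crisp tumor-position estimate
```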
For real-time tumor tracking, the correlation models should be executed without a significant delay so that an on-time compensation strategy can be applied against tumor motion. Therefore, the execution time of each correlation model, which strongly depends on the utilized mathematical procedures, should be taken into account for clinical application. The features of the fuzzy model make it very quick in execution, such that the tumor position can be estimated in real time.
As the final part of this chapter, in order to visualize the performance of the fuzzy model in tumor motion tracking, one patient database was selected for model configuration and operation. The chosen patient has Right Lower Lung (RLL) cancer and belongs to the control group. The number of training data points used for model configuration in the pre-treatment step for this case is 11. Figure 7 shows the tumor motion tracking of this case (red line) versus the CyberKnife modeler (blue line) over 5 minutes of treatment time in the X, Y and Z directions. The imaging points indicated by green squares in these figures were taken by the stereoscopic X-ray imaging system and represent the exact position of the tumor at that time. As mentioned in this chapter, these points are used for model performance assessment and also for model updating during the treatment. As shown, there are five green square points on each panel, indicating that the updating process was performed every minute for this case. As depicted in Figure 7, the performance of the fuzzy correlation model in tumor tracking is comparable with the CyberKnife modeler, although negligible local noise is observed around the inhalation/exhalation peaks. At some peaks there is also some overestimation with respect to the CyberKnife modeler performance, which is most visible in the last peak shown in the middle panel of this figure.
Moreover, two alternative correlation models were taken into account, based on an artificial neural network and a state model, as mentioned in the Introduction section.
The 3D targeting error was calculated for the control and worst cases applying the fuzzy, ANN and state models, by means of all imaging points under the same conditions [Torshabi et al., 2010]. In this calculation, the imaging points were utilized as reference points in order to investigate the accuracy of model performance. For this aim, the distance between the predicted point, given as the output of each of the three correlation models, and the corresponding imaging point was measured as the model accuracy criterion. Where the predicted point was close to the corresponding imaging point, the model performed reasonably. In contrast, when the predicted point was far from the corresponding imaging point, the accuracy of the model performance was poor.
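This accuracy criterion is simply the Euclidean distance between each predicted point and its matching imaging point. A sketch with hypothetical coordinates (in mm):

```python
import numpy as np

def targeting_error(predicted, imaging):
    """3D targeting error: Euclidean distance between model-predicted tumor
    positions and the corresponding imaging points, both of shape (n, 3)."""
    return np.linalg.norm(predicted - imaging, axis=1)

pred = np.array([[1.2, -0.4, 3.1], [0.9, -0.1, 2.8]])   # model outputs (mm)
imag = np.array([[1.0, -0.5, 3.0], [1.1,  0.0, 2.5]])   # X-ray reference (mm)
errors = targeting_error(pred, imag)   # per-imaging-point error
mean_error = errors.mean()             # summary accuracy for the session
```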
As a result of this comparative assessment, it can be noted that for the control cases, where the tracking errors are in a normal interval, there is good agreement between the performance of the three modelers and CyberKnife. In contrast, for the worst cases the fuzzy model has the best performance, even better than the CyberKnife modeler. In this comparison, the state model was the worst prediction model. In the worst cases, the fuzzy model achieved an error reduction with respect to CyberKnife of 10.8% at the 95% confidence level. More detailed information concerning the structure and operation of the state model and ANNs with respect to the fuzzy model was given by Torshabi et al.
Conclusion
In this chapter, a clinical application of fuzzy logic was taken into account for cancer treatment by developing a fuzzy correlation model. This model acts as a prediction model and tracks moving targets located in the lung and abdominal regions of the patient body. For this aim, the internal and external marker data were utilized for fuzzy model generation (pre-treatment) and for its operation and updating (during the treatment). The fuzzy model structure and the different steps of model operation were explained graphically for a real case. Finally, a comparative investigation was performed between the fuzzy model performance and two different correlation models based on an Artificial Neural Network and a state model. The results show that the fuzzy model performs best, with less error and negligible execution time among the modelers. In general, the features of the fuzzy model make it robust for modeling systems that are too complex to be modeled by means of conventional mathematical techniques. The application of fuzzy logic is also highly recommended whenever the available dataset is not qualitatively perfect or has a large degree of variability. As a drawback, it should be considered that the fuzzy model shows some small local noise near the inhalation/exhalation peaks, as depicted in Figure 7, whereas the artificial neural network and state models can track the motion more smoothly with fewer local ripples. In the current fuzzy model described above, a single output, the tumor motion, is properly estimated by means of multiple inputs, namely the data of the three external markers. This motion prediction is suitable for treating tumors by the respiratory-gated radiotherapy approach, in which the beam irradiates only within a pre-defined gating window. As future work, the prediction of volumetric information of tumor motion will be investigated, which is needed for tumor treatment by Real-Time Tumor Tracking Radiotherapy. In this alternative method of radiotherapy, which is still at the research stage, 2D information of the tumor contour motion at each moment of the treatment time is required. Therefore, the prediction model must work as a multi-input/multi-output model such that the multiple outputs are a finite set of points located on the tumor contour at each tumor slice. In this way, 3D information of the tumor motion and also of its deformation can be estimated during the breathing cycle. However, the main open issues that must be addressed in this proposal are the restrictions in extracting the minimum required points of the tumor contours at different tumor slices as multi-input data for model configuration and also the low quality of the orthogonal X-ray images for model updating. | 2019-04-17T15:49:35.665Z | 2012-03-16T00:00:00.000 | {
"year": 2012,
"sha1": "633ca999a46d7709bac0901ad47012078b81e040",
"oa_license": "CCBY",
"oa_url": "https://cdn.intechopen.com/pdfs/32877.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bf8aad9c859be8b5749c08fc561c0a1cd1e4314f",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
255104034 | pes2o/s2orc | v3-fos-license | Dourine: a neglected disease of equids
Dourine is a venereal transmitted trypanosomosis causing a major health problem threatening equines worldwide. The origin and identification of Trypanosoma equiperdum within the subgenus Trypanozoon is still a subject of debate. Unlike other trypanosomal infections, dourine is transmitted almost exclusively by coitus. Diagnosis of dourine has continued to be a challenge, due to limited knowledge about the parasite and host-parasite interaction following infection. The pathological lesions caused by the diseases are poorly described and are observed mainly in the reproductive organs, in the nervous system, and on the skin. Dourine has been neglected by research and current knowledge on the disease, and the parasite is very deficient despite its considerably high burden. This paper looks in to the challenges in identification of T. equiperdum and diagnosis techniques with the aim to update our current knowledge of the disease.
Introduction
Dourine is a contagious disease of equids caused by the protozoan parasite Trypanosoma equiperdum. Once widespread, dourine has been eradicated from many countries but is still seen in horses in Asia, Africa, South America, Southern and Eastern Europe, Mexico, and Russia and was reported in June 2011 in Sicily and then just north of Naples, on the Italian mainland (Sidney et al. 2013).
T. equiperdum is morphologically identical to other insect vector-transmitted Trypanosoma evansi and Trypanosoma brucei which cause surra and nagana, respectively. In many regions of the world, these three parasite species occur together and current diagnostic tests are unable to differentiate between them (Brun et al. 1998). T. equiperdum differs from other trypanosomes in that it is primarily a tissue parasite that rarely invades the blood. The trypanosomes, which are present in the seminal fluid and mucous membranes of the genitalia of the infected donor animal, are transferred to the recipient during sexual intercourse. Parasites then may pass into the blood, where they are carried to other parts of the body. In typical cases, this metastatic invasion gives rise to characteristic cutaneous plaques (Stephen 1986).
In practice, diagnosis of dourine is based on clinical evidence supported by serology (Office International des Epizooties (OIE) 2008). Despite available serological tests are more sensitive, they fail to distinguish between an active infection and a cured one (Clausen et al. 1999). Detection of T. equiperdum, by standard parasitological techniques, is usually difficult even in dourine positive horses, due to low numbers of parasites in the blood or tissue fluids and chronic nature of the disease (Vulpiani et al. 2013). Additionally, diagnosis of the disease becomes more complicated in an area where the causative agents of surra or nagana occur and appears difficult to identify which parasite is causing dourine (Hagos et al. 2010b).
Dourine is a disease of great economic importance and well documented as a trade barrier for the movement of horses (Chin et al. 2013). Dourine poses a significant challenge to equine production, as transmission does not require insect vectors that are influenced by climatic factors, the disease can be found anywhere and even the disease is more important in areas where mechanically or tsetse-transmitted trypanosomes are endemic. Though dourine still occurs in many parts of the world, since its eradication from North America and Northern Europe, research on the disease has been neglected. The absence of published information on many aspects of dourine should prompt experts in the field to bridge the gap in current knowledge about the disease possibly through research or systematic review of existing literature. This paper presents a thorough review of the epidemiology, diagnosis, and pathology of dourine.
Equine Trypanosomosis (dourine) Definition and synonyms
Dourine is a chronic or acute contagious disease of equids transmitted directly from animal to animal during coitus (Calistri et al. 2013). The venereal disease of equines or dourine has been also known under different other names (Arabic Bel Dourin,^English Bcovering disease,^German BBeschalseuche,^French BMal de coit,^Russian BSlucnaja Boleznj,^or BPodsedal^) (Hoare 1972).
Etiology
T. equiperdum is the causative agent of dourine that belongs to the subgenus Trypanozoon (Hébert et al. 2017). This subgenus also includes the three subspecies of T. brucei (Trypanosoma brucei brucei, Trypanosoma brucei gambiense, and Trypanosoma brucei rhodesiense), and T. evansi. T. b. brucei causing nagana in domestic animals and T. b. rhodesiense and T. b. gambiense causing sleeping sickness in humans. Further, T. evansi causes surra predominantly in livestock but also in other mammals (Maudlin et al. 2004).
Origin and identification of the parasite T. equiperdum is classified under the subgenus Trypanozoon along with T. brucei spp. and T. evansi; however, the species classification of Trypanozoon remains a controversial topic because it has been hypothesized that a very close evolutionary relationship exists among the trypanosome species of Trypanozoon (Suganuma et al. 2016). Based on biological and morphological characteristics, Hoare (1972) suggested that T. evansi evolved from T. brucei by adaptation to mechanical transmission from host to host through their insect vectors, while T. equiperdum was derived from T. evansi by adapting to the equine hosts. However, since T. evansi lacks kinetoplast DNA (kDNA) maxicircles (Lun et al. 1992), Hoare's hypothesis that T. equiperdum arose from T. evansi is inappropriate. In fact, there is no evidence to indicate that the ability to reacquire kDNA occurred in trypanosomes or other kinetoplastids (Lun et al. 1992). On the other hand, based on the morphology and molecular data from a study on T. brucei, T. evansi, and T. equiperdum, other researchers suggested that T. evansi stocks distributed around the world were derived from a mutated clone population of T. equiperdum that lacked maxicircle kDNA (Lun and Desser 1995;Brun et al. 1998). To date, phylogenetic analyses show that T. equiperdum and T. evansi are not monophyletic and should therefore be considered as subspecies of T. brucei, a parasite causing sleeping sickness in humans and nagana in animals (Hébert et al. 2017). The possible phylogenetic relationship is illustrated in Fig. 1.
Unlike T. brucei whose kDNA contains hundreds of complex heterogeneous minicircle sequence, T. equiperdum and T. evansi share the same properties of minicircles which are largely homogeneous and totally different from that of T. brucei having only a single major minicircle class (Gibson 2007;Lai et al. 2008;Lun et al. 2010). This strongly supports the hypothesis that T. evansi is likely to directly arise from a mutated T. equiperdum which has a lack of maxicircles.
Based on the biological and molecular evidence, it is suggested that T. equiperdum evolved from an ancient strain of T. brucei which adapted to equine hosts, and that during the period of adaptation, some parts of the maxicircle kDNA and the heterogeneous minicircles were lost, causing the lack of development within an insect vector (Lai et al. 2008). This is supported by the deletion of maxicircle sequences observed in at least two stocks of T. equiperdum (Lun et al. 1992). Lun et al. (1992) showed that Chinese T. equiperdum maxicircles are only about half the size of those of T. brucei, being approximately 14.3 kb in size. Because of the lack of parts of the maxicircle kDNA, cyclic stages no longer occurred in this mutated trypanosome (Borst et al. 1987) and then this ancestral T. brucei isolate, later called T. equiperdum, was finally limited to the equine hosts.
Although great progress has been made in clarifying the genetic and evolutionary relationships, many interesting questions still need to be resolved with more evidence. In addition, it is not clearly understood how or how often maxicircle kDNA loss has happened in dyskinetoplastic trypanosomes although it is clear that this phenomenon is frequently observed in T. evansi and T. equiperdum both in vivo and in vitro. At the same time, the links between mechanical transmission, the loss of kDNA, and host specificity remain uncertain (Wei et al. 2011).
It is difficult to distinguish T. equiperdum microscopically from other members of the subgenus Trypanozoon (T. evansi and T. brucei). In particular, T. equiperdum and T. evansi cannot be differentiated on the basis of morphological criteria (Claes et al. 2005). Like T. evansi, T. equiperdum is usually monomorphic. However, it sometimes exhibits pleomorphism like T. evansi during subpassages in rodents (Wei et al. 2011). At the fine structural level, there are relatively more coated vesicles in the flagellar pocket of T. equiperdum, compared with that of T. evansi. It becomes somewhat difficult to differentiate these two species with respect to the ultrastructural properties (Brun et al. 1998).
Neither parasitological nor serological tests are sensitive and specific enough, thus leading to various kinds of genetic and molecular methods which have been continually updated in order to enhance greater precision in diagnosis of Trypanozoon species and differentiation of these pathogens (Wei et al. 2011). Accordingly, restriction fragment length polymorphisms (RFLPs) (Lun et al. 2004), genome fingerprinting (Waitumbi and Murphy, 1993), and repetitive DNA probes were used (Zhang and Baltz 1994). A series of techniques based on PCR have also been used, for example, mini satellite DNA analysis (Macleod et al. 2001), amplified fragment length polymorphism (AFLP) (Agbo et al. 2002), multiplex-endonuclease genotyping (MEGA) , mobile genetic elements (MGE)-PCR, simple sequence repeat (SSR)-PCR (Li et al. 2005), and random amplification of polymorphic DNA (RAPD) (Lun et al. 2004). PCR test based on the RoTat1.2 variable surface glycoprotein (VSG) cDNA sequence was performed by Claes et al. (2004). Moreover, two kinds of techniques have been developed for detection and identification of African trypanosomes, i.e., fluorescence in situ hybridization with peptide nucleic acid probes (Radwanska et al. 2002) and the loop-mediated isothermal amplification (LAMP) reaction (Thekisoe et al. 2007;Njiru et al. 2008). However, despite the development of these, genetic and molecular techniques by different scholars to clear species-specific identification within the subgenus trypanosome remains difficult. So far, the discovery of a simple and reliable way to entirely distinguish all Trypanozoon species remains a big challenge.
Epidemiology
Host range and geographical distribution T. equiperdum has been reported to infect horses, donkeys, and mules. There is no known natural reservoir of the parasite other than infected equids (Brun et al. 1998). Infection is not always transmitted by an infected animal during copulation (OIE 2013). Horses usually die from infection without treatment, whereas donkeys and mules are more resistant than horses and may remain unapparent carriers. Zebras have been tested positive by serology, but there is no conclusive evidence of infection (Brun et al. 1998). Since T. equiperdum is a tissue parasite found in equines in nature, its establishment in the blood of laboratory animals is extremely difficult. However, once a strain becomes adapted to rodents, the parasites can be maintained by serial passages, in the same manner as T. evansi. It is noted that murine-adapted clones of T. equiperdum can cause acute infection like T. evansi when passaged through mice, rats, rabbits, horses, and dogs. Domestic animals such as sheeps and goats infected with murine-adapted strain of T. equiperdum produce the clinical manifestations of dourine (Wang 1988).
Dourine has a worldwide distribution but few cases have been reported during the last three decades owing to the wide use of artificial fertilization technology (OIE 2013). It was once widespread during the times when the horse was militarily, economically, and agriculturally important. It was of great concern in the USA and Canada at the beginning of the twentieth century. Nowadays, Western Europe, Australia, and the USA are considered to be free from dourine . The infection is endemic in many areas of Asia, Africa, Russia, Middle East, and Eastern Europe (OIE 2008). The latest official reports of dourine (i.e., Complement Fixation Test (CFT) positive cases) were in China, Kazakhstan, Pakistan, Ethiopia, Botswana, Namibia, South Africa, Brazil, Italy, and Germany (Fig. 2). However, due to possible cross reactions in the CFT, it is difficult to conclude that seropositive animals are real T. equiperdum cases (Zablotskij et al. 2003). The prevalence of the disease in some countries is summarized in Table 1.
Transmission
Unlike other trypanosomal infections, dourine is transmitted almost exclusively during coitus. Dourine is the only Fig. 1 Phylogenetic relationship among three closely related trypanosomes which indicates the close relationship between T. evansi and T. equiperdum (Brun et al. 1998) trypanosomosis that is not transmitted by an invertebrate vector. T. equiperdum differs from other trypanosomes in that it is primarily a tissue parasite that is rarely detected in the blood (OIE 2013).
The trypanosomes, which are present in the seminal fluid and mucous membranes of the genitalia of the infected donor animal, are transferred to the recipient during sexual intercourse. Trypanosomes are rarely observed in the bloodstream of the host because they are normally localized in the capillaries of the mucous membranes of the urogenital tract. However, a few trypanosomes occasionally appear in the peripheral blood of animals with chronic infection. This could provide the opportunity for bloodsucking insects to mechanically transmit this parasite, although this is considered to be very rare (Wang 1988).
The infection is more commonly transmitted from stallion to mare, facilitated by the presence of the parasite in the seminal fluid and mucous exudates of the penis and its sheath. From the infected mare, the infection is transmitted to a stallion due to the presence of the parasite in the vaginal mucus (OIE 2013). A study conducted using clinical findings and laboratory and epidemiological analyses of the outbreaks in Italy, based on features such as prevalence, age, reproductive activity, and relationship between the affected animals, indicated that the infection is transmitted directly from animal to animal during coitus (Calistri et al. 2013). As the disease progresses, trypanosomes periodically disappear from the urethra or vagina; during these periods, the animals are non-infective. Non-infective periods may last for weeks or months and are more likely to occur in the later stages of the disease. Thus, transmission is most likely in the early disease process (Wang 1988). An interesting finding in the literature was a positive PCR test result from a prepuce swab taken from a dourine-free stallion immediately after mounting an infected mare. The horse remained negative at all subsequent tests, supporting the theory that the parasite is present in the genital tissues but that sexual transmission is not constant (Vulpiani et al. 2013).
T. equiperdum can pass through intact mucous membranes and it is possible for foals to acquire infection by contamination of nasal or conjunctival membranes with the vaginal Italy CFT 0.54 (Calistri et al. 2013) discharge. These infected foals can spread the organism when they mature. Other means of transmission may also be possible, but there is no evidence that arthropod vectors play any role in transmission. Intravenous or intraperitoneal experimental infections suggest that mechanical transmission by bloodsucking flies cannot be excluded. Foals born to mares infected with T. equiperdum may be infected in utero or may become infected during parturition. Transmission to foals by ingestion of infected colostrum or milk is considered rare (William and Steven 2007). The presence of trypanosomes in the mammary secretions may support that the infection can occasionally pass to foals during suckling (Pascucci et al. 2013). Foals that ingest colostrum from infected mares will become seropositive due to passive transfer of antibodies; these foals are usually seronegative by from 4 to 7 months of age (William and Steven 2007).
Clinical signs
The incubation period between exposure and initial clinical signs is highly variable; it may be as short as 1-2 weeks or as long as several years (William and Steven 2007). Clinical signs of dourine are highly variable in manifestation and severity. The disease is characterized mainly by swelling of the genitalia, cutaneous plaques, and neurological signs but severity varies with the virulence of the strain, the nutritional status of the horse, and stress factors. Clinical signs often develop over weeks or months, frequently waxing and waning with relapses, probably precipitated by stress. This can occur several times before the animal either dies or experiences an apparent recovery. The mortality rate is believed to be in excess of 50% (Sidney et al. 2013).
A number of authors have broken the course down into three stages: stage 1 (genital lesions), stage 2 (cutaneous signs), and stage 3 (nervous signs). Stage 1 involves genital edema and swelling, manifesting 1-2 weeks after infection. In stage 2, typical cutaneous plaques (Bsilver dollar^plaques) appear, with thickening of the skin, considered pathognomonic by some authors. Stage 3 is characterized by progressive anemia, neurological disorders, and paresis of the hindquarters, often ending in death (Claes et al. 2005).
A pathognomonic sign is the edematous plaque consisting of an elevated lesion in the skin, up to 5-8 cm in diameter and 1 cm thick. The plaques usually appear over the ribs, although they may occur anywhere on the body, and usually persist for between 3 and 7 days. They are not a constant feature. Pyrexia is intermittent; nervous signs include incoordination, mainly of the hind limbs, lips, nostrils, ears, and throat. Depigmentation of the genital area, perineum, and udder may occur. In the stallion, the first clinical sign is a variable swelling involving the glans penis and prepuce. The edema extends posteriorly to the scrotum, inguinal lymph nodes, and perineum, with an anterior extension along the inferior abdomen. In stallions of heavy breeds, the edema may extend over the whole floor of the abdomen (OIE 2013).
An observation made by Vulpiani et al. (2013) indicates that infected stallions revealed mild signs than the infected mares. Six months after infection, the stallions were almost asymptomatic. However, the differences with respect to sex cannot be statistically examined because of the low number of considered cases in the study. An observation made by Watson (1920) indicates that apart from the fact of increasing virulence resulting from continued passages accords with general experience that the disease is usually more progressive in the stallion than in the mare.
Pathological lesions of dourine
The disease is characterized by edematous lesions of the genitalia, involvement of the nervous system, and progressive emaciation, and it is ultimately fatal in most cases. Typical cutaneous lesions, from which the disease derives its name Bdourine,^have been described as circular elevated plaques of thickened skin ranging in size from 1 to 10 cm in diameter, resembling money or Bdouros^ (Claes et al. 2005). The constant antigenic variations of the parasite result in the release of a large amount of biological active products and the formation of immune complexes, which are certainly major factors in triggering a variety of clinical and pathological changes (Zwart 1989).
Gross pathological lesion
Dourine is characterized by cachexia, anemia, muscular hypotrophy, ataxia, and lack of coordination of the hindquarters, ptosis of the lower lip, genital lesions, skin edematous plaques, and peripheral edema (Pascucci et al. 2013). The presence of nervous signs without sensory alterations seems to confirm the tropism of T. equiperdum for the peripheral rather than the central nervous system, in contrast with other trypanosomes (Berlin et al. 2009).
At postmortem examination, gelatinous exudates are present under the skin. In the stallion, the scrotum, sheath, and testicular tunica are thickened and infiltrated. In some cases, the testes are embedded in a tough mass of sclerotic tissue and may be unrecognizable. In the mare, the vulva, vaginal mucosa, uterus, bladder, and mammary glands may be thickened with gelatinous infiltration. The lymph nodes, particularly in the abdominal cavity, are hypertrophied, softened, and, in some cases, hemorrhagic. The spinal cord of animals with paraplegia is often soft, pulpy, and discolored, particularly in the lumbar and sacral regions (OIE 2013).
The presence of dourine infection in the stallions did not appear to interfere with libido or the ability to achieve erection even where there is pronounced edema of the scrotum and sheath. Similarly, the presence of infection did not appear to adversely affect the fertility of either stallions or mares. This study also reported on five occasions clean mares conceived to services by infected stallions and on three occasions infected mares conceived to services by clean stallions. Two foals born to infected mares were normal and were reared to maturity (Barrowman 1976).
Microscopic lesions
On histological examination of tissue samples, the disease is characterized by hemosiderin deposition in the spleen, the iliac, supramammary, and popliteal lymph nodes showed non-specific reactivity with hyperplasia of the plasma cells, a sign of increased hemolymphatic activity. The edematous plaque showed a characteristic picture of pustular dermatitis, particularly severe around the lesion, with severe inflammation and vacuolar degeneration extending to the deepest layers of the skin, with involvement of the cutaneous adnexa and perivascular plasma cell inflammation. There was exudates of cell detritus in the same area, mainly eosinophils and the bodies of free parasitic protozoa, in a picture described as Btrypanosomal sand^ (Pascucci et al. 2013;Scacchia et al. 2013).
In the nervous system of infected horses, neurodegenerative lesions and inflammatory vasculitis of the central nervous system with edematous infiltration in the facial and lingual nerves were reported. In the udders, there are histological lesions attributable to severe interstitial inflammation accompanied by strong supramammary lymph node reactivity and the presence of Russell's bodies. Liver showed multifocal areas of hepatitis while the kidneys are affected by plasma cell inflammation of the renal pelvis. Periglandular inflammation in the vulva, vagina, uterus, and clitoris was also observed in the infected horse. The constant finding of iliac and supramammary lymph node positivity and lymphatic activity on both macroscopic observation and histological examination seem to confirm that the parasite spreads mainly through the lymphatic system (Pascucci et al. 2013).
Although, depigmentation around the perineum is often described as characteristic of clinical cases of dourine Hagos et al. 2010a;Vulpiani et al. 2013), no microscopic description of such lesions was cited in previous literatures. Severe dermatitis with hydropic degeneration and necrosis of the keratinocytes of stratum spinosum and necrosis of basal cells including the melanocytes with excess free melanin pigment within the epidermis were reported recently. The probable cause of depigmentation around the vulval skin of infected mares could be due to severe necrosis of melanocytes, as the depigmented areas were microscopically characterized by severe necrosis of cells, excess free melanin, and formation of cystic structures in the epidermis (Yonas 2015) (Fig. 3).
Diagnosis
Diagnosis of dourine is a challenge, due to limited knowledge about the parasite and host-parasite interaction following infection. In practice, diagnosis is based on clinical evidence supported by serology (Alemu et al. 1997;Hagos et al. 2010a). Clinical signs of dourine can provide a strong indication of the presence of the disease, but confirmatory diagnosis is needed (Claes et al. 2005). The incubation period may vary from a few weeks to several years, and some of the clinical signs, which include genital edema, weight loss, skin lesions known as silver dollar plaques, and neurological signs, may be absent in the early stages or during latent infections (Luckins et al. 2004;Claes et al. 2005). Diagnosis of dourine, therefore, requires confirmation by parasitological, serological, and molecular techniques.
Parasitological diagnosis
Wet and thick blood films In this test, 5-10 μl of blood is placed on a slide and examined microscopically at ×400 magnification under a cover slip. Trypanosomes are observed moving between the erythrocytes in infected animals. It has very low sensitivity, with a detection limit as high as 10,000 trypanosomes/ml, but it is still in use because of its low cost and simplicity. Giemsa or Field-stained thin blood films have a similarly low sensitivity. It is time consuming (10-20 min per slide) and requires expertise to recognize the parasite (Murray et al. 1979).
Microhematocrit centrifugation technique Microhematocrit centrifugation technique (mHCT), a blood concentration technique (also called the capillary tube centrifugation technique or the Woo test), is the most frequently applied concentration technique with better sensitivity than direct microscopic examination. In this test, capillary tubes containing anticoagulants are filled three-quarters full with blood. The dry end is sealed with plasticine. By high-speed blood centrifugation in a hematocrit centrifuge for 6-8 min, trypanosomes are concentrated between the red blood cells and the plasma, together with the white blood cells. The capillary tubes mounted in a special viewing holder can be directly examined at low magnification (×10 or ×40) for motile parasites. The estimated detection threshold of mHCT is 500 trypanosomes/ml of blood sample (Reid et al. 2001).
Mini anion-exchange centrifugation technique The mini anion-exchange centrifugation technique (mAECT) consists of separating the trypanosomes which are less negatively charged than blood cellular components from venous blood via anion-exchange chromatography and finally concentrating them at the bottom of a plastic collector tube by low-speed centrifugation. The tip of the glass tube is then examined in a special holder under the microscope for the presence of trypanosomes. The large blood volume of up to 300 μl enables the detection of less than 100 trypanosomes/ml, resulting in high sensitivity. However, the manipulations are quite tedious and time consuming (Reid et al. 2001;Buscher et al. 2009).
Animal inoculation Repeated attempts have been made by different workers (Alemu et al. 1997;Clausen et al. 1999Clausen et al. , 2003 to demonstrate and isolate T. equiperdum in laboratory mice but all were unsuccessful. However, once a strain becomes adapted to rodents, the parasites can be maintained by serial passages, in the same manner as T. evansi (Luckins 1994). Under laboratory conditions, dogs can be infected with T. equiperdum as reported by Rouget (1986). In experimental infections carried out in the Institute for Tropical Medicine to raise antisera against VSGs, rabbits infected with the available laboratory strains developed clinical signs that could not be distinguished from those developed by rabbits infected with T. evansi (Verloo et al. 2001). Owing to the marked predilection of T. equiperdum for the testicles of rabbits, some authors recommended intratesticular inoculation of these animals for the diagnosis of dourine in equines. Ruminants were refractory to infection with T. equiperdum (Hoare 1972).
Serological techniques
It is extremely difficult to detect the parasite in the body fluids of infected horses (Claes et al. 2005); therefore, diagnosis of T. equiperdum by standard parasitological techniques is difficult, owing to the low numbers of parasites in the blood or tissue fluids. Consequently, the demonstration of trypanosomal antibodies in the serum has become the most important parameter determining the disease status of individual animals (Bishop et al. 1995). Trypanozoon group-specific trypanosomal antigen could be of use in an antibody assay for the diagnosis of T. equiperdum infections. However, based on anecdotal evidence, it appears that T. equiperdum-infected laboratory animals and horses suspected of dourine also positively react in the Card Agglutination test trypanosomiasis (CATT)/T. evansi and Enzyme-linked Immunosorbent Assay (ELISA)/T. evansi prepared with fixed whole trypanosomes of the RoTat 1.2 VAT .
CATT/T. evansi test is fast, uses a standardized antigen, and can be performed in situ, i.e., without the need of a fully equipped laboratory. Recently, it has been proven that most so-called T. equiperdum strains also express isoVATs of T. evansi RoTat 1.2. Therefore, the CATT/T. evansi may prove to be a good test for equine trypanosomosis, regardless whether the causative agent is T. evansi (surra) or T. equiperdum (dourine) .
A B C D Fig. 3 a Depigmentation of the vulval lip (gross). b Vacuolar degeneration of the cells (lighter arrows) and necrotized cell (darker arrow) in the stratum spinosum, degeneration and necrosis of the basal cells with melanin pigment were evident (circled areas). c Excess free melanin in the stratum spinosum (small circles) and within the basal layer (large circles). d Severe dermatitis with infiltration of lymphocytes and plasma cells in the epidermis and dermis (circled areas) (Yonas 2015) The complement fixation test is the most commonly used OIE-prescribed serodiagnostic test developed for T. equiperdum and successfully used as part of a program to eliminate T. equiperdum from North America. It is still used for international trade in monitoring horses for export/import. Despite the usefulness and universal acceptance of the CFT for diagnosing dourine, some discrepancies have been recorded. The disadvantages of the CFT are that it requires careful continuous titration of numerous labile agents and that it does not function with sera having anticomplementary activity. CFT is not species specific, but only specific for the subgenus Trypanozoon. The drawback of the test is lower specificity where it cannot differentiate T. equiperdum from other similar trypanosomes. Hence, the diagnostic significance of CFT is therefore doubtful in countries where both T. equiperdum and T. evansi infections occur in equines (Luckins 1994). Although the CFT has been in use for many years for diagnosis of dourine, it is considered to be less sensitive than ELISA and IFAT for the detection of the serum antibodies against T. equiperdum (Wassal et al. 1991;Bishop et al. 1995).
Indirect fluorescent antibody test is frequently used for the diagnosis of dourine, as a confirmatory test for CFT results, since immunofluorescence provides a reliable and sensitive technique. But its interpretation is both subjective and labor intensive, and it is therefore more suited to the testing of small numbers of sera (Williamson et al. 1988).
The use of ELISA for routine diagnosis of dourine would provide a significant advantage over current serological tests if a defined antigen was used, since it would permit test standardization and more readily allow comparison of tests among laboratories. It additionally, lends itself to a considerable degree of automation, which makes it suitable for a large number of samples (Wassal et al. 1991). Different workers have stated that the ELISA has a satisfactory concordance ratio with CFT and can be used to supplement CFT (Williamson et al. 1988;Alemu et al. 1997). There are also several other alternative serological tests that are used, such as the agar gel immunodiffusion test, the arrayed immunodiffusion method (Hagebock et al. 1993), and the competitive immunoassay (cELISA). The cELISA method has several advantages over the CFT: it can be performed in less time than the corresponding CFT procedure, it is reproducible, results are objectively measured and calculated, and the method is amenable to automation (Katz et al. 1999). While serological tests can be the method of choice for mass screening of populations, their main limitation will remain as the failure to demonstrate the parasite. Unfortunately, parasitological techniques are known to lack sensitivity, especially for the detection of T. equiperdum, which is considered to be a tissue parasite rather than a blood parasite (Brun et al. 1998).
Molecular techniques
Although no T. equiperdum-specific polymerase chain reaction (PCR) method is available, subgenus Trypanozoon-specific PCR can be used for detection of T. equiperdum DNA. Recently, a highly sensitive real-time PCR for Trypanozoon subgenus was applied on tissues and fluid samples from a naturally dourine-infected horse, enabling the detection of low numbers of the parasites (Scacchia et al. 2011;Pascucci et al. 2013). PCR and other related DNA amplification methods have been used to examine exudates or tissue samples, taking into account their failure on blood samples after the initial phase of the infection (Calistri et al. 2013).
Direct diagnosis based on molecular techniques can be highly sensitive for parasite detection in body fluids such as the blood (Becker et al. 2004). However, this approach is difficult to apply for mass screening and negative results do not exclude the possibility of infection. In fact, T. equiperdum multiplies predominantly in extracellular tissue spaces and is seldom found in peripheral blood (Theis and Bolton 1980). Diagnosis of T. equiperdum infection is thus still strongly based on serological evidence.
Treatment
Pharmaceutical therapy is not recommended because animals may improve clinically but remain carriers of the parasite (OIE 2013). There are no officially approved drugs to treat horses suffering from dourine although some older publications mentioned experimental treatment of horses with su ramin and ne oar sph ena m ine (Ciuca 19 33) or quinapyramine sulfate (Vaysse and Zottner 1950). Evidence from in vitro drug sensitivity determination of T. equiperdum indicates that suramin, diminazene, quinapyramine, and cymelarsan are effective (Zhang et al. 1992;Brun and Lun 1994). These were also supported by other researchers who found cymelarsan® quite effective in curing horses at both 0.25 and 0.5 mg/kg in acute as well as chronic form of dourine (Hagos et al. 2010b). Brun and Lun (1994) reported drug sensitivity of T.equiperdum isolates in vitro and found the isolate was highly sensitive to melarsorol, isometamidium, and suramin; with regard to diminazene, T.equiperdum was not sensitive as the most sensitive T. evansi strains.
Prevention and control
There is no vaccine available for dourine. As dourine is primarily a venereal disease, prevention of natural mating or AI with infected horses (stallions or mares) or infected stallion semen is the most important means of control. Prevention of dourine is therefore based on the establishment of freedom from infection, and this is done by testing blood for the presence of antibodies against T. equiperdum, which is more reliable than testing for the presence of the protozoan parasite itself. Any introductions of horses from endemic areas or areas of incursion should be isolated and the blood tested for antibodies by complement fixation test (Sidney et al. 2013).
Control of the disease depends on compulsory notification, slaughter of infected animals, and movement control enforced by legislation in most countries (OIE 2013). Dourine should be eradicated from an incursion into a non-endemic area by identification of the source, thorough tracing and testing of all in contact, and euthanasia of infected and seropositive horses (Sidney et al. 2013). Currently, an eradication strategy is imposed by the World Organization for Animal Health (OIE) with slaughtering of seropositive horses while treatment is prohibited (Zablotskij et al. 2003). However, it is not economically feasible to apply a strict test and slaughter policy to control dourine in developing countries. Based on the result of the in vivo drug sensitivity study, a revised strategy of the appropriate drug treatment in dourine endemic areas instead of eradication could be recommended to the OIE (Hagos et al. 2010b).
It is important to note that castrating adult stallions does not always change the copulatory ability of such animals and it should be performed with caution when attempting an eradication program. To prevent the introduction of dourine, serum samples should be taken following a period of isolation (quarantine) to ensure that the animals are not in the incubation period (Zablotskij et al. 2003).
The difficulty in the diagnosis of T. equiperdum has led to difficulties in obtaining reliable data on the prevalence and distribution of the disease and for the implementation of monitoring, treatment, and control program. Moreover, shortages of trypanocidal drugs and the absence of vaccines against trypanosomosis have hampered the control and prevention of the disease in endemic areas .
Conclusion
Owing to the difficulties and challenges related to the diagnosis of T. equiperdum, it was not possible to achieve reliable data on many aspects of the disease and above all for the implementation and monitoring of the disease control program. Similarly, the less attention given to study the disease resulted in a remarkable deficiency in our current knowledge of the disease. Hence, dourine imposes further detail study in developing very sensitive and specific diagnostic tools, host parasite interaction (pathology), and chemotherapy which will have tremendous aid in effective control of the disease.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is distributed under the terms of the Creative Comm ons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2022-12-26T14:40:31.262Z | 2017-04-24T00:00:00.000 | {
"year": 2017,
"sha1": "15255c90f29dd4e98333f3580615ac91b587dd9d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11250-017-1280-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "15255c90f29dd4e98333f3580615ac91b587dd9d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
7147443 | pes2o/s2orc | v3-fos-license | Solving QVIs for Image Restoration with Adaptive Constraint Sets
We consider a class of quasi-variational inequalities (QVIs) for adaptive image restoration, where the adaptivity is described via solution-dependent constraint sets. In previous work we studied both theoretical and numerical issues. While we were able to show the existence of solutions for a relatively broad class of problems, we encountered problems concerning uniqueness of the solution as well as convergence of existing algorithms for solving QVIs. In particular, it seemed that with increasing image size the growing condition number of the involved differential operator poses severe problems. In the present paper we prove uniqueness for a larger class of problems and in particular independent of the image size. Moreover, we provide a numerical algorithm with proved convergence. Experimental results support our theoretical findings.
Introduction.
When solving inverse problems in image processing in a variational framework, one faces the issue of selecting a regularizer, which on the one hand should provide suitable reconstruction quality and on the other hand should have sufficient theoretical properties to guarantee existence and uniqueness of a solution.
A common choice is to rely on convex regularizers, which in combination with a convex data fidelity term has the advantage that theory and numerics of convex optimization have been intensively studied in literature and are well understood [33]. A prominent example for a convex regularizer with suitable theoretical properties is the total variation (TV) semi-norm [32,33].
On the other hand, acknowledging the fact that we are reconstructing images, an investigation of the empirical distribution of typical images (Zhu & Mumford [43]) shows, that non-convex regularization terms are more appropriate to choose. Using such nonconvex regularizers comes with the challenge to prove existence and uniqueness for the resulting variational problem.
In our work ( [19,23,24] and the present paper), we follow a strategy which combines elements of convex and non-convex formulations. We start with a convex problem using discrete TV regularization and consider its dual formulation, which is given as a constrained quadratic optimization problem min p∈D F (p) (1.1) with a convex constraint set D. The next step is to make the constraint set D depending on a fixed element, say p 0 , i.e. we consider D = D(p 0 ). This generalization makes the regularization approach an adaptive one. Since the optimization problem is still convex, existence of a solution p is guaranteed. Finally, we consider the problem of finding a fixed-point p * of the mapping p 0 → p := arg min We refer to the convex problem in (1.2), where p 0 is fixed, as the inner problem. The overall problem of finding a fixed-point p * is equivalent to solving a quasivariational inequality (QVI) [10], thus we can make use of available theory on existence. Uniqueness of such a fixed-point, however, in general is an open issue, when the underlying dual functional (p 0 fixed) is not strictly convex. To tackle this uniqueness issue under suitable conditions is one of the main contributions of this paper. In particular, it turns out that under the assumption that the mapping p 0 → p is a contraction w.r.t. a suitable semi-norm, the corresponding primal solution is unique. Having found the fixed-point p * , this fixed-point defines the adaptivity of the regularizer via the constraint set D(p * ), while also being the solution of the convex inner problem. As a consequence, our approach implements a solution-driven adaptivity instead of a data-driven one. Furthermore, we can investigate the behavior of the regularizer at the given fixed-point and find that it mimics the behavior of a non-convex regularizer.
Related work: As exemplary applications for our regularization approach we consider total variation based image denoising and non-blind image deblurring. We start with related work concerning image denoising. For the task of denoising total variation regularization was introduced by Rudin, Osher and Fatemi in [32]. Various modifications have been proposed to make this functional adaptive to the input data [2,4,14,18,34,36,42]. The approaches in [2,4,14,36] can be described by means of locally dependent constraint sets (data-driven), i.e. a fixed p 0 in our formulation.
Another important class of approaches are the non-local methods [7,17,20,31] including non-local variants of TV. These non-local variants can also be regarded as adaptive, since their local weights are depending on the input. On the other hand, adaptive methods which steer adaptivity by locally averaging the input data over a neighborhood, e.g. using the structure tensor [16], can be interpreted as non-local.
Another commonly used modification of the ROF functional is to replace the L 2 norm of the data fidelity term by an L 1 norm [1]. We remark that, by using a standard splitting of variables, the approach presented here can also be formulated with such an L 1 data term.
Recent developments in the field of TV regularization focus also on extending TV to second-or higher-order [6,35]. In [22] we have proposed an anisotropic approach of first-and second-order TV, which due to its formulation by varying constraint sets also fits into the concept of solution-driven adaptivity presented here.
The regularization approaches considered above can also be used for the task of deblurring, see e.g. [8,11,12]. In [29] an TV based deblurring approach with adaptive choice of the regularization parameter has been proposed. Similar to the task of denoising, non-local operators have being also considered for TV deblurring, see e.g. [20,38].
Besides for image restoration tasks, TV-based regularization approaches are widely used for other inverse problems in computer vision, e.g. for optical flow [41,40] and multi-labeling [21,37].
Concerning image restoration with non-convex regularization, in addition to Zhu & Mumford [43] we want to mention here the work by Charbonnier and co-workers [13,5] and by Levin [25].
As already mentioned, the fixed-point problem (1.2), which is the core problem in our considerations, is equivalent to a quasi-variational inequality. We make use of the work on QVIs presented in [10,27,28]. While theory on existence can be directly utilized, uniqueness results do not apply due to the non-strict convexity of the inner problem. We discuss these issues in detail in the main part of this paper.
Contribution: In our previous work [23,24] we sketched the proposed framework and provided existence theory. In [24] we showed uniqueness for a very narrow class of problems, which also scaled unfavorably with the image size.
In the present paper, we show uniqueness of a fixed-point for a broad class of QVIs for image denoising, namely those, for which the underlying solution operator is a contraction. In particular, this condition is not depending on the image size. Thus, our theoretical results significantly generalize our previous work in [24].
Moreover, we give a detailed discussion why classical results (Noor et al. [28] and Nesterov & Scrimali [27]) on the uniqueness of solutions of QVIs can not directly be applied to our framework. However, there is a strong relationship between our theoretical considerations and the work of [27,28].
Finally, we propose an algorithm for solving the considered QVIs and prove convergence. We support our theoretical results by numerical experiments.
Paper organization: Our paper is organized as follows. We start with a review of three case examples of TV regularization for image denoising and non-blind image deblurring in Sect. 2. In Sect. 3 we recall our model of solution-driven adaptivity described by means of quasi variational inequalities. We consider theoretical results in Sect. 4, where we firstly recall theory on existence (Sect. 4.1), then discuss the impact of existing work on uniqueness (Sect. 4.2.1) and finally prove uniqueness for the considered QVIs under suitable conditions (Sect. 4.2.2). In Sect. 5 we provide an algorithm and prove its convergence. We present numerical experiments supporting our theoretical results in Sect. 6.
2. TV-Regularization and Data-Driven Adaptivity. In the following, we recall several variational approaches for image denoising and non-blind image deblurring that are based on total variation (TV) regularization. These approaches will be the starting point for our generalizations in Section 3.
We use the following general notations. Firstly, let Ω ⊂ R d be a d-dimensional open, bounded domain with Lipschitz boundary. Secondly, in R n , n arbitrary, we denote the closed ball with radius α centered at 0 by B α (0).
2.1. Image Denoising. We consider the standard noise model, where some noisefree image u is distorted by additive i.i.d. Gaussian noise with zero mean. For the noisy image we use the notation f and assume f ∈ L 2 (Ω). We refer to u as the original or ground truth image.
2.1.1. The Classical ROF Model. We start with the total variation denoising approach by Rudin, Osher & Fatemi (ROF) [32], where BV (Ω), Ω ⊂ R d is the space of functions of bounded total variation and is the total variation semi-norm. We rewrite α TV(u) in terms of constraint sets: where div is applied element-wise on D. The dual problem of (2.1) (cf. [8]) can be formulated as min where D is the closure of D.
Let us now consider a discretization of (2.1). To this end, we consider an equidistant grid on Ω with n grid points. The grid values of the dual variable p(x) ∈ R d are interpreted as a vector p ∈ R nd . The dual problem (2.4) in the discrete formulation then becomes min where L : R nd → R n is a discretization of the divergence operator div. The constraint set in (2.4) becomes where each local constraint set D loc i , i = 1, . . . , n is a d-dimensional closed ball B α (0) of radius α. The dual problem (2.5) will be the starting point for our generalization in Sect. 3.
Higher Order Total Variation.
Analogously to TV regularization of firstorder, higher-order models can be considered. We exemplarily focus on the second order total variation in the case d = 2: where div 2 p := ∂ xx p 1 + ∂ xy p 2 + ∂ yx p 3 + ∂ yy p 4 . Typically first-and second-order TV are used jointly for regularization, e.g., for the task of denoising, one could solve where BV 2 (Ω) is the space of functions with bounded total variation of first-and secondorder (see [33,Section 9.8] for details).
Proceeding analogously to the case of first-order TV, we can derive a dual formulation of (2.8), which after discretization reads similar to (2.5): where A : R 6n → R n is a operator discretizing div p 1 + div 2 p 2 with p = (p 1 , p 2 ) ∈ R 2n × R 4n . The constraint set D in (2.9) is given by a product set of local constraint sets D loc i , where each set D loc i is again a product of a two-dimensional ball B α (0) of radius α and a four-dimensional ball B β (0) of radius β.
We refer to [6] for the alternative model of Total Generalized Variation (TGV), which is based on a different operator A and a different constraint set D.
Image
Deblurring. In this section we consider the task of image deblurring/ deconvolution. For the sake of simplicity, we focus on non-blind deconvolution, where the convolution kernel is known a-priori. The problem formulation is as follows. Let f be some observed data, which are obtained from a noise-free image u by convolution with a kernel M (x) : Ω → R, followed by an addition of Gaussian noise, i.e.
where δ is a realization of a Gaussian random variable with zero mean. In order to recover u from f , assuming that u → M * u is an operator mapping from L 2 (Ω) → L 2 (Ω), we aim at minimizing arg min Moving to a discrete formulation of the problem, we now assume that u, f ∈ R n are the function values at the n nodes of an equidistant two-dimensional grid. Moreover, we replace the continuous convolution M * u by a matrix-vector-product M u, where M now denotes a n × n matrix. In what follows we assume that M is invertible. As in the previous examples, we denote by L the discretization of the divergence operator div. The optimization problem we consider is given as arg min for p = (p 1 , p 2 , . . . , p n ) ⊤ with p i ∈ R 2 . We derive the corresponding dual problem as follows. The optimality condition for u reads (2.14) We deduce from (2.14) that When maximizing E * (p) over D = {p ∈ R nd , p i ∈ B α (0)}, the constant term 1 2 ∥f ∥ 2 2 can be omitted without changing the optimum. Moreover, switching from the maximization of E * to the minimization of F (p) := −E * (p), we can formulate the dual problem of (2.12) as arg min From a solution p of the dual problem we can retrieve the solution u of the primal problem by by u = M −1 (f − A p). We observe that the dual problem attains the same form as in the examples before (cf. Eqns. (2.5) and (2.9)).
Adaptive Regularization.
In the literature various adaptive TV approaches have been proposed. They can generally be divided into two classes, namely, approaches with locally varying regularization strength and anisotropic TV approaches. Both concepts are covered by the formulation via constraint sets as follows. Starting with the general form arg min where D is the product set of the local constraint sets D loc 1 , . . . , D loc n as in (2.6), we now allow the sets D loc i to vary locally: • By individually changing the size of D loc i , e.g. depending on the noise or image content, the regularization strength changes locally.
• By choosing anisotropic shapes for D loc i , e.g. rectangles [4], parallelograms [36], and ellipses [2,19,22], a directionally dependent regularization is introduced. In both cases, the introduced adaptivity has to be steered by additional information, e.g. about noise level, edge position and edge orientation. The standard way is to either estimate the required properties as additional unknowns in the optimization process, or to examining a pre-smoothed version of the data f . The later case formally can be regarded as introducing a dependency of D on f , i.e. D = D(f ).
We pick up three different examples of adaptive/anisotropic TV regularization which follow the latter concept. The first is obtained from the standard ROF model by locally varying parameter α.
Example 2.1 (Data-driven adaptivity). Let us consider the following generalization of the optimization problems (2.1) and (2.11), where the parameter α is allowed to change locally: where we assume α(x) ≥ c > 0. Our aim is to reduce the local regularization parameter α(x) at edges. A simple way to find such edges would be to consider the gradient magnitude of the input data f and to set α(x) := max(α 0 (1 − κ|∇f (x)|), ε) with a constant α 0 determining the maximal regularization strength and some small ε > 0 to ensure boundedness of α from below by a positive constant, which ensures existence [22]. However, since the approach should be robust against the noise contained in f , a presmoothing of f before evaluating the gradient is inevitable.To this end, let f σ := K σ * f be the convolution of f with a Gaussian kernel K σ with standard deviation σ > 0. An adaptive choice of α(x) is α(x) := max{α 0 (1 − κ|(∇f σ (x))|), ε}. (2.24) There exist alternative choices for varying the regularization strength, such as the gfunction from the Perona-Malik model [30], or models utilizing the structure tensor [16]. Considering again the dual formulation of (2.23) in a discrete setting, we retain the form (2.5), with the only difference that the local constraint sets become dependent on the spatial location and the input data f ,
25)
where i = 1, . . . , n are the indices of the grid nodes. Note that in the discrete setting, where L is a discretization of div, the discrete pendant of ∇ is − L ⊤ . Recall that the dual problem reads arg min The key observation in this example is that we formally introduced a dependency of D on f via α. We denote this dependency by D(f ). We refer to this concept as data-driven adaptivity. ♦ Our second example generalizes the ROF model by considering a directionally dependent regularization, which results in an anisotropic shape of the local constraint sets.
Example 2.2 (Anisotropic first-order TV). We consider an anisotropic TV regularization with a strong penalization of the image gradient in homogeneous regions (isotropic) and, at edges, a weak penalization in normal direction and a strong penalization in tangential direction to the edge (anisotropic).
To this end we require information about the location and orientation of edges in terms of an edge indicator function χ e : Ω → [0, 1] and a vector field v e : Ω → R 2 of edge normals, which both can be obtained from the standard structure tensor [16] of f by setting where λ 1 ≥ λ 2 ≥ 0 are the ordered eigenvalues of the structure tensor, w 1 is the eigenvector to eigenvalue λ 1 and κ > 0 is a parameter controlling the edge sensitivity. We refer to [24] for exact definitions and further details. With this edge information, we choose D loc i = D loc (x i ) at grid node x i to be an ellipse with one half axis parallel to v e (x i , f ) of length χ e (x i )α + (1 − χ e (x i ))β, with constants 0 ≤ α ≤ β, and the perpendicular half axis of length β.
The cross product of the local constraint sets D loc i as in (2.27) defines our (global) constraint set D(f ). ♦ Finally, let us consider an example of adaptive higher-order TV regularization. Example 2.3 (Adaptive first-and second-order TV). We revisit the first-and secondorder TV regularization models from Sect. 2.1.2 with the discretized dual problem where the operator A : We are aiming at a regularization with locally varying regularization strengths α i for first-and β i for second-order. Analogously to Example 2.1, we choose with constants α 0 , β 0 > 0, i.e. in homogeneous regions (vanishing gradient ∇f = 0) we penalize the first-and second-order TV with factor α 0 and β 0 , respectively, while we reduce the regularization strength at edges (|∇f | ≫ 0). As local constraint sets we then choose The above examples show, that many popular variational approaches conform to the generic model (2.22). A limitation of the above adaptive approaches is, that the adaptivity is determined by the noisy input data f (data-driven adaptivity), rather than by the noise-free solution u. In the next section, we show how we can switch from a data-driven to a solution-driven adaptivity.
3. Solution-driven Adaptivity. In [24] we have proposed a new kind of adaptivity, where the constraint set D depends on the unknown solution of the problem. We recall this approach below.
Our approach generalizes the examples of Section 2 with respect to the operator A and the form of the constraint set D.
We describe this generalization in a discrete setting, where we consider again an equi-distant grid on Ω with n grid points. We start with a dual problem of the form where A : R mn → R n now is a general discrete operator. We assume that the constraint set D takes the form where each D loc i is a local m-dimensional closed convex constraint set at the i-th grid point. We stress that the shape of D loc can be arbitrary. The solution of the primal problem can be retrieved by u := M −1 (f − A p) from the solution p of the dual problem (3.1).
We remark that the dual problem (3.1) can be equivalently formulated based on a variational inequality (VI) In our case, the gradient of F (p) is an affine function of p: We will make use of this specific form in the following section. We now generalize the problem (2.5) by introducing a dependency of D on the dual variable, i.e. D = D(p 0 ) for some p 0 ∈ R mn and search for a fixed-point p * of the mapping Please note that we have to distinguish between a fixed-point of (3.5), denoted by p * , and a minimizer p of the convex dual problem arg min p∈D(p 0 ) F (p) for fixed p 0 . Both coincide only if p 0 = p * .
Having found a fixed point p * , the corresponding constraint set is D(p * ), i.e. the adaptivity becomes solution-driven.
Moreover, we can interpret p * as the solution of a convex problem with fixed constraint set D = D(p * ) and can consider the solution u * of the corresponding primal problem, which can be retrieved by Introducing the fixed-point problem (3.5) has several advantages: 1. The inner problem, i.e. the problem of finding arg min p∈D(p 0 ) F (p) for a fixed p 0 is a convex problem. Theoretical and numerical issues of this problem have been intensively studied. 2. Also for the outer fixed-point problem, theory on existence is at hand. 3. Concerning the inner problem, our ansatz allows us to switch between primal, dual and the saddle-point formulation for fixed p 0 . In particular, after having found the fixed-point p * , we can retrieve the primal solution u * as the solution of (3.6) with fixed p 0 = p * .
Remark 3.1.The concept of a solution-driven adaptivity also covers the case that the adaptivity is determined based on the primal variable u, since we can express D(u) by D(p) using the relationship u = M −1 (f − A p). However, the fixed-point problem (3.5) in general is not equivalent to the non-convex problem arg min Let us illustrate the considerations made so far by an example: Example 3.2. We compare the two conceptually different ways of implementing adaptivity -data-driven adaptivity, where D depends solely on the input data f , and solutiondriven adaptivity, where the constraint set D depends on the unknown u (or, equivalently, p).
Firstly, we recall the data-driven adaptive TV regularization from Example 2.1, where the regularization parameter α was chosen locally at grid node i to be Our proposed generalized approach permits to make the constraint set depending on u.
To this end, let (assuming that u is noise-free, we omit the Gaussian pre-smoothing), and Considering alternatively the dual problem (2.5), we can by means of the relationship u = M −⊤ (f − A p) instead assume that D depends on p, or, more precisely, on A p: Although there is also a formal dependency on f , we omit this in our notation to emphasize the different models D(f ) (adaptive to data f , but fixed) and D(p) (adaptive to the unknown p).
We will compare both models experimentally in Section 6. ♦ As already mentioned before, alternative choices for varying the regularization strength α, such as using the function g(|∇u|) from the Perona-Malik diffusion model [30], exist. In view of the theory provided in the next section, such an α(u) should at least be Lipschitz-continuous w.r.t. u.
Moreover, we stress that besides the examples discussed in Sect. 2.3 various other models of adaptive/anisotropic regularization exists, which are covered by the above general model (3.5), see e.g. [4,22,23,24,36].
Finally, we remark that the generalized problem of finding a fixed-point of (3.5) is equivalent to solving a quasi-variational inequality problem (QVIP) (cf. [10]) When reformulating the proposed fixed-point problem as a QVIP, we can make use of the theory existing in literature [10,27].
We will provide existence and uniqueness results for the QVIP (3.12) in detail in the subsequent section.
Theory.
The key issue of this section is to prove uniqueness for the problem (3.12) under sufficient conditions. A prerequisite for uniqueness is the existence of a solution. We therefore briefly recall existence results from literature in the next section, before turning to uniqueness results in Section 4.2.
4.1. Existence. We recall existence results from [24] for problem (3.12) together with the necessary assumptions. These assumptions will also be required for uniqueness results provided in second part of this section.
where each D i loc : R mn ⇒ R mn , i = 1, . . . , n has the following properties: (i) For fixed p the set D i loc (p) is a closed convex subset of R mn .
(ii) There exists C > 0, such that for all i, p: D i loc (p) ⊂ B C (0) (closed ball with radius C).
(iii) There exists c > 0, such that for every p and every i we have where A : R mn → R n is a linear operator. Moreover, let D(p) be defined as in (4.1), such that D i loc (p), i = 1, . . . , n satisfy Assumption 4.1. Then the problem (3.12) has a solution.
Proof. See [24,Prop. 1]. The proof in [24] utilizes a general existence result for QVIs presented in [10], whose core ingredient is Brouwer's fixed-point theorem and which makes use of the continuity of the mapping p → Π D(p) (q) (guaranteed by Assumption 4.1(iii)). We will see that for uniqueness results, a higher regularity of p → Π D(p) (q), namely a Lipschitz-continuity is required.
Remark 4.3 (A-priori bounds)
. From Assumption 4.1 (ii) we derive an a-priori bound for D(p) independent from p: We define R := √ nC. In particular, (4.2) provides a bound for a solution p * of (3.12): Uniqueness. Let us now consider uniqueness results for the QVI (3.12). This part comprises the main contribution of this paper.
We start with a discussion on related work in Sect. 4.2.1, in particular the paper by Nesterov & Scrimali [27], which provides existence results for strongly monotone gradients ∇F under certain conditions. We will see that this theory is only partially applicable in our context, since ∇F in our case is strongly monotone only on a subspace of R mn . Consequently, we will be able to show uniqueness of p only with respect to its component in that subspace. The final uniqueness result is provided in Sect. 4.2.2.
4.2.1. Existing Theory. We recall QVI (3.12), which is of the form We briefly recall the required conditions below. It is of particular importance that these conditions have to hold for an arbitrary norm ∥x∥ B := √ x ⊤ Bx for a positive definite matrix B. Note that the scalar product in (3.12) is the standard product independent from B.
Firstly, operator g : R mn → R mn is assumed to be Lipschitz-continuous with param- where ∥.∥ B * is the norm in the dual space of R mn equipped with ∥.∥ B . Note that constant µ B depends on the chosen norm B. We indicate this dependency by the subscript B. Secondly, g is assumed to be strongly monotone with parameter ν B , again depending on B, i.e., Both constants µ B and ν B define the condition number γ B := µ B ν B , which in our case is the condition number of A ⊤ A.
Finally, it is assumed that the projection Π D(p) (q) is Lipschitz-continuous w.r.t. p, i.e. for arbitrary q ∈ R mn , We refer to η B as the variation rate of D(p).
Under the above assumptions, Cor. 2 in [27] provides uniqueness in the case that One immediately observes that two open issues preclude the direct application of the theory in [27,28] to our problem: • Operator ∇F in (3.12) has a non-trivial null space N (A) and thus is not strongly monotone. • On the complement N ⊥ (A) of the null space, the condition number γ 2 w.r.t. the standard Euclidean norm tends to infinity with increasing problem size. As a consequence, assuming that η 2 is fixed, (4.8) can not be satisfied for arbitrary large image. Alternatively, in order to guarantee (4.8), η 2 has to be reduced with increasing problem size, which is unfavorable since it would mean to restrict the variability of the adaptive constraint set. Both issues in theory can be tackled by restricting the original QVI to the subspace N ⊥ (A) and switching from the standard Euclidean norm ∥ · ∥ B := ∥ · ∥ 2 to the problemspecific norm ∥x∥ B : We describe this approach in detail below. For practical applications this approach would require a singular value decomposition (SVD) of the operator A ⊤ A, which is intractable for large problem sizes.
Restriction to N ⊥ (A) . In order to deal with the missing strong monotonicity of operator ∇F , we restrict the problem (3.12) to the complement N ⊥ (A) of the null space N (A) of operator A: This restriction is justified by the following proposition. Choosing a Problem Specific Norm. We now address the issue, that the condition number γ 2 of ∇F w.r.t. the standard Euclidean norm increases with the problem size.
In order to show uniqueness of a solution to (4.9), we consider the space R mn ∩N ⊥ (A) equipped with the norm Two open problems remain, rendering the above approach, the restriction to N ⊥ (A) together with a problem specific norm, a purely academic one: • The condition η B < 1 is hard to verify in practice, since the projection Π D(p) is defined w.r.t. the specific norm ∥ · ∥ B and a closed form for this projection in general is not at hand, even if it is available for the Euclidean norm (as for example for the standard TV semi-norm). • Restriction to the space N ⊥ (A) requires the SVD of A, which, for larger images numerically is intractable. As a consequence of these two open problems, we follow an alternative ansatz. We will see that in this ansatz the subspace N ⊥ (A) and the norm ∥.∥ B will play also an important role. We prove uniqueness of v * under the following assumption:
Uniqueness
iii) The variation rateη is less than 1 Before showing uniqueness, let us first define the operator T (v) : R n ⇒ R mn as follows: Let p ∈ T (v) if and only if p ∈D(v) and it is a solution to the VI (4.12) We remark that due to our special choice of F , forD(v) being convex, closed and nonempty, the operator A •T is single-valued due to the strict convexity of minṽ ∈AD(v) We find that for any solution p * to QVI (3.12) • An equivalent condition to Assumption 4.5 (iii) is that the Lipschitz-constant of v → A •Π D(v) (q) is less than 1 for all q (cf. condition on η B in Sect. 4.2.1).
is the special norm on N ⊥ (A) considered before. Thus Theorem 4.6 provides that operator T under the said conditions is a contraction in the norm ∥ · ∥ B (B = A ⊤ A). We recall that in the considered applications for image restoration (cf. Sect. 6 and previous work [23,24]), we are actually interested in the variable u := M −1 (f − A p). It follows from Theorem 4.6 that this variable is unique under Assumptions 4.1 and 4.5.
For specific examples of adaptive TV denoising, to guarantee uniqueness of the fixedpoint problem, it remains provide a sufficiently small variation rate. The variation rate, on the other hand, is typically related to the regularization strength, as in Example 3.2 considered above. We revisit this example in the following: where we used A = M −⊤ L. We calculate the variation rateη ofD(v). Let v,ṽ ∈ R n , q ∈ R mn be arbitrary. Since the projection of q ontoD(v) is a scaling of the n components q i ∈ R mn to at most length α i (v), we find Considering the task of denoising, where M = Id, A = L and ∥L∥ 2 2 = µ 2 = 8, condition (4.21) becomes α 0 κ < 1 8 . Given a fixed maximal regularization strength α 0 we thus can determine feasible values for κ to guarantee uniqueness of the solution.
For the task of deblurring, where A = M −⊤ L, we expect that in practical applications µ 2 = ∥M −⊤ L∥ 2 2 ≫ 1 due to small eigenvalues of M and thus that uniqueness can be guaranteed only for very small α 0 (weak smoothing) or κ (weak adaptivity). ♦ 5. Numerics. Throughout this section, we assume that Assumptions 4.1 and 4.5 are satisfied.
In particular, we assume that the dependency of D(p) on p is actually a dependency on v := A p. We change the notation accordingly by writing D(v) instead of D(p).
Proposed Algorithm.
In the following, we propose an algorithm to solve the QVI (3.12). This algorithm builds on the ideas already presented in [24]. However, we now provide convergence results for the more general caseηγ 2 < 1 and, in particular, for arbitrary image sizes.
As already proposed in [24,27], we consider an outer and an inner loop. In the outer loop we update the value v which defines the constraint set D(v). The inner step consists in solving the variational inequality with fixed constraint set D(v). Recall that the operator which maps v to an exact solution p of (5.1) is denoted by T (v).
Several methods have been proposed to numerically solve (5.1). At this point, we consider some arbitrary method and denote its numerical result by sol (D, p 0 , N ), where D is the current constraint set, p 0 is an initial value and N is the number of inner iteration steps. We assume that an a-priori error bound for this method is available: for any ε > 0 we can find N large enough and independent of p 0 and D, such that the inner problem can be solved up to an error where R is the a-priori bound on p (cf. Remark 4.3). Exemplary methods fulfilling these requirements are discussed in Sect. 5.1.2.
Algorithm 1: Outer Iteration
Output: Sequence (p [k] ) k converging to a solution p * of (3.12). Choose arbitrary 5.1.1. The Outer Iteration. In Algorithm 1 we outline the outer iteration, which provides a sequence p [k] converging to a fixed-point p * of (3.12). For each iterate p [k] we set v [k] := A p [k] and fix the constraint set D(v [k] ). The corresponding inner problem (5.1) is solved in an inner iteration to obtain p [k+1] .
Solving the Inner Problem.
In order to solve the inner problem (5.1) or its equivalent saddle point formulation, several approaches providing the required error estimate (5.2) have been proposed in literature. Among them are, e.g., Nesterov's method in [26], FISTA [3], and the primal-dual algorithms proposed by Chambolle & Pock [9]. Out of these candidates we exemplarily pick FISTA (with constant step size), see Algorithm 2. We refer to the iteration within the FISTA algorithm as the inner iteration. In order to distinguish the inner iterates from the outer ones, i.e. p [k] and v [k] , we use the notation p (k) with parentheses.
We briefly recall the convergence results for FISTA [3], for which an error bound of the form (5.2) is available. We remark that similar estimates hold for the primal-dual algorithms.
Lemma 5.1. For the result obtained by FISTA applied to the problem (5.1), we have the following error estimate: Using the boundedness of p (0) and .
Proof. Recall that the inner problem (5.1) is equivalent to ) denote a solution to (5.5). The inequality (5.3) is obtained from the error estimate [3], and In view of the next subsection, we consider the following special case.
Remark 5.2. Assume that T (v [k] ) ∈ N ⊥ (A) and that starting with a value p (0) ∈ N ⊥ (A) the sequence p (k) stays in this subspace. Using the basic fact that where γ 2 = µ 2 ν 2 is the condition number of A ⊤ A restricted to N ⊥ (A). 5.2. Convergence. In the following, we show convergence of the proposed Algorithm 1 and provide convergence rates for a special case. Proposition 5.3 (Convergence). Let Assumptions 4.1 and 4.5 be satisfied. Moreover, assume that sol(D(v), p, N ) provides an approximate solution of (5.1) with an error less than ε > 0 (independent from p ∈ B R (0)), i.e.
Then, the following holds: Proof. We have Using the limit of the geometric series, we deduce claim (i). Claim (ii) follows from (i) under Assumption 4.5 (iii), since then λ 2 < 1 and thus λ k 2 → 0 for k → ∞. Proposition 5.4 (Convergence rates). Let Assumptions 4.1 and 4.5 be satisfied. Moreover, assume that the inner problem (5.1) can be solved with an error bound is the exact solution of the inner problem.) Consider a solution p * of (3.12) and v * := A p * . Then, Algorithm 1 converges according to where K is the number of outer iterations. Proof. see Appendix D. does not hold in general. The reason is that the convergence depends on the component 6. Experiments. [39] between the current iterate and the ground truth image during the outer iteration of our approach with three different regularization terms. Test images: cameraman (purple), peppers (blue), Lena (green) and boat (brown). In most cases, the similarity increases during the outer iteration. This shows that the adaptivity improves by switching from a data-driven to a solution-driven model.
Improvement by Solution-Driven Adaptivity.
In the following, we demonstrate the benefits of applying our solution-driven adaptivity compared to the data-driven variant. To this end, we consider four different standard test images, the cameraman, peppers, Lena and the boat image, which are scaled to the range [0, 1]. From each image, we generate test data for the denoising problem by adding Gaussian noise with zero mean and standard deviation 0.1, and for the deblurring problem by applying a blurring operation and adding Gaussian noise with zero mean and standard deviation 0.01. The resulting images are show in Fig. 1. On these test images we evaluate the three different adaptive regularizations presented in Sect. 2.3, namely adaptive first-order TV regularization (Example 2.1), anisotropic first-order TV regularization (Example 2.2) and adaptive first-and second-order regularization (Example 2.3).
For denoising, we use the input data to initialize the constraint set D(v [0] ), v [0] := f . Therefore, running the algorithm with only one outer iteration implements a data-driven adaptivity, while running it with more than one outer iteration gives a solution-driven adaptivity. We set the required parameters to obtain a suitable result for the data driven approach and apply the solution driven variants with five outer iterations. To quantitatively evaluate the results, we make use of the similarity measure MSSIM(a, b) for two images a and b proposed by Wang et al. [39]. This measure is well suited in particular to compare restored images with their ground truth, since it is sensitive to remaining distortions.
The evolution of MSSIM(u [k]
sd , u orig ), where k is the index for the outer iteration, for the three kinds of adaptive regularization and each test image is depicted in Fig. 2. Except for two cases, the solution-adaptive regularization improves the similarity from the first to the second outer iteration (recall that k = 1 provides the data-driven result). In most cases the similarity stays constant or is even further improved in the subsequent iteration steps. To also give a visual impression of this improvement, we depict the respective results for the cameraman image in Fig. 3. Since the differences are best [39]) to the ground truth are given in parentheses. They correspond to those plotted in Fig. 2. The solution-driven approaches enhance the reconstruction compared to the data-driven ones and standard TV regularization. In particular, artifacts from noise are reduced. Close-up of the results of deblurring the cameraman image with different regularization approaches. Given in parentheses is the similarity to the ground truth. The data-driven approaches suffer from artifacts (e.g., in the adaptive first-order TV case (b)). We therefore compare our methods to standard TV (c). The solution-driven approaches enhance the reconstruction in terms of similarity to the original data compared to standard TV. Anisotropic TV regularization gives the best result. visible in full resolution, we focus on a close-up of the head region of the cameraman. The improvement of the similarity after five outer iterations compared to the similarity of the data-driven results are shown in Table 1 with the values in percent and averaged over the four test images. We found that using the peak-signal-to-noise-ratio (PSNR) instead of MSSIM shows a similar trend.
The theory presented in Sect. 4.2 allows us to check for each method, if the obtained result is unique and in particular independent from the initialization of the algorithm. ) Evolution of the similarity to the ground truth during the outer iteration of our approach with three different regularization terms. Test images: cameraman (purple), peppers (blue), Lena (green) and boat (brown). We observe a strong increase in similarity between the first two iteration steps. After these two steps the similarity stays almost constant, with a slight decrease in some cases. Inspecting the results visually, we observe that the results still get sharper during the later steps, which probably leads to an over-sharpening compared to the original images. Gain in similarity to the ground truth by introducing solution-driven adaptivity for three different regularizations. We compare to the data-driven variants for denoising and to standard TV for deblurring. The values are averaged over the four test images. The theory presented in Sect. 4 allows to check for each case, if uniqueness of the result can be guaranteed, see the right column.
Uniqueness is guaranteed for the adaptive TV regularization for first-and second-order, where the parameters α 0 and κ where chosen small enough to assert α 0 κµ 2 < 1 (cf. Example 4.8). In the case of anisotropic TV regularization, it can be shown that the projection ΠD (v) (q) of q onto the constraint setD(v) is Lipschitz-continuous w.r.t. v (we refer to [24] for details). However, an analytic estimate of the Lipschitz constant η is not at hand. Experiments indicate thatη < 0.06 for our particular parameter setting. Assuming that this estimate is correct, our theoretical results therefore guarantee uniqueness, sinceη < 1 8 . In the case of deblurring, it turns out that applying the data-driven approaches does not provide satisfactory results since spurious structures occur independent from method and input image (see e.g. Fig. 4(b)). Similarly, applying the solution-driven approaches using the input data as initialization results in the same artifacts. However, solution-driven approaches which start with a constant image as initialization provide satisfactory results (see Fig. 4(d)-(e)). This already indicates that uniqueness of the underlying QVIP can not be expected in the case of deblurring. We further comment on this below.
Evaluating with the similarity measure, see Fig. 5, we observe a substantial increase of the similarity to the ground truth during the first two outer iterations of our approach. After the second outer iteration only a slight further improvement or, in rare cases, a decrease occurs. The same effect can be also observed with the PSNR. Inspecting the results visually shows that the results actually do not become worse in terms of visual appearance, but that the image sharpness further increases, which we interpret as a slight over-sharpening of the result.
Since the data-driven approaches do not provide accurate results, we refrain from using them to compare with the solution-driven approaches. Instead, we compare to standard TV regularization with the same regularization parameter. The average improvement by solution-driven adaptivity in percent are shown in Table 1.
Concerning uniqueness of the results in the case of deblurring, we remark that, since the smallest eigenvalue of operator M becomes very small (cf. Example 4.8), uniqueness can not be guaranteed for the values of α 0 , β 0 , and κ used in our experiments. The maximal values attained are 0.095 for the standard TV approach, 0.092 for the data-driven and 0.133 for the solution-driven approach. We conclude that the sensitivity of the proposed solution-driven approach against variations in the input data is slightly higher, but of the same order of magnitude compared to non-adaptive (standard TV) and data-driven approaches.
6.2. Dependence on Input, Initialization and Parameters. In the following, we discuss the dependence of the proposed algorithm on input, initialization and parameters. We focus on the adaptive TV regularization proposed in Example 2.1 in the context of image denoising.
In order to experimentally evaluate the dependence of our solution-driven approach on the input data f , we sample 100 noisy variants of the cameraman image with additive Gaussian noise with zero mean and standard deviation 0.1. After denoising each image, we determine the pixel-wise standard deviation over all 100 output images and compare our method to the data-driven variant and to standard TV. It turns out that high standard deviations for each method occur mainly along dominant edges in the image, e.g. along the silhouette of the cameraman, see Fig. 6. The maximal standard deviation attained is 0.095 for the standard TV approach, 0.092 for the data-driven and 0.133 for the solution-driven approach. We conclude for this example, that the sensitivity of our approach to variations of the input data is slightly higher, but in the same range as for the other methods.
Concerning initialization, our theoretical findings guarantee uniqueness of the result as long as α 0 κ < 1 µ 2 , where µ 2 = 8 in the case that A = L is a discrete divergence operator. In particular, the numerical solution is independent from the initialization of the constraint set D(v [0] ). However, we check this experimentally. Fig. 7 shows the distribution of the numerical error after 7 outer iterations of the proposed algorithm for 100 randomly chosen initializations v [0] . It turns out that the error to the analytic solution is in the range of 10 −14 and cannot be further decreased by additional outer iterations. Moreover, it is fairly independent from the initialization. This supports our theoretical result on uniqueness of the fixed-point.
A third issue is the dependence on the parameters α 0 and κ. To evaluate this dependence, we run our denoising algorithm on the cameraman test image with different parameter settings (α 0 , κ) taken from a grid {0, 0.002, . . . , 0.15}×{0, 0.05, . . . , 2.5}. Since the algorithm has to be run for a large number of times, to reduce the computational effort, we restrict this experiment to the head region of the cameraman image. For each result, we evaluate its similarity to the ground truth. The resulting 2D surface MSSIM(α 0 , κ) is depicted in Fig. 8(a). One relatively flat maximum occurs at (α 0 , κ) = (0.12, 1.7). Figs. 8(b) and (c) show cross-sections through this maximum along the α 0and κ-axis, respectively. Unfortunately, in this case, the optimal parameters lie outside the region where uniqueness is guaranteed. Fixing α 0 = 0.12, the parameter κ would need to be less than 1.08 to assert uniqueness.
From the results shown in Fig. 8 we conclude a smooth dependence on both parameters. Moreover, the flatness of the maximum guarantees robustness w.r.t. parameter variations. In practice, choosing parameters in a relatively broad neighborhood to the unknown optimal values already provides satisfactory results.
6.3. Relation to Non-Convex Regularization. Introducing adaptivity in TV regularization locally changes the way how the gradient (or higher derivatives) of the final solution is penalized. To gain insight into this effect, with a given solution u * , one can study the empirical distribution of |∇u * (x i )| versus ⟨∇u * (x i ), p(x i )⟩ (borrowing the notation from the continuous setting). We do this exemplarily for the case of denoising the cameraman image and adaptive TV regularization, where ⟨∇u * ( Studying the distribution of the norm of the discrete gradients, |∇u * (x i )|, versus their penalization in the regularization term, α i |∇u * (x i )|) i , see Fig. 9, one recognizes that our fixed-point based approach mimics a non-convex regularizer. For the other three test images, we observe similar distributions.
6.4. Convergence. In order to verify the convergence of our algorithm, we consider the example of an one-dimensional adaptive TV regularization, where an analytic solution can be provided. We remark that there is only a limited number of examples, for which an analytic solution for the ROF model is available. In such cases the problem (2.23) can be reformulated to a fixed-point problem in α.
Example 6.1. We study a discrete variant of the continuous functional in Example 3.2 with one-dimensional data. Consider a equi-distant grid of n grid points. W.l.o.g. we assume the grid size to be 1. For u, f ∈ R n let where we define α ∈ R n−1 by for a fixed u 0 ∈ R n . Recall that we are searching for a fixed-point of u 0 → arg min u E(u).
We consider data f to be given as follows: We assume n = 3N for some N > 0, such that the grid nodes can be divided into three disjoint sets It can be shown that any solution of the inner problem asserts u i ∈ [0, 1]. We make the ansatz for 0 ≤ a ≤ b ≤ 1. We show below that a fixed-point of this form exists. Assuming this form of u and analogously for u 0 in (6.1), the objective function simplifies to whereα Standard calculus (see Appendix E) then shows, that, as long as κ ≤ 1 − ε α 0 and κα 0 < N 3 , a fixed-point u * of u 0 → arg min u E(u) of the form (6.4) is given by 1} (black, blue, green, red). The plot shows an exponential error decay, which stays well below the theoretical bound (dashed lines). The bending between step 6 and 7 is caused by the fact that the point-wise errors reach machine accuracy.
Note that in the one-dimensional case four is a tight upper bound for µ 2 . Thus, the condition κα 0 < 1 4 to assert Assumption 4.5 (iii) independent from the problem size is sufficient to guarantee κα 0 < N 3 . Moreover, our theoretical findings show that u * given by (6.7) is the unique fixed-point. ♦ By means of Example 6.1, we experimentally verify the convergence rate provided by Prop. 5.4. To this end, we solve the corresponding QVI numerically with the proposed algorithm. Fig. 11 shows the theoretical and experimental convergence rates (logarithmic error over time steps) for this example and different contraction gaps δ = 1 − λ 2 = 1 − α 0 κµ 2 . The experimental errors ∥u [K] − u * ∥ 2 = ∥ L p [K] − L p * ∥ 2 stay significantly below the theoretical bound and also show an exponential decay.
Conclusion.
In the present paper, we studied quasi-variational inequalities for solution-driven adaptive image denoising and non-blind image deblurring. Our general approach covers various adaptive and anisotropic types of TV regularization of first-and higher-order.
We provided theory for uniqueness and showed convergence of suitable algorithms for a broad sub-class of the considered QVIs, namely those, for which the operator corresponding to the fixed-point problem of the QVI is a contraction. Moreover, we provided convergence results, which we verified in the experimental part.
Our experiments show, that solution-driven adaptivity is able to improve the restoration results compared to its data-driven pendant.
Future work will focus on extensions to non-local regularization. For any p * ∈ D(p * res ) such that p * res = Π N ⊥ (A) p * , it holds that p * ∈ D(p * ) = D(p * res ). Note that at least one such p * exists. We show that any such p * is a solution to the unrestricted problem (3.12). Now let p ∈ D(p * res ) = D(p * ) be arbitrary. We decompose p into p = p res +p N , where p res := Π N ⊥ (A) (p), p N := Π N (A) (p). Then it follows from A p = A p res and A p * = A p * Thus p * is a solution of (3.12). Claim (ii): Let p * be a solution to the problem (3.12). In particular, p * ∈ D(p * ). We consider the decomposition p * = p * res + p * N , p * res := Π N ⊥ (A) (p * ), p * N := Π N (A) (p * ). Then, by our assumption, p * res ∈ Π N ⊥ (A) (D(p * )) = Π N ⊥ (A) (D(p * res )).
It remains to show, that p * res solves the restricted problem (4.9). To this end, let p res ∈ Π N ⊥ (A) (D(p * res )) be arbitrary. There exists p ∈ D(p * res ) such that where the last inequality holds, since p ∈ D(p * ) due to D(p * ) = D(p * res ) and p * solves QVI (3.12). Thus p * res is a solution to the restricted problem (4.9).
Appendix B. Proof of Theorem 4.6.
The proof follows the proof of Thm. 6 in Nesterov's paper with B = Id, but uses the specific form of g(p) = ∇F (p) = A ⊤ (A p − f ). In particular we do not require g to be a strongly monotone operator.
We fix two different points v 1 , v 2 ∈ im(A). Let D i :=D(v i ), p i ∈ T (v i ) and g i = ∇F (p i ) = A ⊤ (A p i − f ). | 2014-07-03T06:44:34.000Z | 2014-07-01T00:00:00.000 | {
"year": 2014,
"sha1": "cd5059cb2e66fb8d2af5f49cc4b237bee7e219c3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a7c710a9c92bfae4dad01064b32281a65a38f83c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
255622099 | pes2o/s2orc | v3-fos-license | Biotransformation of d-Xylose-Rich Rice Husk Hydrolysate by a Rice Paddy Soil Bacterium, Priestia sp. Strain JY310, to Low Molecular Weight Poly(3-hydroxybutyrate)
Poly(3-hydroxybutyrate) (PHB) is a versatile thermoplastic with superior biodegradability and biocompatibility that is intracellularly accumulated by numerous bacterial and archaeal species. Priestia sp. strain JY310 that was able to efficiently biotransform reducing sugars in d-xylose-rich rice husk hydrolysate (reducing sugarRHH) to PHB was isolated from the soil of a rice paddy. Reducing sugarRHH including 12.5% d-glucose, 75.3% d-xylose, and 12.2% d-arabinose was simply prepared using thermochemical hydrolysis of 3% H2SO4-treated rice husk for 15 min at 121 °C. When cultured with 20 g/L reducing sugarRHH under optimized culture conditions in a batch bioreactor, Priestia sp. strain JY310 could produce PHB homopolymer up to 50.4% of cell dry weight (6.2 g/L). The melting temperature, heat of fusion, and thermal decomposition temperature of PHB were determined to be 167.9 °C, 92.1 J/g, and 268.1 °C, respectively. The number average and weight average molecular weights of PHB with a broad polydispersity index value (4.73) were estimated to be approximately 16.2 and 76.8 kg/mol, respectively. The findings of the present study suggest that Priestia sp. strain JY310 can be exploited as a good candidate for the low-cost production of low molecular weight PHB with improved biodegradability and reduced brittleness from inexpensive agricultural waste hydrolysates.
Introduction
Petroleum-based plastics are among the most necessary materials in our daily life. They are closely related to a wide range of essential industries, such as aerospace, medical, automotive, and telecommunications [1]. As the use of single-use plastics has surged since the COVID-19 pandemic in 2019, global environmental problems caused by nondegradable plastics have become more serious [2]. It has been evaluated that between 4.8 and 12.7 million tonnes of plastic waste in landfills enter the ocean each year [3,4]. The projected plastic waste without improved waste management is expected to exceed 1.3 billion tonnes by 2040. In addition, as synthetic plastics are mainly produced from petroleum, which is a non-renewable resource, there are concerns about resource depletion and continuous use [3,5]. Therefore, there is a rapidly increasing demand for alternatives to petroleum-based plastics [6].
Diverse biodegradable biopolymers such as polylactide, poly(3-hydroxyalkanoates) (PHAs), and polypropiolactone have been developed as alternatives to petroleum-based synthetic plastics that are difficult to biologically degrade in natural environments [6][7][8][9]. The global production capacities of bioplastics are expected to increase from 2.42 million tonnes in 2021 to 7.59 million tonnes in 2026 (https://docs.european-bioplastics.org/ RH was purchased from a rice mill in Andong, Republic of Korea. After removing impurities from RH by washing with distilled water, the resulting RH was dried at 70 • C for 24 h in a drying oven, followed by grinding using a vacuum blender (CompLife, Incheon, Republic of Korea). The preparation of reducing sugar RHH with different compositions from RH was conducted by acid hydrolysis under various reaction conditions as follows. Different amounts of RH (150, 200, and 250 g/L) were firstly treated with H 2 SO 4 at concentrations of 1, 2, and 3% (v/v) and then autoclaved at 121 • C for 15, 30, 60, and 90 min, respectively. After thermochemical hydrolysis of RH, reaction mixtures were neutralized by adding 3 M NaOH, followed by centrifugation at 8000× g for 20 min at 4 • C. Recovered liquid solutions containing reducing sugar RHH were used in aerobic fermentation experiments as carbon sources.
Isolation of PHA-Producing Bacteria
For the isolation of heterotrophic bacteria capable of efficiently fermenting reducing sugar RHH prepared from H 2 SO 4 (3%, v/v)-treated RH (250 g/L) by autoclaving at 121 • C for 90 min for their growth and PHA production, three different soil samples were collected by digging the soil surface at a depth of >8 cm immediately after harvesting rice from paddy fields in Andong, Republic of Korea. Approximately 15 g of respective soil sample was then thoroughly suspended in 40 mL of distilled water by vigorous stirring for 5 min at room temperature. Thereafter, the suspension was allowed to stand for 20 min without stirring to precipitate solid components of soil. Enrichment cultivation of reducing sugar RHH -fermenting bacteria in a soil sample was carried out by shaking at 200 rpm for 3 d at 30 • C after inoculating 10 mL of the soil supernatant into a 500 mL Erlenmeyer flask containing 100 mL of liquid mineral salts medium. Each liter of basal medium (pH 7.0) included 20 g reducing sugar RHH , 9.0 g Na 2 HPO 4 ·12H 2 O, 1.5 g KH 2 PO 4 , 0.5 g NH 4 Cl, 0.2 g MgSO 4 ·7H 2 O, and 1 mL of trace element solution consisting of 9.70 g FeCl 3 , 10.33 g CaCl 2 ·2H 2 O, 0.22 g CoCl 2 ·6H 2 O, 0.16 g CuSO 4 ·5H 2 O, 0.12 g NiCl 2 ·6H 2 O, and 0.11 g CrCl 2 ·6H 2 O per liter of 0.1 N HCl. To reduce microbial diversity, each liquid culture procedure was consecutively repeated three times for 9 d. Briefly, 10 mL of enrichment broth culture, which was first prepared according to the aforementioned liquid culture procedure, was re-inoculated into the same fresh medium, followed by incubating under the same culture conditions. Thereafter, the liquid cultivation procedure was performed to enrich dominant reducing sugar RHH -fermenting bacterial species in the culture broth once again. Reducing sugar RHHfermenting bacteria were selectively isolated as follows. A 100 µL aliquot of the culture broth was serially diluted up to 10 −5 using a sterile liquid medium without reducing sugar RHH , after which a 50 µL aliquot of the diluted suspension was spread on a reducing sugar RHH -containing MSM agar plate and incubated at 30 • C for 3 d. Respective bacterial colonies showing different morphological characteristics formed on the solid medium were purely transferred to a new solid medium, after which isolates were incubated at 30 • C for 3 d. Of the isolated reducing sugar RHH -utilizing bacteria, strain JY310, which was identified as a superior PHB-producing candidate by pre-tests, was preferentially selected for further study. Quantitative analysis of PHB accumulated in bacterial isolates was performed by gas chromatography (GC).
Identification of a PHB-Producing Bacterial Isolate
Phylogenetic identification of strain JY310 was carried out using sequence analysis of its 16S rRNA gene. For this, genomic DNA of the isolate was extracted using a G-Spin Total DNA Extraction Kit (iNtRON Biotechnology, Inc., Seongnam, Republic of Korea) in accordance with the manufacturer's protocol. The 16S rRNA gene of strain JY310 was amplified by polymerase chain reaction (PCR) with two universal primers of 8F (5 -AGAGTTTGATCMTG-GCTCAG-3 ) and 1492R (5 -TACGGYTACCTTGTACGACTT-3 ). With a 2X Thumb Taq PCR Pre-Mix (BioFACT Co., Ltd., Daejeon, Republic of Korea), PCR was carried out using a T100 thermal cycler (Bio-Rad Laboratories, Inc., Hercules, CA, USA) with the following cycling conditions: initial template denaturation at 94 • C for 2 min, followed by 30 cycles of 94 • C for 30 s, 55 • C for 30 s, and 72 • C for 1 min. The resulting PCR products were purely isolated using a NucleoSpin Gel and PCR Clean-up (Macherey-Nagel, Düren, Germany) and then sequenced with the aforementioned oligonucleotide primers. Using MEGA 11 software (https://www.megasoftware.net, accessed on 28 October 2022), the nucleotide sequence of its 16S rRNA gene was compared with those of strains deposited in the National Center for Biotechnology Information (NCBI) database to find closely related species.
Effects of Culture Conditions on Bacterial Growth and PHB Production
To examine the effects of culture temperature on the growth of strain JY310 and its PHB production, the bacterial cultivation was performed using a 500 mL Erlenmeyer flask, which contained 100 mL of liquid mineral salts medium (pH 6.0), in a rotary shaker (200 rpm) for 60 h at 20, 25, 30, 35, and 40 • C, respectively. As a carbon source, 20 g/L of reducing sugar RHH prepared by autoclaving 3% H 2 SO 4 -treated RH for 15 min at 121 • C was added to the culture medium. However, the effects of medium pH on the growth and PHB biosynthesis of strain JY310 were investigated by culturing it at 30 • C with pH ranging from 5.0 to 9.0 under the aforementioned culture conditions with minor modifications. Flask cultures of strain JY310 were also conducted at 50, 100, 150, 200, and 250 rpm, respectively, to evaluate the effects of shaking speed on its growth and PHB production. In this case, the culture temperature of strain JY310, medium pH, and amount of reducing sugar RHH were adjusted to 30 • C, 6.0, and 20 g/L, respectively. The effects of reducing sugar RHH concentration in culture broth on the growth and PHB biosynthesis of the microorganism was assessed by growing it with the substrate at concentrations of 5, 10, 15, 20, 25, and 30 g/L, respectively, in a rotary shaker (200 rpm) for 60 h at 30 • C and pH 6.0. The effects of carbon to nitrogen (C/N) ratio in the range of 10-60 in culture medium on the growth and PHB production of strain JY310 were also analyzed. In this case, the microorganism, which was fed with 20 g/L of reducing sugar RHH , was cultivated for 60 h under the following culture conditions: temperature of 30 • C, pH 6.0, and shaking at 200 rpm. After the completion of cultivation, the cells were harvested by centrifugation at 13,000× g for 10 min at 4 • C, followed by lyophilization.
Batch Fermentation of Strain JY310
Using liquid mineral salts medium (C/N ratio: 40) supplemented with 20 g/L of reducing sugar RHH , which was prepared by thermochemical hydrolysis of 3% H 2 SO 4 -treated RH at 121 • C for 15 min, an optimized batch fermentation experiment was performed in a 3 L jar fermentor (Biofors Global Inc., Bucheon, Republic of Korea) with a working volume of 2 L. Fermentor culture of strain JY310 was initiated by inoculating with a 5% (v/v) inoculum of its overnight culture grown in nutrient broth (BD Difco, Franklin Lakes, NJ, USA), followed by incubating for 66 h. The pH, temperature, agitation speed, and aeration rate were automatically controlled at 6.0, 30 • C, 200 rpm, and 1.0 vvm, respectively. During batch fermentation, the culture broth samples of strain JY310 were taken at every 12 h to estimate its growth at 600 nm and ability to produce PHB, after which they were centrifuged at 13,000× g for 10 min at 4 • C. The resulting cell pellets were lyophilized, and the recovered culture supernatants were stored at 4 • C for further analysis of residual carbon and nitrogen sources. The batch fermentation was finished at approximately 2 h after the bacterial growth reached the stationary phase. The culture broth was then centrifuged at 13,000× g for 10 min at 4 • C.
Isolation and Purification of PHA
The PHA produced by strain JY310 was isolated from the lyophilized cells with hot chloroform employing a Soxhlet extractor. To prepare a fine product, the extracted crude PHA was precipitated by dropping into vigorously stirred cold methanol in a fume hood. This precipitation process was repeated at least three times. The resulting purified PHA was left in the fume hood for 3 d to evaporate remaining organic solvents. It was then used for further analysis.
Analytical Methods
The composition of monosaccharides in RH hydrolysates and residual amounts of monosaccharides in the culture supernatant were quantitatively analyzed by highperformance liquid chromatography (HPLC) with D-glucose, D-xylose, and D-arabinose as standards [27]. HPLC analysis was performed employing a Waters Alliance 2690 HPLC system (Waters Corp., Milford, MA, USA) equipped with a Sugar-Pak I column (10 µm, 6.5 mm × 300 mm, Waters Corp.) and a refractive index (RI) detector. The column temperature and sample injection volume used were 90 • C and 20 µL, respectively. A mobile phase consisting of 0.01 M Ca-EDTA was used at a flow rate of 0.5 mL/min. Quantitative determination of growth inhibitory substances present in RH hydrolysate was performed using furfural (Merck Millipore, Darmstadt, Germany) and 5-hydroxymethylfurfural (5-HMF) (TCI Co., Ltd., Tokyo, Japan) as standards by HPLC analysis with a Shim-pack VP-ODS column (5 µm, 4.6 × 250 mm, Shimadzu Corp., Kyoto, Japan). A mobile phase contained water and acetonitrile at a ratio of 8:2. The column temperature, sample injection volume, and flow rate were 40 • C, 10 µL, and 1 mL/min, respectively.
Colorimetric determination of residual NH 4 Cl in the culture supernatant was carried out according to the Nessler method [28]. For this, a standard calibration curve for NH 4 Cl was constructed by plotting the mean absorbance at 490 nm against NH 4 Cl concentration. It was then used for ammonium quantification. The standard reaction mixture (5.0 mL) Biomolecules 2023, 13, 131 5 of 17 contained 2.0 mL of Nessler reagent (Kanto Chemical Co., Inc., Tokyo, Japan), 2.95 mL of distilled water, and 0.05 mL of the culture supernatant.
Quantitative analysis of PHAs in lyophilized cells was conducted using GC with a GC-2010 Plus gas chromatograph (Shimadzu Corp., Kyoto, Japan) connected to an HP-1 capillary GC column (0.5 µm, 25 m × 0.2 mm, Agilent Technologies, Inc., Wilmington, DE, USA) and a flame ionization detector. For this, 20 mg of lyophilized cells was added to a PYREX screw cap culture tube with a PTFE lined phenolic cap (13 mm × 100 mm) containing a mixture of 1.0 mL chloroform, 0.85 mL methanol, 0.15 mL H 2 SO 4 , and 4 mg benzoic acid as an internal standard. The reaction mixture was then heated at 100 • C for 3 h. After methanolysis of cells, 1.0 mL of distilled water was added to the cold reaction mixture and then mixed vigorously to isolate the chloroform layer, including methyl esters of 3-hydroxyalkanoic and benzoic acids. The organic phase was carefully taken and analyzed by GC as described above. The oven temperature was initially kept at 80 • C for 4 min, after which it was increased at a rate of 10 • C/min to 230 • C. Identification of PHA monomeric units in methanolyzed samples was conducted using gas chromatography-mass spectrometry (GC-MS) analysis employing an Agilent 5977A Series GC/MSD system (Agilent Technologies, Inc., Santa Clara, CA, USA) equipped with an Agilent J&W DB-5MS GC column (0.25 µm, 30 m × 0.25 mm, Agilent Technologies, Inc.) under the aforementioned conditions. Structural identification of a PHA biosynthesized by strain JY310 was also performed by 600 MHz 1 H nuclear magnetic resonance (NMR) spectroscopy analysis with a Bruker AVANCE III 600 NMR spectrometer (Bruker Corp., Billerica, MA, USA). A PHB obtained from Sigma-Aldrich (St. Louis, MO, USA) was used as a standard.
Molecular weight and its distribution of PHA were determined using size exclusion chromatography (SEC) with a Waters Alliance e2695 SEC system (Waters Corp.) connected with an RI detector. Approximately 5 mg of purified PHA dissolved in 1 mL tetrahydrofuran was filtered with a 0.45 µm PTFE syringe filter, after which 50 µL of the sample was injected into Waters Styragel columns (HR3, HR4, and HR5E) with oven temperature set at 35 • C using polystyrene standards (1060~3,580,000 Da) for calibration. Elution of PHA was performed using chloroform at a flow rate of 1 mL/min. Thermal behavior of PHA was analyzed using differential scanning calorimetry (DSC) with a DSC 200 PC Phox (Netzsch-Gerätebau GmbH, Selb, Germany). The temperature was scanned from −50 to 200 • C at a heating rate of 5 • C/min. Thermogravimetry/differential thermal analysis (TG/DTA) of PHA to determine its thermal stability was accomplished using a TG-DTA 8122 thermal analyzer (Rigaku Corp., Tokyo, Japan) at a heating rate of 10 • C/min under nitrogen atmosphere. The temperature used for TG/DTA ranged from 20 to 900 • C.
Phylogenetic Identification of a PHB-Accumulating Bacterial Isolate
A Gram-positive, aerobic, motile, and rod-shaped bacterium, strain JY310, which was efficiently able to biotransform reducing sugar RHH (20 g/L) to PHB, was selectively isolated from a rice paddy soil using enrichment. The phylogenetic analysis of the strain JY310 revealed that its 16S rRNA gene sequence (GenBank accession number: OP542424) shared a sequence similarity of 99.85% with 16S rRNA gene sequences of some prokaryotes belonging to the genus Priestia ( Figure 1). Moreover, a phylogenetic tree displaying the relationship between strain JY310 and its closely related relatives, exhibited in Figure 1, indicated that strain JY310 was differentiated from the strains of recognized Priestia species. Based on these results, strain JY310 was identified as a new species belonging to the genus Priestia and deposited in the Korean Collection for Type Cultures with a name of Priestia sp. strain JY310 KCTC 43440.
Based on these results, strain JY310 was identified as a new species belonging to the genus Priestia and deposited in the Korean Collection for Type Cultures with a name of Priestia sp. strain JY310 KCTC 43440.
Preparation of Fermentable Reducing SugarRHH for PHB Production
For the efficient preparation of reducing sugarRHH from RH, its thermochemical hydrolysis was performed by autoclaving for 90 min at 121 °C in the presence of 1, 2, and 3% H2SO4, respectively. As a result, it appeared that the acid hydrolysis of RH gradually increased when the concentration of H2SO4 in the reaction mixture was increased from 1% to 3%, regardless of the amount of RH evaluated ( Figure 2).
Preparation of Fermentable Reducing Sugar RHH for PHB Production
For the efficient preparation of reducing sugar RHH from RH, its thermochemical hydrolysis was performed by autoclaving for 90 min at 121 • C in the presence of 1, 2, and 3% H 2 SO 4 , respectively. As a result, it appeared that the acid hydrolysis of RH gradually increased when the concentration of H 2 SO 4 in the reaction mixture was increased from 1% to 3%, regardless of the amount of RH evaluated ( Figure 2). Moreover, the preparation of reducing sugarRHH from 3% H2SO4-treated RH could be maximally achieved when 250 g/L RH was subjected to thermochemical treatment that resulted in the production of 77 g/L reducing sugarRHH. Based on the above results, 250 g/L RH and 3% H2SO4 were preferentially selected as parameters for the optimal prepara- Moreover, the preparation of reducing sugar RHH from 3% H 2 SO 4 -treated RH could be maximally achieved when 250 g/L RH was subjected to thermochemical treatment that resulted in the production of 77 g/L reducing sugar RHH . Based on the above results, 250 g/L RH and 3% H 2 SO 4 were preferentially selected as parameters for the optimal preparation of reducing sugar RHH from the feedstock.
It has been demonstrated that thermochemical treatments of lignocellulose at high temperatures in the presence of an acid catalyst generally accompany the formation of furfural and 5-HMF as undesired byproducts derived from the dehydration of hexose and pentose sugars, respectively [29]. Particularly, in the microbiological context, the furan molecules often downregulate the growth of diverse PHA-producing bacteria [30,31], although some natural and engineered PHA producers are not affected by these potent growth-inhibitory compounds [32][33][34]. Therefore, for the efficient biotransformation of lignocellulose hydrolysate into PHA, the formation of furfural and 5-HMF should be minimized during an acidic thermochemical process. Figure 3 clearly shows that during thermochemical hydrolysis of 250 g/L RH in the presence of 3% H 2 SO 4 , the generation of furfural and 5-HMF together with reducing sugar RHH was greatly increased in an autoclave time-dependent manner. Specifically, after the autoclaving of 3% H 2 SO 4 -treated RH for 15, 30, 60, or 90 min, the amount of furfural formed in the reaction mixture was measured to be approximately 58, 129, 145, or 164 mg/L, respectively. The quantity of 5-HMF formed under the aforementioned reaction conditions was also found to be considerably increased from 112 to 660 mg/L in an autoclave time-dependent manner. When cultured with reducing sugar RHH prepared by autoclaving for 15 min from 3% H 2 SO 4 -treated RH (250 g/L), Priestia sp. strain JY310 exhibited good growth during the culture period, with cell dry weight (CDW) and PHB content measured to be 6.1 g/L and 51.3 wt%, respectively ( Figure 3). However, the growth and PHB production of Priestia sp. strain JY310 were negatively affected when cultivated on reducing sugarRHH prepared by autoclaving for 30, 60, or 90 min from 3% H2SO4-treated RH (250 g/L). These results might be closely related to the concentrations of furfural and 5-HMF in the culture medium. The aromatic organic compounds are known to inhibit the growth of various PHA producers in a dose-dependent However, the growth and PHB production of Priestia sp. strain JY310 were negatively affected when cultivated on reducing sugar RHH prepared by autoclaving for 30, 60, or 90 min from 3% H 2 SO 4 -treated RH (250 g/L). These results might be closely related to the concentrations of furfural and 5-HMF in the culture medium. The aromatic organic compounds are known to inhibit the growth of various PHA producers in a dose-dependent manner [30,31]. Actually, the growth of Priestia sp. strain JY310 and its biotransformation efficiency of reducing sugar RHH into PHB were observed to be gradually downregulated together with increases in furfural and 5-HMF concentrations in the culture broth (Figure 3). Based on these results, the adequate autoclave time of 3% H 2 SO 4 -treated RH (250 g/L) for the preparation of reducing sugar RHH suitable for its growth and PHB biosynthesis was determined to be 15 min.
Optimization of Culture Conditions for Bacterial Growth and PHB Production
Similar to D-xylose-rich rice straw hydrolysate [35], reducing sugar RHH containing 12.5% D-glucose, 75.3% D-xylose, and 12.2% D-arabinose in this study was employed as a cheap carbon source for the biosynthesis of PHB by Priestia sp. strain JY310. The optimization of PHB production by the microorganism was performed by determining various cultivation parameters (Figure 4). Of the tested culture temperatures, strain JY310 showed the maximum growth and PHB biosynthesis when it was cultured at 30 • C with 20 g/L reducing sugar RHH prepared by the thermochemical hydrolysis of 3% H 2 SO 4 -treated RH for 15 min at 121 • C (Figure 4a). In this case, the CDW and PHB content were measured to be approximately 6.1 g/L and 51.7 wt%, respectively. However, after the cultivation of 60 h, it was observed that the bacterial growth and PHB production at temperatures (35 and 40 • C) above the optimal culture temperature were noticeably downregulated. It should also be noted that the optimal medium pH for the growth and PHB biosynthesis of Priestia sp. strain JY310 was found to be 6.0 (Figure 4b). Conversely, its growth was observed to be very slow at pH 5.0. It was also gradually downregulated when the microorganism was cultivated at pH values above the optimal pH value. It seems likely that Priestia sp. strain JY310 displayed optimal growth and PHB accumulation when it was aerobically grown on 20 g/L reducing sugar RHH with a shaking speed of 200 rpm at 30 • C and pH 6.0 for 60 h (Figure 4c). In this case, the CDW and PHB content were determined to be approximately 6.0 g/L and 51.3 wt%, respectively. However, an increase in shaking speed from 200 to 250 rpm resulted in an approximately 33.2% decrease in cell growth together with a 36.5 wt.% reduction in PHB content in the cells. Meanwhile, at a concentration of 20 g/L, reducing sugar RHH appeared to optimally support the growth and PHB biosynthesis of Priestia sp. strain JY310, although a similar result was also observed when 25 g/L reducing sugar RHH was used (Figure 4d). In addition, it was found that a carbon-to-nitrogen (C/N) ratio of 40 most effectively supported both cell growth and PHB biosynthesis (Figure 4e). Based on the above results, the cultivation parameters for the optimal growth and PHB production of Priestia sp. strain JY310 were established as follows: culture temperature of 30 • C, medium pH of 6.0, shaking speed of 200 rpm, reducing sugar RHH concentration of 20 g/L, and C/N ratio of 40.
Bacterial Production of PHB by Batch Fermentation under Optimized Culture Conditions
Recently, different studies on the cost-effective production of PHB by some bacterial species from various lignocellulose hydrolysates in shake flasks, batch bioreactors, or fed-batch bioreactors have been frequently reported (Table 1).

Figure 4: The flask cultures of strain JY310 were performed for 60 h, as described in the Materials and Methods section. The optimal culture temperature (a) of the organism was examined at 20, 25, 30, 35, and 40 °C, and its optimal medium pH (b) was determined in a pH range from 5.0 to 9.0. The shaking speed (c) to optimize the growth and PHB production of strain JY310 was investigated at 20, 100, 150, 200, and 250 rpm. The optimal concentration of reducing sugar RHH (d) in the culture medium to support its growth and PHB production was evaluated in a concentration range from 5 to 30 g/L. The optimal C/N ratio (e) was assessed in the range between 10 and 60. The values are mean ± SD of triplicate tests.

In previous studies, lignocellulose hydrolysates for the bacterial production of PHB were generally prepared using the following methods: biological hydrolysis [43], thermochemical hydrolysis [21,33,38,44], thermochemical and enzymatic hydrolysis [34,35,[39][40][41][42]], the AFEX process and enzymatic hydrolysis [36], or thermomechanical pulping and enzymatic hydrolysis [38]. Accordingly, in this study, reducing sugar RHH (20 g/L), simply prepared by the thermochemical hydrolysis of 3% H2SO4-treated RH, was used as a cheap carbon source for the substantial production of PHB by Priestia sp. strain JY310 under optimized culture conditions (Figure 5). In contrast, the production of PHB by Burkholderia cepacia USM [35] and Cupriavidus necator [45] was performed with RH hydrolysates prepared by the enzymatic hydrolysis of alkali- and steam flash-explosion-treated RH, respectively. In particular, it has also been described that differences in the preparation method of RH hydrolysates result in mixtures with different sugar compositions [35,45]. During the batch fermentation process, PHB production in the cells was first detected in a small quantity (<0.1 g/L) after 12 h of cultivation, as also observed in the production of PHB by Paraburkholderia sacchari (synonym Burkholderia sacchari [46]) IPT 101 with hardwood hydrolysate in a fed-batch bioreactor [38]. However, the growth and PHB production of strain JY310 increased markedly during the logarithmic phase, accompanied by a continuous consumption of reducing sugar RHH (12.5% D-glucose, 75.3% D-xylose, and 12.2% D-arabinose). In particular, the complete consumption of D-glucose by Priestia sp. strain JY310 was observed before a cultivation period of 12 h, while most D-xylose and D-arabinose in the medium were continuously taken up by the organism during the batch fermentation, as determined by HPLC analysis. In this case, the maximum CDW and PHB accumulation of Priestia sp. strain JY310, analyzed after a cultivation period of 60 h, were estimated to be 6.2 and 3.1 g/L, respectively. These results were comparable to those of PHB production by some other bacteria from different lignocellulose hydrolysates prepared by thermochemical hydrolysis (Table 1). Previously, it has been reported that B. cepacia IPT 048 and B. sacchari IPT 101 can grow to a CDW of 4.4 g/L together with a PHB accumulation of 2.3 and 2.7 g/L, respectively, when cultured with sugarcane bagasse hydrolysate in a batch bioreactor [21]. In addition, the CDW and PHB accumulation of Halomonas halophila CCM 3662 grown with spent coffee grounds hydrolysate were determined to be 3.5 and 2.1 g/L, respectively [44]. Furthermore, the amount (3.1 g/L) of PHB produced by Priestia sp. strain JY310 from reducing sugar RHH was approximately 1.9-fold higher than that (1.6 g/L) of PHB biosynthesized by Bacillus firmus NII 0830 from rice straw hydrolysate [33].
The above descriptions suggest that Priestia sp. strain JY310 is a potential candidate capable of efficiently producing PHB from reducing sugar RHH, which can be simply prepared by autoclaving 3% H2SO4-treated RH for 15 min at 121 °C. Meanwhile, it has been demonstrated that lignocellulose hydrolysates prepared by combined thermochemical and enzymatic hydrolysis processes support bacterial growth and PHB biosynthesis better than those made by thermochemical hydrolysis alone (Table 1). For example, the amount (3.9 g/L) of PHB produced by B. cepacia USM [35] from rice husk hydrolysate in a batch bioreactor was approximately 1.2-fold higher than that (3.2 g/L) of PHB produced by Priestia sp. strain JY310 from reducing sugar RHH. Moreover, the amounts of PHB biosynthesized by Ralstonia eutropha NCIMB 11599 [39] from wheat bran hydrolysate and R. eutropha ATCC 17699 [40] from rice paddy straw hydrolysate were assessed to be 15.3 and 9.8 g/L, respectively. Nevertheless, compared to the known thermochemical hydrolysis processes of lignocellulosic biomass (Table 1), the thermochemical and enzymatic hydrolysis processes used to make lignocellulose hydrolysates have some disadvantages, such as process complexity and enzyme costs.
Characterization of PHB Biosynthesized by Priestia sp. Strain JY310
The 1H NMR spectrum of a PHA sample biosynthesized by Priestia sp. strain JY310 from reducing sugar RHH is shown in Figure 6. The chemical shifts and peak patterns in the spectrum coincided well with those expected from a commercial PHB standard, indicating that the obtained PHA was a PHB homopolymer consisting of only 3-hydroxybutyrate repeating units.
Thermal analysis showed that the melting temperature (Tm) and heat of fusion (ΔHm) of the PHB produced by Priestia sp. strain JY310 were 167.9 °C and 92.1 J/g, respectively, while its glass transition temperature (Tg) was unclear (Figure 7). In addition, the TG/DTA thermogram revealed that the decomposition temperature (Td) of the PHB was 268.1 °C and that its thermal degradation was complete at 302.5 °C (Figure 8). Taken together, the thermal properties of PHB biosynthesized by Priestia sp. strain JY310 were noticeably different from those of standard PHB and other known PHB polymers (Table 2). For example, the Tm (167.9 °C) and Td (268.1 °C) values of PHB produced by Priestia sp. strain JY310 were lower than those (Tm: 176.0 °C and Td: 302.0 °C) of standard PHB [47,48]. Moreover, the Tm and Td values of PHB produced by C. necator from RH hydrolysate have been reported to be 175.1 and 280.0 °C, respectively [49]. Furthermore, the Tm of PHB accumulated in Shewanella marisflavi BBL25 [34] and Loktanella sp. SM43 [42], grown with barley straw and pine tree hydrolysates, respectively, was analyzed to be 176.7 °C. In particular, the Td (268.1 °C) of PHB biosynthesized by Priestia sp. strain JY310 was much lower than that (283.5 °C) of PHB produced by R. eutropha ATCC 17699 from rice paddy straw hydrolysate [40] and that (292.8 °C) of PHB extracted from the same organism grown with kenaf hydrolysate [41].
It is assumed that compared to the Tm values (>171.5 °C) of the other PHB polymers listed in Table 2, the lower Tm (167.9 °C) of PHB produced by Priestia sp. strain JY310 might be due to its low molecular weight (Figure 9), as described previously [26].
It is of great interest to note that the number average molecular weight (Mn), weight average molecular weight (Mw), and peak molecular weight (Mp) of the PHB produced by Priestia sp. strain JY310 were determined by SEC to be 16.3, 76.8, and 40.6 kg/mol, respectively (Figure 9, Table 2). The molecular weight and molecular weight distribution of this PHB were comparable to those of standard PHB and other PHB polymers biosynthesized by different microorganisms from lignocellulose hydrolysates (Table 2). Notably, the Mw (76.8 kg/mol) of the PHB produced by Priestia sp. strain JY310, with a polydispersity index (PDI: Mw/Mn) value of 4.73, was significantly lower than that (1403 kg/mol) of the PHB with an Mw/Mn value of 1.10 biosynthesized by S. marisflavi BBL25 [34] from barley straw hydrolysate. Additionally, the Mw (810.0 kg/mol) of the PHB with an Mw/Mn value of 1.58 produced by Loktanella sp. SM43 [42] from pine tree hydrolysate was much higher than that (76.8 kg/mol) of the PHB accumulated in Priestia sp. strain JY310. It has been reported that the Mw and Mn of PHA polymers are commonly determined by the ratio of the expression level of the PHA synthase gene (phaC) to those of the 3-ketothiolase and acetyl-CoA reductase genes (phaAB) [49]. Therefore, the large difference in the PDI values of the PHB polymers produced by Priestia sp. strain JY310 and other bacterial species [34,42,45] might be caused by differences in the expression levels of these three genes among distinct PHB producers. Meanwhile, it has been reported that Azotobacter vinelandii is able to biosynthesize high and ultra-high Mw PHB polymers, with values between 2300 and 6600 kg/mol, from sucrose [50]. Based on these results, the low Mw PHB biosynthesized by Priestia sp. strain JY310 from reducing sugar RHH is expected to be useful as an eco-friendly biomaterial with improved biodegradability and reduced brittleness for various industrial applications, as described by Hong et al. [26].
Conclusions
A rice paddy soil isolate, Priestia sp. strain JY310, efficiently biotransformed reducing sugar RHH, simply prepared by the thermochemical hydrolysis of RH, into low Mw PHB with a broad Mw/Mn value under the optimized culture conditions. Owing to its ability to biosynthesize low Mw PHB, the microorganism can be exploited as a suitable candidate for the production of diverse low Mw thermoplastics with distinct biodegradability and brittleness, consisting of 3-hydroxybutyrate, 3-hydroxyvalerate, or a combination of the two. Batch and fed-batch fermentation experiments with Priestia sp. strain JY310 for the low-cost production of low Mw PHB polymers, using various lignocellulose hydrolysates made by thermochemical hydrolysis as well as by combined thermochemical and enzymatic hydrolysis, are in progress. | 2023-01-12T16:26:21.181Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "fdd16701a5ac068016ab02121ddb378178f4bdd3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/13/1/131/pdf?version=1673257617",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "490b3d5528998a839107139710628cbba3962db0",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231679053 | pes2o/s2orc | v3-fos-license | Sex-specific differences in HPA axis activity in VLBW preterm newborns
Objective Sex-specific differences in hypothalamic–pituitary–adrenal axis activity might explain why male preterm infants are at higher risk of neonatal mortality and morbidity than their female counterparts. We examined whether male and female preterm infants differed in cortisol production and metabolism at 10 days post-partum. Design and methods This prospective study included 36 preterm-born infants (18 boys) with a very low birth weight (VLBW) (<1500 g). At 10 days postnatal age, urine was collected over a 4- to 6-h period. Glucocorticoid metabolites were measured using gas chromatography-mass spectrometry. Main outcome measures were: (1) cortisol excretion rate, (2) sum of all glucocorticoid metabolites, as an index of corticosteroid excretion rate, and (3) ratio of 11-OH/11-OXO metabolites, as an estimate of 11β-hydroxysteroid dehydrogenase (11β-HSD) activity. Differences between sexes, including interaction with Score of Neonatal Acute Physiology Perinatal Extension-II (SNAPPE II), sepsis and bronchopulmonary dysplasia (BPD), were assessed. Results No differences between sexes were found for cortisol excretion rate, corticosteroid excretion rate or 11β-HSD activity. Interaction was observed between: sex and SNAPPE II score on 11β-HSD activity (P = 0.04) and sex and BPD on cortisol excretion rate (P = 0.04). Conclusion This study did not provide evidence for sex-specific differences in adrenocortical function in preterm VLBW infants on a group level. However, in an interaction model, sex differences became manifest under stressful circumstances. These patterns might provide clues for the male disadvantage in neonatal mortality and morbidity following preterm birth. However, due to the small sample size, the data should be seen as hypothesis generating.
Introduction
Preterm birth has been associated with increased risks of mortality and morbidity (1). Studies among preterm infants showed that males have higher risks (2,3,4) of morbidities like respiratory distress syndrome (RDS), lateonset sepsis (LOS), bronchopulmonary dysplasia (BPD) and intraventricular hemorrhage (IVH), and more often require invasive respiratory support (2,4,5), than females. In childhood, male preterm infants had a greater odds of (major and minor) handicaps and had lower scores on tests of neurodevelopment than their female counterparts (5,6).
Integrity of the hypothalamic-pituitary-adrenal (HPA) axis is crucial during critical illnesses. Among preterm infants, the HPA axis seems to be essential for blood pressure maintenance (7) and has been presumed to play a role in the dampening of the immune response (8). During their first weeks of life, many preterm infants fail to mount an adequate cortisol response for the degree of stress or illness (9,10,11), termed relative adrenal insufficiency. After the second week of life, HPA axis has been shown to recover rapidly (12).
It has been suggested that sex-specific differences in HPA axis activity might explain part of the male disadvantage after preterm birth (13). Although sex differences in HPA axis activity have been postulated to emerge during puberty, recent evidence suggests that such differences are already present early in life (14,15). However, little is known about sex-specific differences in cortisol production and metabolism in preterm newborns in their first weeks of life. Earlier research in a small sample of preterm infants (n = 5) suggested that boys, in contrast to girls, had no cortisol response to arterial hypotension (13).
We aimed to examine whether male and female preterm infants differ in cortisol production and metabolism as assessed by glucocorticoid metabolite excretion in urine, as a possible explanation for the sex differences in neonatal mortality and morbidity.
Participants
This study is part of the early nutrition study, a double-blind randomized controlled trial (16) comparing donor mother's milk to preterm formula during the first 10 days of life, as an add-on to own mother's milk, in preterm infants with very low birth weight (VLBW), that is, a birth weight < 1500 g. VLBW infants admitted to one of six participating neonatal intensive care units (NICUs) throughout the Netherlands were enrolled between March 30, 2012 and August 17, 2014, as previously described (16). Exclusion criteria were maternal intoxications during pregnancy, major congenital anomalies or birth defects, congenital infections, perinatal asphyxia, and use of cow's milk prior to randomization. The study was approved by the medical ethical review committee of VUmc, and written informed consent was obtained from all parents. For this specific study, only infants admitted to the Amsterdam UMC location VUmc were included. Infants treated with interfering medication in the 5 days prior to sample collection (hydrocortisone, dexamethasone, ampicillin, neomycin, ketoconazole, miconazole and fluconazole (17,18,19)) were excluded.
Study protocol
Urine collections were planned at the 10th day of life. For this purpose, the external genitals were covered using a latex patch with gauzes within it to minimize urine absorption by the diaper. Diapers were closed properly to minimize evaporation and/or leakage of urine. After a 4-to 6-h period, the gauzes were removed and placed in a tube that was subsequently stored at −20°C. In case of contamination by stools or low urine output, the procedure was repeated.
Clinical data were collected from participants, including the Score of Neonatal Acute Physiology Perinatal Extension II (SNAPPE II) (20), and the presence of bronchopulmonary dysplasia (BPD) and sepsis were assessed. The SNAPPE II score is an illness severity score that can be used to predict mortality during NICU admission (21). BPD was defined as the need for supplemental oxygen for at least 28 days, and its severity was rated based on the need for supplemental oxygen or respiratory support at 36 weeks postmenstrual age, according to international criteria (22). Sepsis was defined as one positive blood culture with non-coagulasenegative staphylococci or one positive blood culture with coagulase-negative staphylococci in combination with a C-reactive protein level greater than 10 mg/L within 2 days of blood culture or two positive blood cultures with coagulase-negative staphylococci drawn within 2 days (16).
Laboratory analysis
Urine specimens were stored at −20°C and thawed only once just before analysis. After placement in a salivette, the pad was centrifuged at 1900 g for 5 min, enabling the extraction of urine.
Urinary steroids were determined using quantitative data produced by gas chromatography-mass spectrometry (GC-MS) analysis. Briefly, free and conjugated steroids were extracted from up to 5 mL of urine by solid phase extraction and the conjugates were enzymatically hydrolyzed. After recovery of hydrolyzed steroids by solid phase extraction, known amounts of internal standards (5α-androstane-3α,17α-diol, stigmasterol) were added to each extract before formation of methyloxime-trimethylsilyl ethers. GC was performed using an Optima-1 fused silica column (Macherey-Nagel, Dueren, Germany) housed in an Agilent Technologies 6890 series GC that was directly interfaced to an Agilent Technologies 5975 inert XL mass selective detector. MS was run in the selected ion monitoring mode (23). All samples were measured in the same batch.
Statistical analysis
All outcomes had non-normal distributions. We compared glucocorticoid parameters between sexes using the Mann-Whitney U-test. Next, linear regression models were used to correct these analyses for gestational age and birth weight. For this purpose, outcomes were Ln transformed. Effect modification by sex of the associations between morbidities and glucocorticoid parameters was tested by first including the variables sex (male = 1, female = 0) and SNAPPE II score, or the presence of sepsis (total) or BPD (yes = 1, no = 0), in the regression equation, followed by the inclusion of their product. These analyses were corrected only for gestational age. A P-value of < 0.05 was considered statistically significant.
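As an illustration, the interaction analysis described above could be sketched in Python with statsmodels as follows; the data file and column names are hypothetical placeholders, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per infant, with the outcome to be
# Ln-transformed, sex (male = 1, female = 0), BPD (yes = 1, no = 0),
# and gestational age as the only covariate, per the text.
df = pd.read_csv("glucocorticoid_data.csv")  # hypothetical file
df["ln_outcome"] = np.log(df["cortisol_excretion_rate"])

fit = smf.ols("ln_outcome ~ sex + bpd + sex:bpd + gestational_age",
              data=df).fit()
print(fit.summary())  # the sex:bpd term tests effect modification by sex
```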
Results
Urine samples were available from 40 infants (20 boys and 20 girls). Four of them were excluded because of protocol violations related to the collection of urine or the use of interfering medication in the 5 days prior to sample collection, leaving 36 infants for analysis. Their characteristics are shown in Table 1. Girls and boys did not differ in birth weight, gestational age or disease risks. The majority of the sample collections (n = 25, 70%) started in the morning. Glucocorticoid parameters were no different between subjects who provided urine samples in the morning vs those who provided urine samples at other times of the day (data not shown). Table 2 displays the glucocorticoid parameters by sex. There were no differences in cortisol excretion rate, corticosteroid excretion rate or 11β-HSD activity at 10 days post-partum between sexes. Linear regression analysis showed that sex differences remained absent after adjustment for gestational age and birth weight (data not shown). Table 3 presents the interaction models of sex with SNAPPE II score, sepsis (total) or BPD on glucocorticoid parameters, corrected for gestational age. We observed an interaction between sex and SNAPPE II on 11β-HSD activity (P = 0.04), with the interconversion favoring cortisol in girls with higher SNAPPE II. A tendency toward a possible interaction was observed between sex and sepsis on corticosteroid excretion rate (P = 0.09), with girls with sepsis having a higher corticosteroid excretion rate compared to boys with sepsis. Furthermore, an interaction was observed between sex and BPD on cortisol excretion rate (P = 0.04), with boys with BPD having a higher cortisol excretion rate compared to girls with BPD. In addition, a tendency toward a possible interaction was observed for corticosteroid excretion rate (P = 0.08), with boys with BPD having a higher corticosteroid excretion rate compared to girls with BPD, and for 11β-HSD activity (P = 0.08), with the interconversion favoring cortisol in girls with BPD.
Discussion
In our study among preterm VLBW infants, we did not find evidence for sex differences in HPA axis activity on a group level. However, in an interaction model, sex differences became manifest under stressful circumstances, reflected by a high SNAPPE II score, sepsis and BPD, including differences in cortisol excretion rate, corticosteroid excretion rate and 11β-HSD activity.
A previous study among preterm infants showed that HPA axis activity differed between sexes depending on the timing of antenatal betamethasone treatment (25). Females born less than 72 h after betamethasone exposure had higher urinary cortisol levels on day 1, if exposed to perinatal stress, than males under similar circumstances. Our study suggests that sex differences persist during the second week of life, which supports our hypothesis that mortality and morbidity are higher in preterm boys partly due to a lower capability to secrete cortisol for the degree of stress. Contrary to our expectation, we found that boys who developed BPD had an elevation in both cortisol excretion rate and corticosteroid excretion rate at 10 days post-partum as compared to girls who developed BPD. Earlier research in VLBW infants demonstrated that lower serum cortisol concentrations in the first week of life predisposed to chronic lung disease (26). In general, preterm boys are more prone to develop (severe) BPD than preterm girls (27,28). Therefore, we expected that boys who went on to develop BPD would have a lower, instead of a higher, corticosteroid excretion rate at 10 days post-partum than girls developing BPD. On the other hand, we found that the balance between the activities of the 11β-HSD isozymes was more toward cortisone in boys than in girls developing BPD, implying that the higher cortisol production in these boys is partly offset by increased elimination. However, our results should be balanced against the small size of our sample, in which girls with BPD were overrepresented.
Our study has several strengths. This is the first study exploring sex differences in cortisol production and metabolism in the second week of life in preterm VLBW infants as an explanation for the sex differences in neonatal mortality and morbidity as observed after preterm birth. Cortisol metabolites were measured with GC-MS, which enabled the calculation of both cortisol production and the interconversion with cortisone. Our study also has its limitations. First of all, our study had a small sample size, which increases the likelihood of chance findings. Therefore, replication of our findings in a larger, independent sample seems warranted. Secondly, urine samples were only collected at 10 days post-partum. Preferably, urine samples would have been collected at multiple days during the first weeks of life for a more precise assessment of HPA axis development. Thirdly, we had no information on the timing of antenatal corticosteroid treatment. Finally, for practical reasons related to the NICU setting we were not able to collect 24-h urine. However, young infants have not yet developed an adult-like circadian rhythm (29).
Conclusion
This study did not provide evidence for sex-specific differences in cortisol production and metabolism in preterm VLBW infants on a group level. However, in an interaction model, sex differences became manifest under stressful circumstances. These patterns might offer an explanation for the sex-specific differences in neonatal mortality and morbidity as observed after preterm birth. However, due to the small sample size, the data should be seen as hypothesis generating. | 2021-01-23T06:16:27.249Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "f8adf5e6ce205aebd01c3b1799011e9ee1833b75",
"oa_license": "CCBYNCND",
"oa_url": "https://ec.bioscientifica.com/downloadpdf/journals/ec/10/2/EC-20-0587.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a81ba351cbb142dfd729f38273d0e5437f04fc7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53776717 | pes2o/s2orc | v3-fos-license | Role action embeddings: scalable representation of network positions
We consider the question of embedding nodes with similar local neighborhoods together in embedding space, commonly referred to as "role embeddings." We propose RAE, an unsupervised framework that learns role embeddings. It combines a within-node loss function and a graph neural network (GNN) architecture to place nodes with similar local neighborhoods close in embedding space. We also propose a faster way of generating negative examples, called neighbor shuffling, which quickly creates negative examples directly within batches. These techniques can be easily combined with existing GNN methods to create unsupervised role embeddings at scale. We then explore role action embeddings, which summarize the non-structural features in a node's neighborhood, leading to better performance on node classification tasks. We find that the model architecture proposed here provides strong performance on both graph and node classification tasks, in some cases competitive with semi-supervised methods.
INTRODUCTION
Recently, work on (structural) role embeddings [5,20] has returned to fundamental questions about node positions in networks posed in the sociological networks literature [2,25]. In contrast to communitybased embeddings like DeepWalk/node2vec [9,19] which represent network neighbors close in embedding space, role embeddings place nodes with similar local structures close in embedding space. For instance, when embedding a social network, a community-based embedding places two friends close in embedding space, while role embedding places two people with similar local networks close in space (regardless of whether they know one another).
Rapid progress on graph neural networks (GNNs) [11,14] offers the possibility of scaling role embeddings to large graphs [29]. To learn role embeddings with GNNs, we propose an unsupervised within-node loss function which places nodes close in embedding space when their local neighborhood structures are similar. This allows GNNs to produce role embeddings similar to analytical methods [5], while remaining inductive and more scalable than comparable role embedding techniques [12,20]. By simply changing the loss function for an unsupervised GNN, we can improve its ability to learn role embeddings.
We also introduce the concept of role action embeddings, in contrast to the more familiar structural role embeddings. By "action", we mean any non-structural features of nodes, such as the words used by a paper in a citation graph. Whereas structural role embeddings [5,12,20] place nodes with similar network neighborhoods close in embedding space, role action embeddings propagate node actions along the graph, representing nodes as similar when their local neighborhoods contain similar action profiles filtered through similar structures. In addition to being a meaningful theoretical distinction, this choice has practical significance: it improves performance on node classification tasks. Table 1 lays out where this paper fits in with recent literature on node embeddings.
To implement role embeddings with the loss function proposed, we use a modeling framework which we call RAE (short for "role action embeddings") 1 . RAE is based on GraphSAGE [11] with several important distinctions. It overcomes the underfitting problem sometimes observed with unsupervised GNNs [26] while being simpler than the standard GraphSAGE model. RAE achieves strong results on node and graph classification tasks, competitive with semi-supervised methods for the former and outperforming more complex kernel methods for the latter.
CONTRIBUTIONS
• We propose a within-node loss function which creates high-quality role embeddings, is compatible with scalable GNN architectures, and requires less graph information than an adjacency-based loss function
• We propose an unsupervised model architecture which learns quickly and is simpler than many alternatives
• We show that, for node classification tasks, focusing on action vectors of nodes leads to increased unsupervised performance over the common practice of concatenating action and structural features
• We introduce neighbor shuffling to quickly create training examples within batches
• We evaluate this framework (called RAE) in a variety of settings, finding good performance on both node and graph classification tasks
SETTING AND MODEL
We have a graph G = (V, E) which is treated as undirected. Each node in G has a vector of attributes X = (X_s, X_a), which can be divided into structural features X_s and action features X_a. Structural features can include degree, clustering coefficient, and centrality measures. However, we assume only node degrees are readily available. Action features are characteristics of nodes, such as words in documents, chemical properties of molecules, or actions of people. The goal is to find d-dimensional embeddings z_u ∈ R^d for each node u ∈ V. Importantly, z_u is required to be inductive, meaning embeddings of unseen nodes can be obtained.

We build on the GraphSAGE framework [11] for this task. GraphSAGE takes a depth parameter K and has two basic operations: combine and aggregate. At each depth k ∈ {1 . . . K}, the representation from the previous layer h^{k-1}_u is updated by sampling and aggregating the representations of u's neighbors.

1 Code used in this paper may be found at https://github.com/georgeberry/role-action-embeddings.
We build on the GraphSAGE framework [11] for this task. Graph-SAGE takes a depth parameter K and has two basic operations: combine and aggregate. At each depth k ∈ {1 . . . K }, the representation from the previous layer h k −1 u is updated by sampling 1 Code used in this paper may be found at https://github.com/georgeberry/ role-action-embeddings. [9,11,19]. We term this J neiдhbor because it creates positive samples from neighbors in the graph: here, u and v are neighbors so v is a positive example for u. On the right is the within-node loss function J wit hin we propose in this paper. It uses as positive examples two samples from u's own neighborhood. Nodes are colored by distance from a focal node, indicating that GNNs iteratively summarize further reaches of a focal node's neighborhood.
Note that usually h^0_u = x_u. There are many possible choices for both combine and aggregate [4,11,26]. Common choices for combine are concatenation and mean, and common choices for aggregate are mean, max pool, summation, and LSTM. A fuller discussion of the strengths of different frameworks can be found in [26]. The GraphSAGE framework is in some ways comparable to the Graph Convolutional Network (GCN) framework of [3,14], although it is more scalable [29].
We propose a modification of GraphSAGE called RAE. There are several important differences from a standard GraphSAGE model. First, we ignore the combine operation entirely, and always include u whenever the neighbor function N(u) is used. For clarity, we write Ñ(u) = N(u) ∪ {u}. In combination with an elementwise mean aggregate function, this has the effect of blending together u's embedding with those of its neighbors.

Second, RAE uses a tanh activation function rather than the standard ReLU. Through experiments, we found that this dramatically increased unsupervised model performance. Algorithms 1 and 2 lay out the specifics of these choices.
Additionally, we make two practical decisions which improve performance. First, when generating two embeddings z_u and z_v which will be multiplied in a loss function, we generate z_u and z_v from two separate models, similarly to word2vec. Finally, GNNs have the powerful property of providing a distinct embedding at each aggregation level {1 . . . K}. We specify K = 2 below, and use both first- and second-step embeddings for prediction. This additional information improves model performance and is essentially free, since no additional model training is needed.
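As a concrete illustration, the following is a minimal PyTorch sketch of one RAE-style aggregation layer under the choices just described: the focal node is always included in an elementwise mean over its sampled neighbors (there is no separate combine step), and tanh replaces the usual ReLU. The tensor layout is an assumption for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class RAELayer(nn.Module):
    """One aggregation step: elementwise mean over {u} ∪ N(u), then linear + tanh."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h_self, h_neigh):
        # h_self: (batch, in_dim); h_neigh: (batch, n_samples, in_dim)
        stacked = torch.cat([h_self.unsqueeze(1), h_neigh], dim=1)
        return torch.tanh(self.lin(stacked.mean(dim=1)))
```

With K = 2, two such layers are stacked, and both the depth-1 and depth-2 outputs can be kept for downstream prediction, matching the practice described above.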
Within-node loss function for role embeddings
Consider two neighboring nodes, (u, v) ∈ E. The unsupervised loss function proposed by [11] seeks to place u and v close in embedding space by treating z_v as a positive example for z_u:

J_neighbor(z_u) = −σ(z_u^T z_v) − Q · E_{w∼P(u)} σ(−z_u^T z_w).    (1)

We refer to this as the "between node" or "neighbor" loss function.
Here, σ represents the logsigmoid transformation, Q is the number of negative samples, and P(u) samples a random node not adjacent to u. The intuition is close to that from word2vec [15] and associated methods applied to graphs such as DeepWalk [19] and node2vec [9]. Essentially, J_neighbor treats u as a collection of its neighbors v.
An alternate way to think about u's position in embedding space is as a collection of substructures in its own local network neighborhood. Assume we take two samples u_1 and u_2 from u's local neighborhood to create embeddings z_u1 and z_u2. We would like these embeddings to be close in space, while the embedding z_w of a random node w ≠ u should be more distant. The intuition here is closer to deep graph kernel techniques [16,17,27]. Let R(u) sample any w ≠ u at random. Then, we can create a within-node loss function

J_within(u) = −σ(z_u1^T z_u2) − Q · E_{w∼R(u)} σ(−z_u1^T z_w),    (2)

which will place nodes close in embedding space when they themselves have similar local neighborhoods. For some graph structures, both J_neighbor and J_within will lead to similar embeddings, but this is not true in all cases. A simple example of divergence can be seen in Figure 2, where for depth K = 1, J_neighbor would consider u and v different since their two-hop neighbors have different degrees. On the other hand, J_within would consider u and v similar since the K-step neighborhoods are identical.
This highlights an important difference between J_neighbor and J_within: J_within requires relatively less graph information for each training pair, since it relies on the K-step neighborhood of u rather than the (K+1)-step neighborhood. This offers the possibility of faster training for large graphs, for instance by parallelizing individual neighborhoods. Below, we find that J_within produces better role embeddings on ground-truth graphs when measured by silhouette score compared to J_neighbor.
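A hedged sketch of J_within, with σ the logsigmoid as in the text: z_u1 and z_u2 are embeddings of two samples from each focal node's neighborhood with shape (batch, d), and z_neg holds the Q negative embeddings with shape (batch, Q, d). Names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def j_within(z_u1, z_u2, z_neg):
    # Positive term: two samples from the same node's neighborhood.
    pos = F.logsigmoid((z_u1 * z_u2).sum(dim=-1))                 # (batch,)
    # Negative term: Q embeddings of samples from other nodes via R(u).
    neg = F.logsigmoid(-(z_u1.unsqueeze(1) * z_neg).sum(dim=-1))  # (batch, Q)
    return -(pos + neg.sum(dim=1)).mean()
```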
Neighbor shuffling
We expect embeddings based on two samples from u's neighborhood to naturally be more similar than embeddings based on a sample from u and a sample from a random node v. If it is too easy for the model to distinguish samples from u and negative samples from v, this could limit embedding quality.
To address this, we introduce neighbor shuffling to create harder negative examples. This uses the neighbors of some random node v instead of u to create negative examples. Practically, this means that we choose some v ≠ u in line 2 of Algorithm 2. Neighbor shuffling can be easily implemented within batches by permuting the within-batch adjacency list. This approach is similar to the idea of a corruption function in [23], with an important distinction: we shuffle the neighbors rather than the features of u, which we found leads to better performance with RAE.
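In code, neighbor shuffling amounts to a single permutation of the within-batch adjacency structure; the tensor layout below is an illustrative assumption.

```python
import torch

def shuffle_neighbors(neigh_idx):
    # neigh_idx: (batch, n_samples) sampled neighbor indices per node.
    # After permuting rows, node i is paired with node perm[i]'s neighbors,
    # yielding harder negatives than pairing with random nodes alone.
    perm = torch.randperm(neigh_idx.size(0))
    return neigh_idx[perm]
```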
ROLE EMBEDDING PERFORMANCE ON EXEMPLAR GRAPHS
Past work on structural role embeddings has studied the barbell and house graphs as visual tests of role embedding performance [5,20]. Figure 3 displays our approach in comparison to two alternate methods. We use depth K = 2, sorted vectors of neighbor degrees as node features, and train for 100 epochs. We sample 2 neighbors at depth K = 1 and 4 neighbors at depth K = 2.
We compare to GraphWAVE [5] and node2vec/DeepWalk [8,19]. GraphWAVE is a matrix factorization approach to structural role embeddings based on heat diffusion. Because of GraphWAVE's strong performance for role embeddings, we consider it close to ground truth for this task. node2vec/DeepWalk provides a contrast between role embeddings and community-based embeddings. As expected, the structural role embeddings produced by our method display some variance when compared to GraphWAVE, but they reproduce the pattern of that strong baseline.
We compare the embeddings produced by J_within and J_neighbor in Table 2 on otherwise similar models using silhouette scores, finding that J_within ranks higher, particularly for the house graph. This makes sense, since J_neighbor represents nodes as combinations of their neighbors, leading to somewhat higher variance within roles. Both loss functions produce reasonable role embeddings in combination with structural features.

Table 2: Silhouette scores (mean ± SD).

                J_within       J_neighbor
House graph     91.2 ± 2.9%    89.6 ± 4.2%
Barbell graph   69.6 ± 8.1%    69.5 ± 6.8%
EXPERIMENTS
We now turn to the performance of RAE on two types of tasks: node classification and graph classification.
Intuitively, good representations of a node's neighborhood should allow classifying nodes well in an unsupervised setting. Further, the combination of embeddings for all nodes in a graph should provide distinct graph vectors suitable for graph classification. This treats a graph as a collection of roles. We note that these two tasks put RAE in comparison with two distinct lines of research: the first on node embeddings [9,11,19] and the second on graph kernels [18,21,24,27].
Model setup
We describe the model architecture in Algorithms 1 and 2. We choose depth K = 2, and sample 10 neighbors at depth K = 1 and 25 neighbors at depth K = 2, for a total receptive field of 261 nodes (ego + 10 at depth 1 + 250 at depth 2). We use a dropout rate of 0. The model is trained with 5 positive examples and 20 negative examples, using J_within or J_neighbor directly with no margin. When using J_within, half of the negative examples are generated via neighbor shuffling and half are generated randomly from R(u). For J_neighbor, all negative examples are generated from non-neighbors with P(u). Models are trained with Adam [13] and initialized with Xavier uniform weights [7].
In the case of sparse input vectors representing text in citation networks, no preprocessing is performed. This means directly inputting high-dimensional (500 to 3703) sparse vectors into the model. Following common practice, we use embeddings in an L2penalized logistic regression model. In the multiclass case, a onevs-rest classifier is used. To obtain an accurate picture of both performance and variance, we report results from 20 runs, providing mean performance and standard deviation of the mean 2 .
Our choices of d = 256, K = 2, and a receptive field of 261 are equivalent or conservative compared to prior research on GNNs. For instance, [23] use many fewer parameters than a GraphSAGE-style model which has both hidden and output dimensions of 256.

Table 5: Accuracy and mean standard errors of various methods on three standard node classification tasks. The left column indicates which information is available to the algorithm: A the adjacency matrix, X_s structural features, X_a action features, Y the outcome of interest. d is either the embedding dimension in the unsupervised case or the dimension of the last hidden layer in the supervised case. K is the maximum depth in the network that the model has access to. The first block presents unsupervised baselines, the second block presents results from our modeling architecture, and the third block presents semi-supervised models. The best unsupervised model is bolded, and the second best is underlined.
Graph classification results
We choose six standard datasets studied in the graph classification literature (full dataset descriptions can be found in [27]), with results presented in Table 4. Most of these datasets do not have features beyond those from the graph itself 3. In these cases, we label each node with the sorted degrees of its neighbors, capped at 30. When a node has more than 30 neighbors, we sample randomly; when it has fewer, we pad with zeros. For MUTAG, IMDB-B, and REDDIT-B, only 4 neighbors are sampled at depth 1 and 4 at depth 2, for a total receptive field of 21. For the rest of the graphs we use the parameters described above. We observed that training did not substantially improve performance, so these models are untrained. They therefore indicate the raw expressive power of the model. A discussion of different GNN architectures and theoretical performance guarantees can be found in [26].
Since each node has a d-dimensional vector, we need to choose a "readout" function to produce graph vectors from node vectors. We choose a simple summation to represent the embedding for graph G_i: z_{G_i} = Σ_{u∈G_i} z_u. Graph vectors z_{G_i} are then employed in a one-vs-rest classification problem using 10-fold cross validation.

Figure 4: Visualization of embeddings on three citation graphs using RAE with J_within, reduced to two dimensions using TSNE. For Cora and Citeseer, we used the entire dataset, while we visualize the test set for Pubmed. Clusters are quite distinct in all cases. On Cora, this model achieves a silhouette score of 0.166.
In the binary classification case, we use the test folds to select the probability cutoff which maximizes test fold accuracy.
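The readout and downstream classifier can be sketched as follows; per_graph_embeddings and labels are hypothetical stand-ins for the list of per-graph node embedding matrices and the class labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def graph_vector(node_embeddings):
    # node_embeddings: (n_nodes, d) array for one graph; sum readout.
    return node_embeddings.sum(axis=0)

Z = np.stack([graph_vector(z) for z in per_graph_embeddings])
clf = OneVsRestClassifier(LogisticRegression(penalty="l2", max_iter=1000))
clf.fit(Z, labels)
```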
RAE outperforms two strong baselines on many of the graph classification tasks: deep graph kernels [27] and Patchy-san [18]. We also include graph isomorphism networks (GIN) [26] to give an idea of the upper bound for model expressiveness if we were to use one-hot encodings for features, a summation aggregator, training, and a larger K. The simple mean framework we use performs quite well.
Node classification results
We choose three standard datasets which provide both graphs and rich features: Cora, Citeseer, and Pubmed. Dataset details can be found in Table 3. These are citation datasets where each node is a paper and each link is a citation. Each paper is annotated with a sparse vector representing the words used in the paper. We follow the procedure described in [14] for these datasets: 20 nodes are selected as training examples from each class, with 1000 nodes selected as test nodes and 500 as validation. We use the same splits used by Kipf and Welling 4. We optimized hyperparameters on the validation set given by Kipf and Welling, but do not make use of the validation set on the fly to determine early stopping. Hyperparameters were tuned on the Cora dataset, and these settings were then applied directly to Citeseer and Pubmed. Table 5 presents results of RAE compared to a variety of other unsupervised and semi-supervised baselines. The most direct comparison is with other unsupervised methods. All versions of RAE outperform DeepWalk-based methods (including node features directly). Moving to two more difficult baselines, RAE outperforms Deep Graph Infomax (DGI) [23] on Pubmed and EP-B [6] on Cora. DGI has access to depth 3 rather than the depth 2 used in our model, making the performance of RAE impressive. EP-B is an efficient method which does a single aggregation step, but also comes with a margin hyperparameter which must be tuned, and samples all neighbors of a focal node, as opposed to the fixed number used here.
Surprisingly, RAE with J_neighbor outperforms the supervised models on Pubmed, and scores the second highest overall. These results indicate that the modeling framework proposed here provides strong performance on these standard tasks despite model simplicity. J_within using only neighbor shuffling, the most scalable parameterization of RAE, performs comparably to DGI on Pubmed and only a shade below EP-B on Cora. The most puzzling part of these results is the weak performance on Citeseer, which could be related to the larger input dimension (3703). Regularization may help in this setting, but was not employed since it did not prove effective when developing the model on the Cora validation set.

4 The specific splits can be found here: https://github.com/tkipf/gcn.
The embeddings presented here do not incorporate structural features (e.g. degree, neighbor degrees, motif counts) in the input vector [1,11,12]. We therefore refer to them as action embeddings. We consistently found that including even node degree in the feature vector reduced performance. This implies that unsupervised GNNs are not adept at preserving both structural and action features without additional modeling work. Structural features easily overwhelm sparse action features.
The combination of J_within and J_between provides the strongest performance on Cora, but not on the other two datasets. While this may seem surprising, it is potentially related to the small size of the training set (20 nodes per class). As a final note, we have considered only transductive embeddings here 5, but RAE can be applied inductively as well.
CONCLUSION
We have presented RAE, a scalable methodology for learning role embeddings, and introduced the concept of role action embeddings. This creates a useful distinction in the type of information which may be incorporated into node embeddings, in addition to providing strong performance on several standard tasks.
We note that the performance of action feature embeddings is related to the type of task under consideration: when classifying papers into topic categories, it makes sense that paper text is important. However, without knowledge of how embeddings will be used beforehand, this distinction creates a challenge for researchers. A simple response is to train separate models for structural and action features. A direction for future work is to combine these two types of features with novel model architectures. In addition to the intuitive usefulness of role action embeddings, the modeling choices we have made are likely to be useful in other settings as well. | 2018-11-28T22:39:38.307Z | 2018-11-19T00:00:00.000 | {
"year": 2018,
"sha1": "def3d8eda88a548e8f690b90ec7ba0f9d3a78409",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1811.08019",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "def3d8eda88a548e8f690b90ec7ba0f9d3a78409",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219424504 | pes2o/s2orc | v3-fos-license | Is cannabidiol hepatotoxic or hepatoprotective: A review
Questions have been raised regarding the potential hepatotoxicity of cannabidiol (CBD). Conversely, several animal studies have demonstrated the hepatoprotective effects of CBD against bile duct ligation, cocaine, thioacetamide, alcohol, and several other chemicals. This review summarizes the current literature concerning the hepatic effects of CBD in humans and animals. Based on the available data, it may be concluded that there is a low probability of serious hepatotoxicity at the high therapeutic doses that are used and a much lower risk of adverse hepatic effects and a potential for hepatoprotection effects at the lower doses commonly used in dietary supplements and food products. However, a detailed safety study in rats using highly purified CBD rather than enriched Cannabis extracts is needed, enabling the determination of hepatic as well as other tissue effects and potential margin of safety.
Introduction
Cannabidiol (CBD) is a nonpsychoactive cannabinoid derived from the plant Cannabis sativa and has been the subject of much discussion and marketing in recent years. It is a major component of Epidiolex ® , a drug approved for the treatment of drug-resistant seizure disorders. [1][2][3][4][5] However, there is great interest in its use either as a dietary supplement or as an over-the-counter product for a wide range of health benefits including pain management, relaxation and stress relief, sleep aid, antidepressant, antioxidant, anti-inflammatory, neuroprotective, and other indications. 6 Typical doses of CBD that have been used for seizure disorders and psychotic conditions are in the range of 10-20 mg/kg/day, with the higher dose being most commonly used. [1][2][3][4][5]7 At these doses, the most common adverse events have included somnolence, diarrhea, decreased appetite, fatigue, and, less frequently, elevated serum aminotransferases. [1][2][3][4]8,9 Several animal studies have raised questions regarding the potential hepatotoxicity of CBD. [10][11][12] However, these studies may have generated more questions than they have answered, as will be discussed below.
The central questions, then, are whether CBD is safe and free of serious adverse events, including hepatotoxicity, at the doses that are commonly and widely used in dietary supplements and foods. These supplement- and food-related doses are typically in the range of 25-100 mg/day, as compared to doses of approximately 1000-1400 mg/day for a 70 kg (154 lb.) human for neurological disorders. These questions will be addressed, and both human and animal studies related to hepatic effects are reviewed.
Human studies
A number of human studies have addressed the safety, including the hepatotoxic potential, of the CBD drug product Epidiolex in conjunction with its efficacy in seizure and psychotic disorders. A primary issue in assessing safety and the potential existence of adverse events of CBD in these studies is the concurrent use of one to five other medications for neurological disorders. One of the most prominent of these other drugs is valproic acid, which is known for its hepatotoxicity. 19,20 As a consequence, based on human studies, it is unclear whether adverse hepatic effects are due to the valproic acid or are caused or potentiated by high doses of CBD.
In an open label study involving 162 patients with treatment-resistant epilepsy, all the subjects were given oral doses of 2-5 mg/kg/ day of CBD for 12 weeks. 1 It was noted that 7% of the patients experienced slightly elevated transaminases, with one patient being withdrawn from the study due to high levels. All patients were concomitantly taking valproic acid.
In a 14-week study, 120 children with drug-resistant Dravet syndrome were given CBD orally at a dose of 20 mg/kg/day. 2 Elevated aminotransferase levels occurred in 12 patients, all of whom were taking valproic acid. In the nine cases in which the patients continued the trial, the enzyme levels returned to normal while they were still receiving CBD.
An open-label extension trial involving long-term CBD treatment of Dravet syndrome patients was conducted. 3 A total of 264 patients were enrolled and completed a median treatment duration of 274 days with a modal CBD dose of 21 mg/kg/day. Patients also received a median of three concomitant antiepileptic drugs. Twenty-two patients, who were also taking valproic acid, experienced elevated serum aminotransferase levels greater than three times the upper limit of normal.
In a placebo-controlled double-blind study involving treatment-resistant Lennox-Gastaut syndrome, 4 76 patients received CBD at a dose of 20 mg/kg/day and 73 patients received 10 mg/kg/day for 14 weeks. Increases in serum aminotransferase levels greater than three times the upper limit occurred in 11 patients at the high dose and 3 patients at the low dose. Of these 14 patients, 11 were also receiving valproic acid. The elevated serum enzyme levels resolved either spontaneously during treatment (five patients) or after reducing the dose of CBD, discontinuing CBD, or reducing the dose of another antiepileptic drug (nine patients).
An open-label study was conducted in 55 epilepsy patients with CDKL5 deficiency disorder and with Dup15q and Doose syndromes. 5 The average oral CBD dose at 48 weeks of treatment was 28.9 mg/kg/day. All patients were receiving other antiepileptic medications. Four patients withdrew from the study, citing adverse events as the reason. The most frequently noted adverse events were diarrhea (29%), fatigue (22%), somnolence (22%), convulsions (9%), status epilepticus (9%), and respiratory infections (5%). No mention was made of elevated serum aminotransferase enzymes or hepatotoxicity.

Taken together, the above studies indicate a low level of hepatic effects, with resolution upon continued use of the product in some cases and only a few subjects being withdrawn from the studies as a result.
Animal studies
Few published, peer-reviewed, well-designed animal safety studies exist involving orally administered, highly purified CBD devoid of tetrahydrocannabinol (THC). Marx et al. 10 conducted a 14-day oral dose range-finding study in rats treated with 1000, 2000, and 3000 mg/kg/day of a Cannabis supercritical fluid extract that contained 25% CBD. The product also contained 61% fatty acids; 13% a combination of plant sterols, triterpenes, and tocopherols; and less than 1% of the psychoactive THC. The authors were unable to determine a no-observed-adverse-effect level (NOAEL). One animal died after several doses of 4000 mg/kg of the extract. No median lethal dose (LD50) was determined.
In male rats, the increase in serum alanine transaminase (ALT) was not significant even at the highest dose, while a twofold increase in ALT occurred in female rats at the 2000 and 3000 mg/kg doses. 10 Small, less than twofold increases occurred in serum alkaline phosphatase (ALP) levels at the 3000 mg/kg dose in both male and female rats. The greatest increases were observed in serum gamma-glutamyl transferase (GGT) levels, at all doses of the extract, in both male and female rats in this 14-day study. Taken together, the results indicate mild hepatotoxic effects at high doses of the 25% CBD extract.
These authors also conducted a 90-day repeated-dose toxicity study in rats that orally received 100, 360, or 720 mg/kg/day of the extract containing 25% CBD. 10 No increases were observed in this 90-day study in serum ALT or aspartate transaminase (AST) levels in either male or female rats at any of the doses. No increases in serum ALP levels were observed in male rats, and a less than threefold increase in ALP occurred in female rats at the 720 mg/kg dose. An increase of less than threefold occurred in serum GGT in male rats at the 720 mg/kg dose, while a less than fivefold increase in serum GGT was observed in female rats at this highest dose. Levels of all these serum parameters had returned to normal or were approaching normal at the end of the 28-day recovery period. These results indicated that the hepatic effects of this extract in rats were mild and reversible at doses as high as 180 mg of CBD (720 mg of the extract)/kg for 90 days.
Ewing et al. 11 conducted an acute hepatotoxicity study of a CBD-rich Cannabis extract in mice. The extract contained about 58% CBD and approximately 4.8% other cannabinoids including 1.69% THC. Approximately one-third of the components in the extract were not identified. Doses of CBD were based on the CBD content of the extract. In an acute toxicity study, mice were treated with 0, 246, 738, or 2460 mg/kg of CBD as a single oral dose. No animals died after a single dose up to and including 2460 mg/kg of CBD. At the highest dose, no effects were observed for ALP or GGT, while less than three-fold increases were observed for serum ALT, AST, and total bilirubin. Small increases in liver-to-body weight ratios were also observed.
These authors 11 also conducted a 14-day study with daily oral doses of 0, 61.5, 184.5, or 615 mg/kg of CBD. Hepatic effects were observed only at the highest dose, with less than threefold increases occurring in serum AST and ALT. Total bilirubin was also increased. No effects of any dose of CBD were observed for ALP or GGT. A dose-dependent increase in liver-to-body weight ratios was observed. At the 615 mg/kg dose, more than 50 hepatic genes were differentially modulated, including genes linked with oxidative stress, lipid metabolism, and drug metabolism. However, no significant differences in serum glutathione (GSH) were noted at any dose of CBD, suggesting a lack of oxidative stress. Based on these studies, the authors reported at most modest liver injury yet claimed CBD-induced hepatotoxicity.
Ewing et al. 12 also published a second study using the CBD-rich (58%) Cannabis extract. Mice were gavaged daily for 3 days with 116 mg/kg of CBD. The CBD-treated animals were given 400 mg/kg of acetaminophen (APAP) intraperitoneally (i.p.) on day 4 to induce hepatotoxicity, resulting in 37.5% mortality. No animals died with APAP alone. GSH depletion and oxidative stress were confirmed by microscopic examination. However, when mice were treated orally for 3 days with 290 mg/kg of CBD followed by APAP on day 4, no mortality occurred, with no GSH depletion, and no histopathological effects were observed. Thus, the high (290 mg/kg) dose of CBD appeared to be hepatoprotective with respect to a lethal dose of APAP. These disparate effects may have been due to the antioxidant/prooxidant properties of CBD and will be discussed below. The authors emphasized that their results highlighted the potential for CBD/drug interactions.
The adverse effects associated with the administration of CBD to healthy dogs at doses of 10 and 20 mg/kg/day for 6 weeks have been reported. 13 The dogs were treated with CBD in a topical cream, in a capsule form, or in an oil. The only significant change in a biomarker during the study was an increase in serum ALP, which occurred in about one-third of the dogs. All dogs experienced diarrhea, regardless of dose or formulation. The authors reported no evidence of short-term hepatotoxicity, with bile acid levels remaining normal throughout the study, and suggested that longer-term studies were warranted. As with other studies, the products used in this study contained various amounts of THC as well as other cannabinoids, and therefore uncertainties remain regarding the cause of the observed effect on ALP.
These authors also studied the effect of administering the same three CBD products (CBD in a topical cream, a capsule form, or in an oil) in addition to conventional antiepileptic treatment on the frequency of seizures in dogs with idiopathic epilepsy for 12 weeks. 14 As was the case in healthy dogs, 13 the primary adverse finding was a significant elevation in serum ALP. Whether the elevation was related to an effect of CBD or other components in the products on bone, liver, intestine, or another organ was not known. No adverse behavioral effects were noted.
Several studies have been reported by Magen et al. 15,16 regarding the ability of CBD to ameliorate toxicity caused by bile duct ligation in mice, a model of chronic liver disease. In the initial study, 15 bile duct-ligated mice were given either the vehicle or 5 mg/kg of CBD i.p. daily for 4 weeks. The ligated mice exhibited cognitive and locomotor impairment, increased expression of the tumor necrosis factor-1 (TNF-1) receptor gene, and reduced expression of the brain-derived neurotrophic factor (BDNF) gene. In CBD-treated mice, cognitive impairment and locomotor function improved, while CBD reduced TNF-1 gene expression and increased BDNF gene expression.
In a subsequent study, 16 the ability of 5 mg/kg of CBD i.p. for 4 weeks to reverse the effects of bile duct ligation in mice with respect to locomotion, cognitive function, and the expression of genes associated with TNF-1 and BDNF was examined. The authors concluded that the cognitive impairment and decreased locomotion from bile duct ligation resulted from both neuro-inflammation and 5-hydroxytryptamine-A1 (5-HT 1A ) receptor downregulation. Furthermore, CBD reversed these effects through a combination of anti-inflammatory activity and activation of the 5-HT 1A receptor.
Avraham et al. 17 studied the effects of CBD on mice with experimentally induced liver failure, produced by treating the mice with 200 mg/kg of thioacetamide i.p. The mice were treated one day after the thioacetamide with either 5 mg/kg of CBD i.p. or the vehicle. Neurological and motor functions were evaluated 2 or 3 days after the liver failure, respectively. Cognitive and neurological functions of the mice were severely impaired, while 5-hydroxytryptamine (5-HT) levels were enhanced following the thioacetamide treatment; these functions were restored and normalized by CBD treatment. The decreased locomotor function produced by thioacetamide was partially restored by CBD. The authors also showed that the 5 mg/kg dose of CBD gave a maximal effect compared to doses of 1 and 10 mg/kg.
The ability of CBD to protect against hepatic toxicity and seizures produced by cocaine in mice was demonstrated by Vilela et al. 18 CBD was given i.p. 30 min prior to the administration of 75 mg/kg of cocaine i.p. CBD reduced acute liver damage and prevented seizures induced by the cocaine. A dose of 30 mg/kg of CBD provided greater protection than 60 and 90 mg/kg. A previous study by these authors showed that CBD inhibited hyperlocomotion produced by D-amphetamine (5 mg/kg) and ketamine (60 mg/kg). 19 CBD was given in doses of 15-60 mg/kg immediately after the two psychotomimetic drugs, with 30 mg/kg being the optimal dose. All substances were given i.p.
CBD was shown to attenuate alcohol-induced hepatic steatosis, metabolic dysregulation, inflammation, and neutrophil-mediated injury in mice. 20 Mice were fed a liquid diet containing 5% ethanol for 10 days and, on day 11, were gavaged with a single dose of 5 g/kg of ethanol. The mice had been given CBD at a dose of 5 or 10 mg/kg i.p., or the vehicle, for the 11 days of ethanol exposure. CBD treatment significantly attenuated the ethanol-induced elevation of serum AST and ALT levels and the liver-associated increases in triglycerides, fat droplets, protein 3-nitrotyrosine and 4-hydroxynonenal formation, lipid peroxidation, inflammation (increased messenger RNA expression of interleukin-6 and other mediators of inflammation), and neutrophil accumulation.
Discussion
Concerns have been raised regarding the potential hepatotoxicity of CBD. [9][10][11][12][13] In conjunction with assessing the efficacy of CBD in various neurological diseases, the potential hepatic effects have been determined in human subjects who received CBD in daily doses of 10-29 mg/kg. [1][2][3][4][5] In addition to the high doses of CBD, essentially all patients in these studies were taking at least one antiepileptic medication, the most common being valproic acid, a drug well known to be associated with hepatotoxicity. 21,22 Many variables impact the assessment and comparison of the human and animal studies that have been conducted on the hepatotoxicity versus hepatoprotection of CBD. Pharmacokinetic differences, metabolic differences, the condition of the liver, doses, durations of studies, purity of products, concurrently administered drugs, and healthy versus diseased states, as well as the paucity of published studies, complicate the assessment picture. However, some information can be gleaned from the extant literature.
It is noteworthy that, with few exceptions, the hepatic effects in these human studies were mild, with small, less than threefold increases in AST and ALT. In several studies, it was noted that the elevated plasma levels of AST and ALT returned to normal for some subjects during the study 2,4 and after reducing the dose of another antiepileptic drug, reducing the dose of CBD, or discontinuing the CBD. 4 In one study, it was stated that although AST and ALT were modestly elevated, no hepatic damage occurred. 1 Based on these human studies, which are limited in number, it can be concluded that the incidence of hepatotoxicity due to CBD was low and that, in only several cases, aminotransferases were sufficiently elevated to warrant withdrawal of the subjects from the study. It appears that the increased AST and ALT levels may most frequently have been due to the augmentation by CBD of the hepatic effects of the antiepileptic medications, although effects due to the high dose of CBD cannot be ruled out.
The concurrent use of valproic acid and/or other antiepileptic drugs known to cause hepatotoxicity constitutes a major confounding factor in these studies. To determine the actual incidence of adverse hepatic effects due to CBD, studies are needed in which CBD is given without the concomitant administration of antiepileptic drugs such as valproic acid.
The hepatotoxicity of valproic acid in patients with epilepsy has been reviewed. 23 These authors summarized the available mechanistic literature regarding formation of valproic acid reactive metabolites, excess oxidative stress, altered fatty acid metabolism, and genetic variants of some enzymes such as glutathione transferases, uridine diphosphate (UDP)-glucuronosyltransferases, superoxide dismutase, and mitochondrial polymerase gamma.
Both CBD and valproic acid are known to undergo metabolism, and a metabolite of valproic acid (2-ene-valproic acid) has been shown to be hepatotoxic. 23,24 Because there are some structural similarities between the metabolites of CBD and valproic acid, CBD may potentiate the hepatotoxicity of valproic acid via this mechanism. 24

Several animal studies have specifically addressed the potential hepatotoxic effects of CBD-containing products. [10][11][12][13][14][15] Unfortunately, in one case, the product contained approximately 25% CBD, 10 and in the other cases, it contained approximately 58% CBD. 11,12 As a consequence, it is not possible to attribute the observed effects to CBD, recognizing that the other constituents in the products could have attenuated, enhanced, or had no effect on the hepatic outcomes. Although NOAELs were determined for male and female rats in the study of Marx et al., 10 because the extract contained only 25% CBD, it is not known how these values would relate to a product containing pure CBD. The authors did follow the standard Organization for Economic Cooperation and Development (OECD) 407 and OECD 408 protocols for their 14-day and 90-day studies, respectively.
The study of Ewing et al. 11 in mice, in addition to using a product that contained only 58% CBD, presents a number of additional issues. The authors did not determine an LD50, although a single dose of 2460 mg/kg of CBD was not lethal. As points of comparison, the LD50 of table salt (sodium chloride) is about 3000 mg/kg in rats and 4000 mg/kg in mice, while the LD50 of caffeine is about 200 mg/kg in rats and 150 mg/kg in mice. The authors also failed to determine the no-observed-effect level (NOEL) and the NOAEL, which enable the determination of margins of safety. If one assumes, based on the data provided, that the NOAEL in mice was 61.5 mg/kg and this is directly extrapolated to a 60 kg human per Food and Drug Administration (FDA) guidelines, this would represent a single dose of 3690 mg of CBD in humans, as compared to 1200 mg for a 20 mg/kg dose, which is commonly used in seizure disorders [1][2][3][4][5]7. The authors concluded that "CBD exhibited clear signs of hepatotoxicity" and "it poses a risk for liver injury" but failed to emphasize the doses required to do so. 11 One can also conclude that the potential for hepatotoxicity is vastly overstated by the authors and draw the opposite conclusion based on the data provided.
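To make the dose comparison above concrete, here is a minimal Python sketch of the direct mg/kg extrapolation described in the text; the function name and the 60 kg default are our own illustrative choices, and note that FDA guidance also describes body-surface-area scaling, which would give lower human equivalent doses.

```python
def human_equivalent_dose_mg(animal_dose_mg_per_kg: float,
                             body_weight_kg: float = 60.0) -> float:
    """Directly extrapolate an animal mg/kg dose to a total human dose (mg).

    This mirrors the direct mg/kg scaling used in the text; body-surface-area
    scaling, also described in FDA guidance, would yield lower values.
    """
    return animal_dose_mg_per_kg * body_weight_kg

# Assumed mouse NOAEL (61.5 mg/kg) vs. a common 20 mg/kg therapeutic dose.
print(human_equivalent_dose_mg(61.5))  # 3690.0 mg, as quoted above
print(human_equivalent_dose_mg(20.0))  # 1200.0 mg
```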
In the second study of Ewing et al. 12 in mice, a CBD dose of 116 mg/kg was shown to enhance the hepatotoxicity of APAP, while a dose of 290 mg/kg protected the liver against APAP toxicity. The protective response at the high dose of CBD might be explained by that dose being sufficient to prevent oxidative damage from the free radicals and reactive oxygen species produced by the metabolism of APAP. The CBD and/or other constituents in the extract may have acted as free radical scavengers, antioxidants, and anti-inflammatory agents, 25 thereby preventing hepatotoxicity due to APAP.
It is not known whether the enhanced toxicity at the lower dose was due to CBD or to other constituents in the extract. Unless pure CBD is used, it is not possible to attribute the enhanced toxicity of the extract to CBD. The antioxidant and anti-inflammatory properties of CBD are well known, and the associated mechanisms have been extensively reviewed. 26 However, as with other antioxidants, CBD may act as a prooxidant under certain conditions, 27 which may have been the case with the lower dose of CBD in conjunction with APAP toxicity. 12 Finally, the authors of the hepatotoxicity studies in rats 10 and mice 11,12 failed to review studies that have shown effects contradictory to their own findings.
A series of studies have demonstrated that pure CBD can be hepato- and neuroprotective in mice against bile duct ligation 15,16 as a model of hepatic encephalopathy, thioacetamide-induced fulminant hepatic failure as a model of hepatic encephalopathy, 17 cocaine-induced hepatotoxicity and seizures, 18 and alcohol-induced hepatic steatosis, metabolic dysregulation, inflammation, and neutrophil-mediated injury. 20 In addition, CBD inhibited hyperlocomotion produced by D-amphetamine and ketamine. 19 Pure CBD, and not an enriched extract, was administered i.p. in all of the above hepatoprotective studies. [15][16][17][18][19][20] An effective CBD dose of 5 mg/kg/day reversed the adverse effects of bile duct ligation 15,16 and thioacetamide-induced liver failure. 17 A CBD dose of 30 mg/kg was most effective against the effects of cocaine, 18 D-amphetamine, and ketamine, 19 while a dose-response effect of CBD at 5 and 10 mg/kg was observed against alcohol-induced hepatic steatosis, metabolic dysregulation, inflammation, and neutrophil-mediated injury in mice. 20

As compared to the i.p. administration of 5-30 mg/kg hepatoprotective doses of CBD, [15][16][17][18][19][20] the oral administration of 290 mg/kg protected against the hepatotoxic effects of APAP. 12 Again, it should be noted that this latter extract product was only 58% CBD. In a pharmacokinetic study in which mice were given 120 mg/kg of CBD orally and i.p., 28 the i.p. administration yielded plasma maximum concentration and area-under-the-curve values that were 6.45- and 6.30-fold higher, respectively, than when the CBD was administered orally. These observations can be extrapolated to the APAP study, where 290 mg/kg of CBD given orally provided protection. 12 Recognizing that this may not have been the optimal dose, this dose would translate into an i.p. dose of CBD of approximately 45 mg/kg. Conversely, a dose of 30 mg/kg of CBD given i.p. that provided hepatoprotection 18,19 could translate into an oral dose of approximately 190 mg/kg, indicating that the results of the various studies involving i.p. administration are within the range of the oral dose used in the APAP study. 12

It is important to know whether CBD causes hepatotoxicity at therapeutic doses. However, for the general public, the question is whether CBD causes hepatotoxicity at the much lower doses that are more widely and commonly used in the form of dietary supplements and food products, that is, to deal with common ailments, such as pain management, relaxation and stress relief, sleep aid, and depression. As previously noted, the doses of CBD commonly used therapeutically as a drug to treat resistant neurological disorders such as seizures are about 20 mg/kg, or 1200 mg for a 60 kg individual. The doses of CBD that may be appropriate for providing relief from aches and pains, headaches, insomnia, and so on are in the range of 25-100 mg/day. Taken together, this suggests that the CBD-induced hepatotoxicity reported in mice in the recent literature is not pharmacologically relevant. 12 Based on the data provided, if one assumes that the NOAEL for CBD in mice was 184.5 mg/kg, 11 and this is extrapolated to a 60 kg human per FDA guidelines, this would represent a single dose of 11,070 mg of CBD in humans. If one assumes a typical daily dose of 50 mg of CBD, this yields a margin of safety of 221. This factor should be kept in mind when designing animal experiments and clinical trials with human subjects.
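As an illustration only, the route conversions and the margin-of-safety calculation above can be reproduced in a few lines of Python; the 6.45-fold exposure ratio is taken from the cited pharmacokinetic study, while the function names and the 60 kg body weight are our own assumptions.

```python
IP_TO_ORAL_EXPOSURE_RATIO = 6.45  # Cmax ratio (i.p. vs oral), 120 mg/kg CBD in mice

def oral_to_ip_mg_per_kg(oral_dose: float) -> float:
    """Approximate i.p. dose giving exposure similar to an oral dose."""
    return oral_dose / IP_TO_ORAL_EXPOSURE_RATIO

def ip_to_oral_mg_per_kg(ip_dose: float) -> float:
    """Approximate oral dose giving exposure similar to an i.p. dose."""
    return ip_dose * IP_TO_ORAL_EXPOSURE_RATIO

def margin_of_safety(noael_mg_per_kg: float, daily_dose_mg: float,
                     body_weight_kg: float = 60.0) -> float:
    """Directly extrapolated NOAEL-based human dose over a typical daily dose."""
    return noael_mg_per_kg * body_weight_kg / daily_dose_mg

print(round(oral_to_ip_mg_per_kg(290)))    # ~45 mg/kg i.p.
print(round(ip_to_oral_mg_per_kg(30)))     # ~194 mg/kg oral (~190 as in the text)
print(round(margin_of_safety(184.5, 50)))  # ~221
```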
Based on the available data, it may be concluded that there is a higher probability of serious hepatotoxicity at the high therapeutic doses that are used, particularly when used in conjunction with other antiepileptic drugs such as valproic acid, and a much lower risk of adverse hepatic effects, with the potential for hepatoprotection, at the lower doses commonly used in dietary supplements and food products. However, a safety study in rats using highly purified CBD rather than enriched Cannabis extracts is needed. Such studies should include an assessment of the LD50 and a 90-day subchronic toxicity study that enables the determination of hepatic as well as other tissue-specific effects, the NOEL, the NOAEL, and potential margins of safety.
Declaration of conflicting interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: SJS has served as a consultant for Boston Biopharm, Inc.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This review was funded by a grant from Boston Biopharm, Inc.
"year": 2020,
"sha1": "ff1107fdd2948087804d1b8b051de03d02f79f3f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/2397847320922944",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "92170d8b53788438ff57246f7cb5355445479b0d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Assessment of port governance model: evidence from the Brazilian ports
Purpose – Since the enactment of Act 8630/93, Brazilian port activities have undergone significant modifications, changing from public port service management to the landlord model. Act 12815/2013 enforced a new regulatory framework, increasing Port Authorities' dependence on the Federal Government. Since 2019, the Government has attempted to elaborate a Port Authorities' identity based on the private port governance model, inspired by the Australian and United Kingdom ones. This paper assesses Brazilian Port Authorities' management models from 1993 to 2020 and considers the Australian, United Kingdom and Antwerp port governance models as benchmarks.

Design/methodology/approach – This paper adopts a two-step methodological approach, namely a combined desk and field research approach, and considers three essential resources: government legislative acts and published data available online; ports' data and information issued by government agencies, academic papers and national and international ports' websites; and a semi-structured questionnaire survey targeting the leading associations representing port users, foreign trade and stevedoring companies.

Findings – The outcome shows that the solutions to overcome the existing Brazilian Port Authority governance problems remain in the Federal Government's hands: (1) removing its control through bureaucracy, (2) preventing party-political influence in the public ports and (3) decentralising port management through chief executive officers named by Port Authority Councils.

Research limitations/implications – This paper does not explore the regulatory frameworks underlying the "Lease Terminal" and the "Private User Terminal".

Originality/value – This paper assesses the management models that led Brazilian Port Authorities from 1993 to 2020, comparing them with the UK and Australian private service port models and the Antwerp landlord model.
Introduction
In Brazil, on February 23, 1993, Act 8630, the so-called 1993 Ports Act, entered into force and the Federal Government (FedGov) incorporated the landlord port authority governance model. This model allows port authorities to operate as economic catalysts by promoting investment, tax revenue, employment and trade volume and by increasing regional and national gross domestic product (Notteboom et al., 2021). They can act as development agencies by planning and promoting industrial activities and logistical operations linked to their port areas.

2. Literature review

At the beginning of this century, the main factors driving port industry changes were shipping technology, cargo-handling systems, storage equipment and management, to which should be added information and communication technology, port operations and the institutional environment (Notteboom and Winkelmans, 2001). Such a changeable atmosphere affects seaports' governance and management (Tovar and Wall, 2014; Zheng and Negenborn, 2014). In addition, larger vessels demand increasing seaport capacity to handle their cargo and to respond to the increasing demands on logistics systems. Under uncertain scenarios regarding global trade patterns, seaports must react to meet this uncertainty; therefore, they should apply agility to their operations (Paixão and Marlow, 2003).
In maritime transport, agility is associated with the concept of efficiency, towards high cargo-handling capacity with lower costs per unit handled (Lunkes et al., 2013). The World Bank Toolkit identified five port governance models (see Table 1) based on the public and private sectors' level of control and responsibility (World Bank, 2007). The landlord port governance model can be seen as sitting in the middle of the scale, characterised by a mixed public-private business model. It must have a long-term planning outlook and act as a regulatory body, following the country's transport/infrastructure ministry rules. Hence, it has been the most successful port governance model worldwide. According to the European Port Governance Report (European Sea Ports Organisation, 2010), the landlord function can be considered the main port governance model.
Besides, port authorities are responsible for their seaports' competitive, sustainable and safe development (Notteboom and Winkelmans, 2001). The literature explains that port authorities' strategic decisions are subject to the specific beliefs of executives, who change their perception across the industry over time; based on self-cognition, strategic decisions are primarily determined by executives' past experiences. Executives from outside the port industry, without conceptual knowledge of the sector, face difficulty in making decisions and taking the appropriate path. Unlike the shipping industry, port authorities work in an uncertain, double-derived demand environment (Paixão and Marlow, 2003) where many players interact, which explains why their strategic decisions must include political, economic, environmental and commercial interests. In a globalised business model, customers require close collaboration among business partners and port authorities to streamline their supply chains; therefore, port authorities must "calibrate" the public-private demands, adjusting their strategy (Van der Lugt et al., 2017).
Some studies on Brazilian port governance can be found in the body of the literature. Barros and Barros (2013) affirmed that low modal integration and the port infrastructure deficit hinder the advanced stage of the Brazilian ports. Galvão et al. (2013, 2017) stated that, in Brazil, the 1993 Ports Act had its effects on policy changes regarding the Government's strategic sectors during the 1990s and early 2000s. This paper contributes to the body of the literature because it builds on the work done so far.
Methodology
This paper adopts a two-step methodological approach, namely a combined desk and field research methodological approach. The first step uses a desk research methodological approach, namely an external desk research approach. An external desk research approach means that information is gathered outside the organisation's boundaries. This information can be published in soft or hard copies and gathered online or physically in different governmental agencies, respectively. However, it does not guarantee that all the necessary information is available.
The present research gathers information about port governance models in Brazil, Australia, the UK and Antwerp (Belgium). Information about Australian, British and Belgium port governance models is necessary since it will be used as a benchmark. Information is gathered from three sources. The first is the FedGov legislative acts and published data available online. These data concern port legislation and its regulatory framework to assess the market beyond the public and private agents' publication. The second one concerns ports' data and information issued by government agencies, academic papers and the national and international ports websites. Finally, the third one consists of a semi-structured questionnaire survey targeting the leading associations representing Brazilian port stakeholders. Section 4 presents the external desk research approach for Brazil, and Section 5 for Australia, the UK and Antwerp (Belgium).
The second step concerns a field research approach using a semi-structured survey questionnaire addressing two important issues. First, it addressed port authority governance under three scenarios drawn from the body of the literature (Tongzon, 1995; Peters, 2001; Notteboom and Winkelmans, 2001). The first scenario, public and centralised port management with managers named by the Federal Government, represents the current situation in Brazil. The second scenario, private and decentralised port management led by private corporations, is seen in some Australian Port Authorities and UK ports. Finally, the third scenario, public and decentralised port management conducted by managers named by the PAC, is found in the Port of Antwerp, whose principles led to the enactment of Act 8630/1993. Second, based on the existing literature, the questionnaire incorporated some critical key port governance competitiveness determinants (Tongzon, 1995; Peters, 2001; Notteboom and Winkelmans, 2001). They are: (1) operational efficiency of ports/terminals; (2) port infrastructure charges; (3) cargo-handling tariffs; (4) reliability in the port authority; (5) selection of port preference by shippers and carriers; (6) level of dredging of access channels and evolution basins; (7) contractual flexibility to adapt to changes in the market; (8) land accessibility; and (9) diversity of supply of services and products. A five-point Likert scale (1 = terrible; 2 = bad; 3 = reasonable; 4 = good; 5 = great) was used to classify these key determinants according to the perception of each governance scenario.
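As a hypothetical illustration of how such Likert responses might be aggregated per governance scenario, consider the following Python sketch; the determinant names are abbreviated and the scores shown are placeholders, not the survey's actual data.

```python
from statistics import mean

# One 1-5 Likert score per key determinant, per governance scenario.
# Determinant keys are abbreviated; the scores are illustrative only.
likert = {
    "public_centralised":    {"efficiency": 2, "charges": 2, "tariffs": 2, "reliability": 2},
    "private_decentralised": {"efficiency": 4, "charges": 1, "tariffs": 1, "reliability": 2},
    "public_decentralised":  {"efficiency": 4, "charges": 4, "tariffs": 4, "reliability": 5},
}

for scenario, scores in likert.items():
    avg = mean(scores.values())
    # Express the mean as a share of the maximum Likert score (5).
    print(f"{scenario}: mean={avg:.2f}, satisfaction={avg / 5:.0%}")
```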
This survey was sent to three leading port authorities' stakeholder associations, which represent the stevedoring companies, trade companies and shippers. The Brazilian Ports Terminals Association is a non-profit organisation representing over 100 stevedoring companies operating in leased areas within Port Authorities, accounting for more than 70% of the cargo handling in Brazilian maritime trade. The Brazilian Foreign Trade Association is a non-profit organisation representing Brazilian trade companies, automobile industries, stevedoring companies, the shipowners' association, machinery industries and consulting companies. Finally, the Brazilian Shippers Association is a non-profit organisation representing the interests of port users such as shippers, exporters, importers, general cargo port terminals, shipowners, brokers, carriers and freight forwarders.
This work discarded the possibility of conducting interviews with the chief executive officers (CEOs) of the leading Brazilian Port Authorities because these positions are FedGov-oriented, i.e., the Brazilian Government nominates the people who take responsibility for these positions. This is a political philosophy used by Portuguese-speaking countries, not only Brazil. Consequently, as these CEOs follow the Government's guidelines, this research could run the risk of being biased, thus undermining its impartiality. Finally, the survey questionnaire was sent to these representative associations via e-mail in August 2021, and their answers were received in the same month.
The Brazilian Port Authorities drama
Brazil has 37 public ports: the FedGov manages 19, and 18 are delegated to states or municipalities. In addition, it has 144 private terminals (TPUs) located outside the jurisdictional borders of the organised seaports' areas, which handle 70% of the country's bulk cargo (National Waterway Transportation Agency NWTA, 2021). In contrast, the leased terminals deal with the 30% of the cargo handled under Port Authority jurisdiction. Furthermore, bulk cargo terminals' throughputs are far less affected by human intervention or operational planning, since their operations are standardised and driven by automation systems. Regardless of their location or management standards, their business model remains the same.

Container terminals leased under the Port Authorities' jurisdiction handle 70% of the country's containerised cargo, whereas the TPUs deal with 30%. Moreover, container terminals' throughputs are far more affected by human intervention, with operational planning driving their operational costs to a greater degree than in bulk terminals. Hence, where container terminals are settled matters for their business model. For example, container terminals outside the Port Authority jurisdiction do not deal with dockers' unions, pay leasing fees according to their operational performance, or even make a down payment for a bid. On the other hand, they must construct their own infrastructure and superstructure and manage their maintenance.

Figures 1 and 2 show the operational response of Brazil's landlord port governance model. Figure 1 shows the combined container-handling throughput of leased terminals and TPUs. Figure 2 shows that TPUs doubled their TEU-handling market share in the last decade, soaring from 15% to 30% of the market for box handling. Their main gain occurred between 2010 and 2015, when the TPUs' market share grew 73%, contrasting with 19% growth between 2015 and 2020, resulting in a smooth growth trend.
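As a check on the arithmetic, the growth figures quoted above are relative changes in market share; a minimal Python sketch follows, in which the intermediate ~26% share for 2015 is inferred from the stated growth rates rather than reported directly.

```python
def share_growth(start_share: float, end_share: float) -> float:
    """Relative growth of a market share between two points in time."""
    return (end_share - start_share) / start_share

# Shares consistent with the text: TPUs held ~15% of TEU handling in 2010
# and ~30% in 2020; the ~26% value for 2015 is an inferred intermediate.
print(f"2010-2015: {share_growth(0.15, 0.26):+.0%}")  # ~ +73%
print(f"2015-2020: {share_growth(0.26, 0.31):+.0%}")  # ~ +19%
```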
Between the enactment of Act 8630/1993 and the end of 2002, the FedGov granted more than 140 lease contracts (CAU, 2020). Similarly, the figures are still not encouraging seven years after Act 12815/2013, with the Federal Government dismantling an entire regulatory framework that was still progressing towards full implementation, arguing that adjustments were needed to increase investments in the sector. Figure 3 shows the TPUs' cargo-handling gain just after the enactment of Act 12815/2013. However, this comparison does not imply cargo loss by the leased terminals, since twenty-foot equivalent unit (TEU) shipments were steady during most of the decade; a slight advantage occurred for the TPUs between 2016 and 2017. Only when the 2015/2016 crisis, derived from the country's economic and political turmoil and resulting in the president's impeachment, was overcome did the container volume handled by private and leased terminals normalise. The overall scenario suggests that the TPUs increased their cargo shipment market share (ton/TEU) due to regulatory advantage rather than operational and/or cost performance.
Furthermore, although the Brazilian port throughput soared after 1993, the FedGov could not overcome the substantial party-political influence on Port Authorities' control, even after the new Act 12815/2013. Besides, the Government's excessive power and several bureaucratic layers derived from that legislation slowed the ports' lease process. According to the NAOR (CAU, 2020), port leases have taken around 28 months in public ports.
Data released in late 2019 by the NWTA reveal that fewer than 12 of the 168 TPUs currently authorised have applied for an International Traffic Permit. Most of those TPUs obtained their authorisation to support offshore operations and/or to operate as sea terminals in coastal shipping, mainly handling their own cargo, generally raw material, in processes of industrial verticalisation. In addition, less than 8% of authorised port facilities have the features of a multi-purpose, small-size terminal, in comparison to the large, specialised terminals under lease in the jurisdiction of the Port Authorities.
Since 2019, the Ministry of Infrastructure (MinInf) has been trying to give Port Authorities more administrative independence through several ordinances to accelerate the assessment of investments in public port infrastructure. By focusing on adding value to port assets in view of the privatisation process, the Government has been working hard to reduce Port Authorities' dependence on the FedGov, at least concerning port project issues.
Under a pandemic atmosphere (COVID-19), on 8 August 2020, the FedGov enacted Act 14047/2020, containing various decentralising measures regarding the commercial and management relations among organised ports, their concessionaires and third parties. Like a legal umbrella, Act 14047/2020 covers several legal aspects targeting operational leased contracts, including those for port facilities. The FedGov removed some mandatory provisions from the classic leased contracts, such as the required methods and practices for carrying out port terminal activities, the return of assets at the end of the contract, and the adoption of and compliance with customs supervision measures for goods, vehicles and persons. Leased contracts will be considered under the commercial rules applicable to private companies, although this will not exclude supervision by the NWTA. Apart from the FedGov control, the MinInf seems to be inducing a commercially positive context towards Port Authorities' privatisation.
5. Port management experiences in Australia, United Kingdom and Belgium

5.1 The Australian private service port experience

Australia has over 70 ports spread all over its territory, with a mixed model granting part of its port activities to private groups, consortia or business associations, which operate under three main regulatory frameworks: (1) the State-Owned Corporations Act 1989, covering the landlord port governance model.
(2016) in the VIC Territory. However, unlike the classic Port Authority landlord governance model, in the Australian private service port model, the maintenance of waterway access to the ports is not the responsibility of the government authorities. Local port authorities must contract dredging and beaconing services with the Central Government's approval and in alignment with the national dredging plan. When the private service port model started in 2010, it called for a pricing regime and port services regulation, and the Australian Government enacted several ports management acts in 2015. Their objective was to grant the private service ports a regulatory framework to guarantee safe, efficient and effective management under the Ports and Maritime Administration Act 1995. Thus, in 2015, that body of legislation established the operational rules and pricing regime for some ports designated by the Australian Government. To comply with the 2015 Ports Management Acts, the Government named an independent statutory body under the Utilities Commission Act 2000, which would be responsible for overseeing ports' access and the prices of services prescribed and provided by their private port authorities. The Australian Government's experience showed that it would be necessary to enact a new act, the Port Law Amendment Act 2020, to assess the effectiveness of the pricing regime established by the 2015 Ports Management Acts.
Although Australian port privatisation has had short-term positive impacts on the State Governments' balance sheets, it may result in risky undervaluation of port assets, increased port charges, impeded port competition, less port investment and less concern for the public interest in the long term (Chen et al., 2017). The Australian model has so far demonstrated the brutal demand for short-term financial returns to which port operators must submit under these private consortia, which seek to meet their shareholders' goals.
The Australian Competition and Consumer Commission (ACCC) has appealed to the Federal Court on several occasions against NSW Ports, arguing "anti-competitive" and "illegal" acts. According to ACCC Reports (2018-2020) and several Australian Federal Court proceedings, the aim is to remove cartels' barriers to competition in supply and port services, with implications for the cost of goods across the Australian economy, a cost that consumers ultimately bear. However, there has been no success, due to the model contracts signed in the privatisation processes and the limited power of the Australian Government to interfere in the private sector. The Port of Melbourne has experienced hefty infrastructure surcharges following increases in the price of several services since the Victorian Government leased the port in 2016 (ACCC, 2020). The Port of Melbourne's users face continually increasing fees charged by stevedoring companies, prompting them to urge greater regulation to avoid damage to the Victorian economy (Victorian Transport Association, 2020). Higher rent, energy and other costs are the main arguments used to justify the serial infrastructure surcharges at Melbourne's port and in further private port services in Australia.
5.2 The UK port privatisation experience
There are about 120 commercial ports in the UK (Maritime, 2021). These include major all-purpose ports, such as London and Liverpool; ferry ports, such as Dover; specialised container ports, such as Felixstowe; and ports catering for bulk traffic, such as coal or oil. Many smaller ports cater to local traffic or specialise in fishing or leisure boating (Maritime, 2021). However, most UK cargo traffic runs through a relatively small percentage of the commercial ports, with the top 20 ports accounting for 88% of the total cargo handled. The UK terminal representative organisations are similar to the Brazilian TPUs' representative organisations. In Brazil, the Association of Private Port Terminals alone, representing the interests of 29 large, fully private port terminals, brings together 56 TPUs. In the UK, two associations represent the country's ports' interests: the British Ports Association, holding around 107 ports, and The UK Major Ports Group Ltd (UKMPG), holding around 42 ports operated by nine UKMPG members.
In the late 1980s and early 1990s, the UK privatised its largest ports, while minor ports remained in the hands of independent public trusts or municipal authorities. This privatisation abolished the National Dock Labour Scheme (NDLS), ended the frequent periods of port disruption that threatened the country's maritime trade and removed restrictive and archaic employment regulations. Hence, there are three main models of port management in the UK, namely "Private Ownership", "Trust" and "Local Authority Owned Ports". Private Ownership ports are owned, run and invested in commercially by international groups or private companies.
The so-called British port privatisation was all about "ports on sale", which meant selling state-owned port assets and railway ports in the early 1980s rather than planning a new governance approach to improve port management, infrastructure and facilities. It aimed to remove public ownership and its accountability from government rule, including its regulatory command. Conceptually speaking, there is no classic landlord port governance model in the UK as in continental Europe. Hence, the UK port reform was a unique port privatisation program worldwide. Taking advantage of the lack of a port policy to redirect the outcome, the financial markets drove the British port industry into heavy consolidation, imposing monopolistic practices (Brooks and Pallis, 2012). The Department for Transport shows that UK port tonnage is mainly driven by movements in the English ports, which made up 70% of cargo handled in 2019. When the remaining major UK ports are considered, their market share soars to 98% of the total cargo handled in the UK.

However, some pitfalls are worth mentioning. The World Bank (2007) claimed that one of the main structural problems of the UK's port system, mainly among Trust Ports, was the composition of their boards. They tended to be strongly made up of port users' representatives, who were by nature reluctant to authorise tariff increases sufficient to generate the revenues needed to allow for depreciation and subsequent reinvestment in port facilities. In addition, there was general concern that Britain might not have enough port capacity, as private investment lagged well behind trade growth (Baird and Valentine, 2006). A focus on short-term returns weakens corporations' long-term perspective, reducing their ability to grow by reducing the accumulated profits that could be used as investments to generate long-term value. In this scenario, it would not be difficult to understand the Chinese State's shareholder control policy over all its global corporations.

Baird and Valentine (2006) identified that port privatisation was used to undo many Second World War nationalisations. The UK's reform was as much about reversing this hallmark, seen as a socialist philosophy, as it was about selling companies that, under capitalist philosophy, should not belong in public hands (Baird and Valentine, 2006). They were defective from the public/taxpayer interest view and, for a long time, there was less investment than could have occurred (Saundry and Turnbull, 1997; Baird, 2000). As expected, the investments only came along in response to maritime trade growth, as private investors began to acquire port infrastructure holdings. There has been considerable investment in new UK port facilities in recent years, with port owners racing to cope with demand.
5.3 The Port of Antwerp's current governance overview
The Port of Antwerp (PoA) was the FedGov's inspiration almost 30 years ago for establishing its landlord port governance model. The Port of Antwerp was initially a city port, but it moved its operations away from the city centre over time. Since the Second World War, the PoA has been acting under the landlord port governance model principles, delineated by a combined public and private orientation. However, its strategy began taking a new design in 1997 when, at the request of local port community players such as port operators, industrial companies and logistics companies, the Belgian Government recognised that the Port needed to adopt more economic principles. To deepen its long-term planning and face the new global challenges, it would be necessary to change the Municipal Council model, renewed every six years, since the investment policy depended upon the elections' results. Hence, Antwerp's Port became a self-governing municipal body with its own governance rules, run by port and maritime professionals and reporting to a Port Municipal Council comprising 18 councillors, 17 of whom were elected politicians, each representing different port community economic and social areas, and one representing private initiative.
Later, in 2016, the PoA refined the model and moved to the status of a limited liability public company. Its board of directors comprises private sector and government authority representatives, ensuring democratic control of the Port, supervised by independent auditors. The board of directors comprises six elected politicians representing the different port community economic and social sectors and six representatives of private initiative, chosen by private companies' CEOs from outside the PoA to avoid any possible conflict of interest. The board of directors elects the PoA president (CEO). The new CEO accompanies the previous one for one year; the outgoing president retains full authority over the port until the elected president definitively takes up the mandate. The daily administration is carried out by an Executive Committee, whose members are also appointed by the board of directors, and the CEO chairs this Executive Committee. Finally, the PoA continues to work with the Belgian Government and the port community to implement the Port's strategic planning and generate value for the region and the country.
In 2019, Antwerp's cargo volume was more than 240 million tonnes, making the PoA by far the largest Belgian port and the second-largest European port, ranking 14th among the 20 largest container ports in the world. Regarding container handling, it handled more than 11 million TEU, corresponding to more than 130 million tons of cargo. Acting as a global player, the PoA created a subsidiary, Port of Antwerp International, to invest in ports abroad and in port-related projects in strategic regions, such as Brazil, seeking horizontal integration.
6. Unbalance of the Brazilian port governance models

During the second decade of the 21st century, there was an overwhelming feeling of constant hunger for "revolutions" rather than "evolution" in port authorities' governance. What is perceivable is something symptomatic, i.e., incomplete and misunderstood actions in the port industry, creating exotic management derivatives, mainly in federal public ports, that contrast with global reference models. The Brazilian port authorities have been experimenting, by trial and error, with different governance efforts without removing the main factors that undermine their management. Moreover, even though the throughput of terminals run by stevedoring companies stands on equal terms with the sector's international best practice, the Brazilian experience goes against the prevailing international practice (CAU, 2020). In most reference countries, stevedoring companies look for leased areas within port authorities under the landlord model. On the contrary, in Brazil, most investors prefer to install themselves outside the organised ports (CAU, 2020).
The current FedGov plans to move into the private service port model indicate a port reform supported by ideology rather than operational needs, as was the case of the port reform in the UK, which urgently needed to abolish the NDLS. In Brazil, the FedGov's excessive control resulted in strong party-political influence in port management, and bureaucratic overload undermined the role to be played by port authorities. However, according to the Brazilian MinInf (Portogente, 2020), the main objective of the Brazilian port reform program towards the private service port is to develop the appropriate port complex, based on efficiency and timely investment, and to strengthen port integration in communities and cities. Hereafter, a question arises: what is the perception of the directly impacted port community about the issue? The answer to this question is given in Figures 4-6 and Table 2, which present the survey outcome.

Figure 4 shows port users' competitiveness perception regarding port authority performance by governance model. According to them, operational efficiency, infrastructure charges, cargo-handling tariffs, reliability, port preference and the level of dredging would be more reliable in the public and decentralised governance model. This outcome contrasts with the private and decentralised governance model, which would be considered terrible regarding operational efficiency, infrastructure charges, cargo-handling tariffs, reliability and land accessibility. On the other hand, the private and decentralised governance model would be considered reasonable regarding port preference, level of dredging and diversity of services. According to the port users' perception, the only key determinant that would be better in the private and decentralised governance model is contractual flexibility. Regarding the current model (public and centralised port authority), the only competitiveness aspect considered excellent is port preference, aligned with the public and decentralised governance model. The remaining key determinants are classified as bad or terrible.

Figure 5 shows the stevedoring companies' competitiveness perception regarding port authority performance by governance model. According to them, operational efficiency, infrastructure charges, cargo-handling tariffs and reliability would be better in the public and decentralised governance model than in the private and decentralised governance model, the latter being classified, respectively, as reasonable and good for reliability and operational efficiency. The private and decentralised governance model would be considered inadequate by the stevedoring companies regarding infrastructure charges and cargo-handling tariffs. On the other hand, the private and decentralised governance model would be considered as good as the public and decentralised governance model regarding port preference, level of dredging and land accessibility. However, the private and decentralised governance model would be considered better than the public and decentralised governance model regarding contractual flexibility and diversity of services; these last two are therefore perceived as great by the stevedoring companies. Figure 6 shows trade companies' (importers and exporters) competitiveness perception regarding port authority performance by governance model.
According to them, infrastructure charges, cargo-handling tariffs, reliability, port preference, contractual flexibility and diversity of services would be better in the public and decentralised governance model than in the private and decentralised governance model. On the other hand, the private and decentralised governance model would be considered as good as the public and decentralised governance model regarding port preference, level of dredging and land accessibility. However, the private and decentralised governance model would be considered better than the public and decentralised governance model regarding operational efficiency. Concerning reliability and port preference, the private and decentralised governance model would be considered better than the public and decentralised governance model. For the trade companies, the performance of the current public and centralised model falls below both the private and decentralised governance model and the public and decentralised governance model, aligned with the first one only regarding reliability and port preference, with both falling below the public and decentralised governance model.

Table 2 addresses the port performance perception. According to those main stakeholders, it shows the average perception of port authority performance by governance model. The overall performance perception of the current port authority governance model, that is, public, with centralised port management and managers named by the Federal Government, is considered subpar, regardless of the freedom of shippers and carriers to select their preferred port. On the other hand, according to the respondents, compared with the current model, private and decentralised port management led by private corporations is expected to have good performance, slightly better than the current model. However, the private and decentralised model raises a vital concern: key competitiveness points, namely infrastructure charges, cargo-handling tariffs and reliability in the Port Authority, would be worse than they are nowadays. On the other hand, the contractual flexibility to adapt to changes in the market is considered a tremendously positive point of such a governance model. Finally, compared with the current model, according to the respondents, public and decentralised port management would be expected to deliver far better overall performance than either the current model or private and decentralised port management led by private corporations.

The percentage shows the general perception weight regarding port authority performance, levelled to each governance model. That weight of general perception is calculated as the average of the satisfaction with each port authority governance model, compared across the levels of perception given by each of the associations. A weight of 5 is the most outstanding performance, i.e., a 100% satisfaction perception. In Table 2, a particular highlight of significant performance is the reliability of the public and decentralised Port Authority, which would be rated at 100%.
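A minimal sketch of this weight-to-percentage arithmetic follows, assuming illustrative association scores rather than the paper's actual data.

```python
# Illustrative 1-5 perception scores from the three associations for a
# single governance model (placeholders, not the paper's data).
association_scores = {"port_users": 4, "stevedores": 5, "trade_companies": 5}

weight = sum(association_scores.values()) / len(association_scores)
satisfaction_pct = weight / 5 * 100  # a weight of 5 maps to 100% satisfaction
print(f"general perception weight: {weight:.2f} -> {satisfaction_pct:.0f}%")
```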
The Brazilian port community, represented by the leading associations of port users, trade companies and port operators, shows that a fully private model led by private corporations would not bring reliability to the port sector. According to them, public and decentralised port management, conducted by managers representing the primary stakeholders, would be a better model, bringing full reliability to the governance system. Table 3 summarises the conclusions about the difficulties that undermine the Brazilian Port Authorities' effectiveness and the appropriate countermeasures toward efficiency, flexibility and reliability.
Overall, this research infers that the Belgian model, representing the public port with decentralised governance and managers appointed by the PAC, conveys an excellent competitiveness perception among the respondents. However, private service ports similar to the Australian model, with private and decentralised port management led by private corporations, do not convey a good perception concerning competitiveness issues. Moreover, the UK model does not apply as a comparable situation, because, under the port privatisation model in force, the traditional concept of a Port Authority does not exist in practice in the UK. If a comparison were made, it could be said that the UK port system most resembles the TPU system in Brazil: both are responsible for carrying out all the investments, monitoring and operations, as presented in Table 1.
Conclusions
Since 2013, the Brazilian Government has created business environments for its port authorities that were not recommended in the literature; facing the consequences, it now strives retrospectively to mitigate the problem through regulation, mainly through normative resolutions and ministerial ordinances in a long-term spiral of amendments.
The implications of the Government's privatisation agenda bring a list of uncertainties regarding the proposed modelling and its attractiveness to private investors. Concerns on the project's financial side, such as risk valuation, the calibration of the rate of return and the freedom to serve the logistics chain with the greatest profitability, must be discussed in all projects, supporting the classic corporate finance and private governance view. On the other side, concerns about the project's economic outlook must meet the National Master Plan and the local Port Development and Zoning Plan. In addition, they must address the extent of tariff freedom and the expansion projects driven by private or public policies. How to calibrate the scales to meet the needs and correctly balance all factors is a complex equation.
Hence, this research finds that the remedy to overcome the main problems in Brazilian Port Authority governance is in the Federal Government's hands: removing the heavy bureaucracy, preventing the usual party-political influence and decentralising port management. Thus, a comprehensive port governance improvement based on the evolution of the landlord port model seems more aligned with the outstanding port management quality found worldwide than what a private service port could provide in the Brazilian case.
Barriers | Reasons | Countermeasures
Heavy bureaucracy | Excessive central government control leads to accountability overload | To enact a new regulatory framework to facilitate port business based on modern compliance practices
Party-political influence | The choice of CEOs from outside the industry with a party-political focus leads to inappropriate decisions | CEOs chosen by the primary stakeholders, including industry associations, trades, exporters, importers, shippers, workers and local authorities
Governance model | Port legislation demands central government approval of port authorities' projects and their day-by-day governance | To enact a new National Port Act driving port authority decentralisation, with clear restrictions on party-political management
Note(s): Drawn by authors based on the findings of this work | 2022-01-19T16:05:24.153Z | 2022-01-18T00:00:00.000 | {
"year": 2022,
"sha1": "733e3ac74511d9c18bceb54c96fb7db66edac076",
"oa_license": null,
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/MABR-03-2021-0026/full/pdf?title=assessment-of-port-governance-model-evidence-from-the-brazilian-ports",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eab6a5c05e4a897d592b929ccdae62322a2f64dc",
"s2fieldsofstudy": [
"Business",
"Political Science"
],
"extfieldsofstudy": []
} |
54652606 | pes2o/s2orc | v3-fos-license | ASSESSMENT OF SLOPE STABILITY USING PS-INSAR TECHNIQUE
In this research work, the PS-InSAR approach is envisaged to monitor the slope stability of landslide-prone areas in the Nainital and Tehri regions of Uttarakhand, India. For the proposed work, the Stanford Method for Persistent Scatterers (StaMPS) based PS-InSAR is used for processing ENVISAT ASAR C-band data stacks of the study areas, which results in time-series 1D line-of-sight (LOS) maps of surface displacement. StaMPS efficiently extracted the PS pixels on the unstable slopes in both areas, and the time-series 1D-LOS displacement maps of the PS pixels indicate that areas in the Nainital and Tehri regions have measurement pixels with maximum displacements away from the satellite of the order of 22 mm/year and 17.6 mm/year, respectively. * Corresponding author (ramiitk07@gmail.com)
INTRODUCTION
Synthetic aperture radar (SAR) is an advanced technology of the radar community. It is an active microwave remote sensing mechanism capable of monitoring geophysical parameters (e.g. displacement vectors) of earth features. It provides data in various formats which can be easily absorbed by researchers as well as by industry. Interferometric SAR (InSAR) uses two or more SAR images at different acquisition times to generate interferograms and digital elevation models (DEM), and thereby determine the change in the position of resolution cells in the satellite line of sight (LOS). It is widely used for deformation monitoring purposes but is severely affected by errors such as temporal and spatial decorrelation.
In order to overcome the aforementioned limitations of InSAR, permanent scatterer InSAR (PS-InSAR) was conceived and developed by Ferretti et al. (2000). PS-InSAR identifies measurement pixels known as permanent scatterers (PS) with stable amplitude and phase history over a long interval of time. Examples of such PS candidates are man-made objects such as buildings, roofs, etc. Although PS-InSAR is more accurate and consistent, it too suffers from some limitations, i.e. low PS density in non-urban areas and the one-dimensional representation of the 1D-LOS (Greif and Vlcko, 2013).
In order to remove the aforesaid limitations, the Stanford Method of Persistent Scatterers (StaMPS) approach was conceptualized and developed by Hooper et al. (2007). The StaMPS-based PS-InSAR method uses the spatial correlation of interferogram phase to identify phase-stable pixels even with low amplitude stability, which makes the approach capable of detecting PS pixels in non-urban areas.
In this research work, we have applied StaMPS-based PS-InSAR processing for assessing the slope stability in the Nainital and Tehri regions of Uttarakhand, India.
STUDY AREA AND SATELLITE DATASET
Landslide is one of the most threatening geo-hazards of the Himalaya, causing colossal damage to infrastructure and the livelihood of common people. Therefore, two Himalayan towns of Uttarakhand, India, Nainital and Tehri, are under our scanner in this research work.
Geological settings of study areas
Nainital is a popular hill station in the state of Uttarakhand at the Kumaon foothills of the lesser Himalayas. The Nainital township is situated in a valley containing a kidney-shaped lake at an altitude of 2,084 metres above sea level and is surrounded by mountains. The town has experienced disastrous landslide events in 1867, 1880, 1893, 1898, 1924 and 1998 (Sharma, 2006). Tehri town is situated near the Tehri reservoir at the confluence of the Bhagirathi and Bhilangana rivers at an altitude of 1,750 m (5,740 ft). In the past, many casualties have been reported because of the multiple landslide events induced by the Tehri reservoir. Also, in December 2010, debris from a landslip blocked a diversion tunnel, forcing a stop to the generation of electricity; this caused a loss of more than Rs 100 crore. Due to the aforesaid facts, it has become an area of interest for this slope stability study.
Satellite Dataset of Nainital and Tehri
In order to monitor the critical slopes around Nainital lake, we have processed 13 descending ENVISAT C-band ASAR images of track 248 (frame: 3015) acquired between October 2008 and August 2010. Similarly, to monitor the critical slopes around the Tehri reservoir, 16 ENVISAT ASAR C-band SLC images of track 291 (frame: 83) acquired between January 2009 and July 2010 are used. The ASAR images of 25 December 2009 and 9 October 2009 are chosen as the master images for the Nainital and Tehri regions, respectively, based on minimizing the temporal, Doppler and perpendicular baselines (B⊥) (Hooper et al., 2007). The aforementioned datasets are presented in Tables 1 and 2 along with the perpendicular baseline length with respect to the master image, acquisition date, orbit number and Doppler centroid frequency. Apart from the SLC images, a 90 m resolution SRTM Digital Elevation Model (DEM) (Figure 1) is used to remove the topographic phase from the differential interferograms. The orbital corrections are done with the help of precise orbits obtained from ESA; the DORIS precise orbits for the years 2008, 2009 and 2010 are used.
Interferogram Generation
The interferogram generation in StaMPS is done using a single master image and is based on the maximization of the correlation amongst the set of images used for processing. The total correlation is modelled as a product of several terms, as stated in the following equation by Hooper et al. (2007):

$\rho_{total} = \rho_{temporal}\,\rho_{spatial}\,\rho_{doppler}\,\rho_{thermal} \approx \left[1 - f\left(\frac{T}{T^{c}}\right)\right]\left[1 - f\left(\frac{B_{\perp}}{B_{\perp}^{c}}\right)\right]\left[1 - f\left(\frac{F_{DC}}{F_{DC}^{c}}\right)\right]\rho_{thermal},$

with $f(x) = x$ for $x \le 1$ and $f(x) = 1$ otherwise, where $\rho_{temporal}$, $\rho_{spatial}$, $\rho_{doppler}$ and $\rho_{thermal}$ denote the temporal correlation, the spatial correlation, the correlation in Doppler centroid frequency and the correlation in thermal noise, respectively; $T$ and $T^{c}$ are the temporal baseline and the critical temporal baseline, $B_{\perp}$ and $B_{\perp}^{c}$ are the perpendicular baseline and its critical value, and $F_{DC}$ and $F_{DC}^{c}$ are the values of the Doppler centroid frequency and its critical value, respectively. Based on this, a master image is chosen, and each of the slave images is then co-registered with the selected master image and resampled to the grid of the master image. The slave images undergo complex multiplication with the master image to produce a set of interferograms. The topographic correction is done using an external DEM to convert the interferograms into differential interferograms, which are suitable for PS processing. The differential interferograms are input to the next step.
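As a rough illustration of the master-selection step, the sketch below (Python/NumPy) evaluates the total-correlation model for each candidate master. The critical baseline values and the toy acquisition dates are assumptions for demonstration, not the parameters of the ENVISAT stacks used here:

```python
import numpy as np

# Assumed critical baselines (days, metres, Hz); illustrative only.
T_C, B_C, F_C = 1825.0, 1100.0, 1380.0

def f(x):
    # Ratios beyond the critical value contribute zero correlation.
    return np.minimum(np.abs(x), 1.0)

def rho_total(dT, dB, dF, rho_thermal=1.0):
    return (1 - f(dT / T_C)) * (1 - f(dB / B_C)) * (1 - f(dF / F_C)) * rho_thermal

def choose_master(dT, dB, dF):
    # dT, dB, dF: (N, N) pairwise baseline matrices; the master is the
    # acquisition whose row gives the largest summed total correlation.
    return int(np.argmax(rho_total(dT, dB, dF).sum(axis=1)))

days = np.array([0.0, 120.0, 400.0, 650.0])  # toy acquisition times
dT = np.abs(days[:, None] - days[None, :])
dB = dF = np.zeros_like(dT)                  # ignore B and F in the toy case
print("master index:", choose_master(dT, dB, dF))
```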
Phase stability estimation
The pixels are initially selected on the basis of amplitude stability: those pixels whose amplitude dispersion index $D_A$ (the ratio of the standard deviation $\sigma_A$ to the mean $\mu_A$ of the amplitude values, $D_A = \sigma_A/\mu_A$) is within the threshold are selected as initial PS candidates.
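A minimal sketch of this candidate-selection step, assuming a stack of co-registered amplitude images stored as a NumPy array (the toy data below are synthetic):

```python
import numpy as np

def ps_candidates(amp, threshold=0.35):
    # amp: (N_images, rows, cols) amplitude stack; returns a boolean mask
    # where the amplitude dispersion D_A = sigma_A / mu_A is below threshold.
    d_a = amp.std(axis=0) / amp.mean(axis=0)
    return d_a <= threshold

amp = np.random.rayleigh(scale=1.0, size=(13, 100, 100))  # toy SLC amplitudes
print(ps_candidates(amp).sum(), "initial PS candidates")
```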
The candidates are tested for phase stability using a measure $X$ stated in the following equation by Hooper et al. (2007):

$X = \frac{1}{N}\left|\sum_{i=1}^{N}\exp\left\{j\left(\psi_{x,i} - \bar{\psi}_{x,i} - \Delta\hat{\phi}^{u}_{\theta,x,i}\right)\right\}\right|,$

where $N$ is the number of interferograms, $\psi_{x,i}$ is the wrapped phase of the $x$th pixel in the $i$th interferogram, $\bar{\psi}_{x,i}$ is the estimated mean (spatially correlated) value of the phase, and $\Delta\hat{\phi}^{u}_{\theta,x,i}$ is the spatially uncorrelated part of the look angle error for the $x$th pixel in the $i$th interferogram.
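For intuition, a toy computation of this phase-stability measure for a single pixel is sketched below (Python); the residual-phase inputs are synthetic:

```python
import numpy as np

def phase_stability(psi, psi_bar, dphi_theta):
    # Residual phase after removing the spatially correlated estimate and
    # the look-angle error; tightly clustered residuals drive X toward 1.
    resid = np.asarray(psi) - psi_bar - dphi_theta
    return np.abs(np.exp(1j * resid).mean())

rng = np.random.default_rng(0)
stable = 0.2 * rng.standard_normal(12)    # PS-like pixel: small residuals
noisy = rng.uniform(-np.pi, np.pi, 12)    # decorrelated pixel
print(phase_stability(stable, 0.0, 0.0))  # close to 1
print(phase_stability(noisy, 0.0, 0.0))   # small value, candidate rejected
```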
PS Detection
The pixels that satisfy the convergence of $X$ to the threshold value are picked as PS pixels. The selected PS pixels contain a wrapped phase value which must be unwrapped, i.e. an estimated integer number of $2\pi$ phase cycles must be added to retrieve the original phase value, a process known as phase unwrapping. Other nuisance terms, such as the master and atmospheric error terms, the spatially uncorrelated look angle error and the satellite orbit errors, are also estimated and removed from the unwrapped phase of the detected PS pixels.
Displacement Estimation
The displacement can then be estimated using the phase values of the individual PS pixels. A 1D line-of-sight (LOS) displacement map is generated as the output of the StaMPS method.
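A sketch of this last conversion, assuming the usual two-way path relation for the ENVISAT ASAR C-band wavelength of about 5.6 cm (sign conventions vary between processors):

```python
import numpy as np

WAVELENGTH = 0.056  # metres, ENVISAT ASAR C-band

def los_displacement_mm(phi_unwrapped):
    # d_LOS = -(lambda / (4*pi)) * phi; under this convention a positive
    # unwrapped phase maps to motion away from the satellite (negative d).
    return -WAVELENGTH / (4 * np.pi) * np.asarray(phi_unwrapped) * 1000.0

print(los_displacement_mm([0.0, 2.5, 4.9]))  # mm; negative = away from sensor
```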
Time series displacement analysis of Nainital
In this section, the results of StaMPS-based PS-InSAR processing of the 13 ENVISAT ASAR SLC images of Nainital and its surrounding area are presented. With the 13 SLC images, 12 geocoded interferograms are generated, as shown in Figure 2.
The parameters used for the StaMPS-based PS-InSAR processing are shown in Table 3. Initially, more than 100,000 PS candidates were selected using a $D_A$ value of 0.35, with the area divided into 6 patches, and finally 5,606 pixels were detected as PS pixels. The processing resulted in the generation of the time-series displacement of the PS pixels in the satellite LOS, as shown in Figure 3 (the X and Y axes represent longitude and latitude, respectively), with cold and warm colours representing movements towards and away from the satellite, respectively.
CONCLUSION
We have successfully applied StaMPS-based PS-InSAR processing in the Nainital and Tehri regions of Uttarakhand, India. The time-series displacement plots clearly show that there are various patches of area having significant displacement away from the satellite LOS, which are probable unstable zones. We also conclude that the results achieved with the PS-InSAR approach can be a valuable input to a comprehensive assessment of slope stability if supplemented with other sub-surface investigations. In the future, a thorough field survey is essentially required to comment on the present status of these unstable zones. Further, if the number of ASAR images in the satellite data stacks is increased, a more precise estimation of the deformation pattern can be obtained. StaMPS-based Small Baseline Subset (SBAS) processing can also be investigated for this study.
Figure 1. Study area location.
Figure 2. Interferograms for the Nainital region. The image acquired on 25th December 2009 is chosen as the master image.
4.2 Time series displacement analysis of Tehri
In this section, the results of StaMPS-based PS-InSAR processing of the 16 ENVISAT ASAR SLC images of the Tehri region are presented. With the 16 SLC images, 15 geocoded interferograms are generated, as shown in Figure 4. The parameters used for the StaMPS-based PS-InSAR processing are shown in Table 4. Initially, more than 1,250,000 PS candidates were selected based on a $D_A$ value of 0.45, and 19,549 PS pixels were detected. The processing resulted in the generation of the time-series displacement plot shown in Figure 5.
Figure 4. Interferograms for the Tehri region. The image acquired on 19th October 2009 is chosen as the master image.
Table 4. Parameters used for the StaMPS-based PS-InSAR processing of the Tehri region: Number of SLC images (N): 16; Pixel grid size: 50; Amplitude dispersion threshold (D_A): 0.45; Rate of convergence (X).
The PS-InSAR method developed by Ferretti et al. (2000), and modified by Hanssen (2003), Lyons and Sandwell (2005) and Crossetto et al. (2005), succeeded in finding PSs in urban areas and required a minimum of 15-20 interferograms to obtain a time series of deformation for each detected PS pixel. The StaMPS method, introduced by Hooper et al. (2007), came as an improvement over the above-mentioned methods in the sense that it is capable of finding PS pixels in urban as well as non-urban areas and that a smaller number of interferograms is sufficient to map the surface displacement. The method involves four major steps, namely interferogram generation, phase stability estimation, PS detection and displacement estimation. | 2018-12-12T05:04:37.548Z | 2014-11-27T00:00:00.000 | {
"year": 2014,
"sha1": "fcfa15c3020e9e65476d23b243c047c0a3959cc8",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-8/35/2014/isprsarchives-XL-8-35-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fcfa15c3020e9e65476d23b243c047c0a3959cc8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
248836364 | pes2o/s2orc | v3-fos-license | Emotion Regulation Strategies and Sense of Life Meaning: The Chain-Mediating Role of Gratitude and Subjective Wellbeing
This study aimed to explore the mechanism of college students' sense of life meaning. The Emotion Regulation Questionnaire, the Gratitude Questionnaire Six-Item Form, the General Wellbeing Schedule, and the Meaning in Life Questionnaire were used as measurement instruments. In total, 1,312 valid responses were obtained. The results showed that the cognitive reappraisal and expression suppression strategies were significantly positively and negatively correlated, respectively, with gratitude, subjective wellbeing, and the sense of life meaning. Further, emotion regulation strategies can affect college students' sense of life meaning through three paths: the mediating effect of gratitude; the mediating effect of subjective wellbeing; and the chain-mediating effect of gratitude and subjective wellbeing. This study illuminated the roles of gratitude and subjective wellbeing in influencing the sense of life meaning among Chinese college students. Limitations and future research directions are discussed.
INTRODUCTION
College students encounter important life issues such as professional education, adaptation, making friends, and choosing careers. However, because their minds are still developing, they are prone to psychological conflicts, leading to great mental pressure and loss of confidence in life. Many students experience the meaning of life incompletely, and some of them even experience existential crises and extreme suicidal ideation. Psychological problems and suicidal behaviors of college students originate from the lack of a sense of life meaning, which is an indicator of mental health (Kleiman et al., 2013a). Therefore, it is very important to explore the sources of the sense of life meaning and the ways and measures to enhance it.
Emotion is the core driving force of life that can sustain growth (Campos et al., 1989). A good mood can promote the development of physical and mental health. Emotional dysfunction occurs when an individual's emotions are uncoordinated with their life situation. The Chinese National Mental Health Development Report (2019-2020) noted that 8.4 and 18.5% of college students have a tendency to suffer from depression and anxiety, respectively; moreover, 4.2% are at high risk of depression (Fu and Zhang, 2021). Positive emotions help increase the perception of meaning in life (Fredrickson et al., 2000; Hicks et al., 2012). Students need to adjust their emotions to avoid experiencing negative ones in order to achieve enhanced life adaptation. Emotion regulation is the internal key mechanism of individual development. Individuals use different strategies to control how emotions occur. It is the process of monitoring, evaluating, and regulating the occurrence, experience, and expression of emotions so that individuals can better adapt for surviving and achieving their goals in emotionally arousing situations (Thompson, 1994; Meng, 2005).
Gratitude, an important concept in the field of positive psychology, refers to the psychological tendency of individuals to recognize that they have received valuable favors or help from the outside world and to be willing to reciprocate. Current research generally regards gratitude as an emotional trait (Rosenberg, 1998; McCullough et al., 2002). The generation of gratitude depends on the individual being able to recognize the value and meaning of the object of gratitude (Adler and Fagley, 2005; Disabato et al., 2017). Individuals with high gratitude tendencies have a higher perception of the purpose and meaning of life (Wood et al., 2008; Lin, 2021).
Wellbeing is a subjective experience; it is the overall evaluation and feeling toward the quality of life based on the standards set by oneself, consisting of two parts: cognitive component and emotional component. Cognitive component refers to life satisfaction (Diener, 1984). Individuals with high wellbeing can balance their positive and negative emotions well, and have more energy to explore the world, explore themselves, and seek the meaning of life, so as to gain more experience of the meaning of life (Ryff and Singer, 2008;Yin et al., 2019).
Thus far, college students' sense of life meaning and the internal mechanism by which emotion regulation strategies affect it have been insufficiently discussed. Therefore, this study examined the influence of emotion regulation strategies on college students' sense of life meaning. Further, it assessed the independent and chain mediation effects of gratitude and subjective wellbeing on the impact of emotion regulation strategies on the sense of life meaning, with the aim of revealing the effect of the former on the latter. Simultaneously, it provides effective suggestions for college students to rationally use emotion regulation strategies to improve their sense of happiness and alleviate the impact of negative emotions on the sense of life meaning.
LITERATURE REVIEW
Emotion Regulation Strategies and Sense of Life Meaning
Masters (1991) believed that emotion regulation strategies are the methods used by people to regulate their emotions in a conscious and planned manner. Gross (1999, 2001) proposed that emotion regulation affects the kind of emotion an individual feels, when it occurs, and how it is experienced and expressed.
Emotion regulation process theory points out that the two most commonly used and valuable emotion regulation strategies in daily life are cognitive reappraisal and expressive suppression. The two have different effects on the regulation of emotion, cognition, and social behavior: expression suppression is associated with negative outcomes, and cognitive reappraisal with positive outcomes (Gross, 1998a,b, 1999, 2001). The cognitive reappraisal strategy occurs in the early stage of emotion generation. Through the re-understanding and re-evaluation of emotional events, the reactions they produce can be alleviated. It can effectively reduce negative emotions and physical stress (Sheppes et al., 2011) as well as help individuals make effective decisions (Heilman et al., 2010). Moreover, it changes the perception of the personal meaning of such occurrences and thereby positively impacts social behaviors. As an effective technique for reducing negative emotions, it helps to improve life satisfaction (Gong et al., 2013); moreover, it is closely related to wellbeing and mental health (Boden et al., 2012; Xu et al., 2020).
The expression suppression strategy entails suppressing and avoiding the expression of an emotion after the individual experiences it. It does not reduce the psychological experience produced by a negative emotion and is a non-adaptive method. Individuals who are accustomed to using expression suppression strategies have relatively more negative and fewer positive experiences (Wang and Guo, 2003), thereby reducing their level of mental health. Expression suppression requires the consumption of cognitive resources and negatively impacts other cognitive activities, emotional experiences, and behaviors (Julian and Richard, 2000; Gross, 2002; Ochsner et al., 2002; Garnefski et al., 2004).
Studies have demonstrated that cognitive reappraisal strategies are superior to expression suppression in maintaining physical and mental health (Gross and John, 2003;Moore et al., 2008;Hughes et al., 2011). The former can reduce symptoms of depression to some extent and help individuals better cope with life (Garnefski and Kraaij, 2011).
The first psychologist to conduct systematic research on the sense of life meaning was Viktor Frankl, who believed that everyone needs the meaning of existence, has the motivation to constantly search for it, and persistently explores it and the value of life. If people stop exploring the sense of life's meaning, spiritual emptiness results and causes psychological problems. The sense of life meaning refers to an inner psychological experience, entailing people experiencing and comprehending the meaning of their lives while also recognizing their goals and life missions (Crumbaugh, 1973; Steger et al., 2009). Tamir (2016) revealed that cognitive reappraisal is highly correlated with the sense of life meaning. A model constructed by Zhu et al. (2017) demonstrated that the cognitive reappraisal strategy plays a positive role in the sense of life meaning. Therefore, combining the theoretical perspectives, Hypothesis 1 was derived as follows: H1: Emotion regulation strategies significantly predict the sense of life meaning.
Emotion Regulation Strategies, Gratitude, and Sense of Life Meaning
Gratitude is an emotional trait, and according to emotion regulation process theory, different specific strategies of emotion regulation have different effects on emotions. Gratitude has been shown to mediate the link between emotion regulation and burnout: as a positive resource, gratitude buffers the effects of cognitive change on emotional exhaustion (Guan and Jepsen, 2020).
The internal and external goal theory of gratitude describes it as closely related to self-management. Individuals with a high level of gratitude pay greater attention to a task's meaning and value. They exert more effort to achieve internal goals while being less directed toward materialistic objectives. Furthermore, they can experience happiness, and gratitude leads individuals to perceive a deep sense of meaning (Bono and Froh, 2009). Fredrickson (2001) employed her broaden-and-build theory to study gratitude. Gratitude can expand and build individuals' cognitive levels and opportunities to construct resources, magnify the beautiful things in life, and actively foster the sense of life meaning; thus, gratitude promotes the enhanced development and adaptation of persons. Moreover, it can effectively buffer the adverse effects of external pressures on the individual, form an adaptive response to negative events, expand the individual's momentary thought and action repertoires to efficiently trigger positive reactions, seek self-worth, and gain more happiness (Wood et al., 2008; Li, 2016). Gratitude, as a protective factor, plays a regulatory role in enhancing the sense of life meaning and reducing the risk of suicide; additionally, it can be utilized as a valuable intervention to enrich the sense of life meaning (Kleiman et al., 2013b; Tongeren et al., 2015).
The mediating mechanism of gratitude between emotion regulation strategies and the sense of life meaning is unclear. However, strong evidence has demonstrated the relationship between gratitude and the sense of life meaning. Therefore, gratitude is expected to play a mediating role between the aforementioned two variables. Hypothesis 2 was formulated as follows: H2: Gratitude plays a mediating role between emotion regulation strategies and the sense of life meaning.
Emotion Regulation Strategies, Subjective Wellbeing, and Sense of Life Meaning
Subjective wellbeing plays a key role in human health and social adaptation (Liu et al., 2013). Empirical research has reported that the cognitive reappraisal and expression suppression strategies are related to high and low happiness, respectively (Haga et al., 2009; Balzarotti et al., 2016). Cognitive reappraisal strategies have a positive effect on higher-order human emotions, such as subjective wellbeing and life satisfaction (Gross and John, 2003). Individuals who use cognitive reappraisal strategies more often feel more satisfaction, more positive emotions, and fewer negative emotions in their lives; they are more able to maintain a positive attitude in the face of stressful situations, re-understand and re-appraise stressful events, and make positive efforts to change negative emotions. Expression suppression strategies lead to lower subjective wellbeing and increased negative emotional experience (Dryman and Heimberg, 2018).
Researchers have conducted empirical studies on the relationship between subjective wellbeing and the sense of life meaning, but have not achieved consistent results (Shrira et al., 2011; Li et al., 2021). Shrira et al. (2011) showed that subjective wellbeing and meaning in life are likely to compensate for each other. A meta-analysis based on a Chinese sample showed the sense of life meaning to be significantly positively correlated with subjective wellbeing, life satisfaction, and positive emotions (Jin et al., 2016). Li et al. (2014) revealed that college students' wellbeing index positively predicts the sense of life meaning. Wellbeing can promote individuals' coping with meaning and their perception of meaning in life. From the literature above, Hypothesis 3 was formulated as follows: H3: Subjective wellbeing plays a mediating role between emotion regulation strategies and the sense of life meaning.
Emotion Regulation Strategies, Gratitude Subjective Wellbeing, and Sense of Life Meaning
Gratitude is a positive emotional trait that helps people construct lasting personal resources, promotes individual happiness and personal growth, changes cognition, and increases individual pursuit and possession of the sense of life meaning. Life satisfaction is a main indicator for measuring subjective wellbeing. Several studies have demonstrated that gratitude, as a positive variable, is closely related to life satisfaction (Lyubomirsky et al., 2005;Wood et al., 2007). It is significantly and positively correlated with subjective wellbeing, as confirmed by previous empirical research (Watkins et al., 2003;Chan, 2013;Witvliet et al., 2018). McCullough et al. (2002) conducted a study with college students and found that individuals with a higher tendency to gratitude have greater life satisfaction and a more optimistic and energetic attitude toward life.
Considering the in-depth study of subjective wellbeing, researchers are not restricted to the direct impact of emotion regulation strategies on it; they can explore the internal mechanisms of the two effects and possible intermediary factors. Studies have found that emotion regulation strategies can affect subjective wellbeing through internal factors such as individual mental flexibility and self-esteem (Liu et al., 2015; Chai et al., 2018). Therefore, mobilizing positive internal resources such as subjective wellbeing through these strategies is an important way to enhance the sense of life meaning. Broaden-and-build theory and related research suggest that there may be mediating variables between gratitude and the sense of meaning in life. Gratitude manifests its broaden-and-build effect by acting on such mediating variables, thereby affecting the individual's sense of life meaning (Fredrickson, 2001).
Furthermore, Watkins et al. (2003) reached a consistent conclusion and pointed out that there is a mutually reinforcing effect between gratitude and wellbeing. In summary, emotion regulation strategies, such as cognitive reappraisal, play a valuable role in promoting higher-order human emotions such as subjective wellbeing. Simultaneously, gratitude and subjective wellbeing, as positive emotions, can improve individuals' coping styles and their perception of the meaning of life (King and Hicks, 2006; Chu et al., 2019).
If multiple mediators in a mediation model are interrelated, chain mediation can occur (Hayes, 2013). Therefore, we systematically explored the chain-mediating relationship of gratitude and subjective wellbeing in the link between emotion regulation strategies and the sense of life meaning. Hypothesis 4 was formulated as follows: H4: Gratitude and subjective wellbeing play a chain-mediating role between emotion regulation strategies and the sense of life meaning.
Hypothetical Model
According to the aforementioned theories and studies, Figure 1 provides a diagram of the hypothetical model. In the model, emotion regulation strategies are assumed to predict the sense of life meaning of college students, and gratitude and subjective wellbeing are the two chain-mediating factors in this relationship.
Participants
Using the cluster sampling method, freshman to senior students from an undergraduate college in the Fujian Province of China were selected as the survey participants. College counselors helped the investigator recruit participants. Respondents were told that their data would remain confidential. The questionnaires were filled out after obtaining informed consent from the participants. Overall, 1,400 questionnaires were distributed and recovered; 88 questionnaires were eliminated due to missing answers, while 1,312 valid questionnaires were obtained, for an effective response rate of 93.71%. There were 588 males and 724 females; 435, 389, 310, and 178 were freshmen, sophomores, juniors, and seniors, respectively. Furthermore, 246 were only children and 1,066 had siblings; 1,130 and 182 were from rural and urban areas, respectively. The average age of the participants was 19.26 years (SD = 1.15). The age distribution ranged from 17 to 24 years (Table 1).
Emotional Regulation Strategy Scale
The Emotion Regulation Questionnaire (ERQ) was compiled by Gross and John (2003) and revised by Wang et al. for use with a Chinese sample (Wang et al., 2017). It has 10 items divided into 2 dimensions: cognitive reappraisal and expression suppression. Each dimension includes the regulation of positive emotions and negative emotions. The items are rated on a seven-point Likert scale, ranging from 1 "completely disagree" to 7 "completely agree." In this study, the Cronbach's α was 0.753.
Gratitude Scale
The Chinese version of the six-item Gratitude Questionnaire (GQ-6) was compiled by McCullough et al. (2002) and revised by Wei et al. (2011). It is scored on five levels; the higher the score, the greater the tendency to be grateful. In this study, the Cronbach's α was 0.622.
Subjective Wellbeing Scale
The General Wellbeing Schedule (Fazio, 1977) was developed by the National Center for Health Statistics in 1977 to evaluate happiness; Duan revised its Chinese version (Duan, 1996). Many previous studies have used this tool to measure an individual's subjective wellbeing. In this scale, subjective wellbeing is divided into six dimensions, including concern about health, energy, satisfaction and interest in life, melancholy or pleasant mood, control of emotions and behaviors, and relaxation and tension. Of the total 18 items, questions 2, 5, 6, and 7 use a 5-point scoring method, questions 15-18 employ a 10-point one, and the remaining questions utilize a 6-point scoring system. The higher the score, the greater the happiness index. In this research, the Cronbach's α was 0.791.
Sense of Life Meaning Scale
Liu and Gan (2010) revised the Meaning in Life Questionnaire compiled by Steger et al. (2006). It has high reliability and validity and is widely used in the Chinese context. The scale is composed of two subscales: having a sense of meaning and seeking it. The higher the score, the stronger the sense of life meaning. In this study, the Cronbach's α was 0.842.
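The internal consistency figures quoted for each scale above are Cronbach's α values; a minimal sketch of that computation, assuming a respondents-by-items score matrix (the demo data below are synthetic):

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of item scores;
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
demo = rng.integers(1, 8, size=(50, 10))  # toy 10-item, 7-point responses
print(round(cronbach_alpha(demo), 3))
```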
Analytical Method
The SPSS 21.0 software was employed to perform the descriptive statistics and the correlation analysis for each variable. The PROCESS program developed by Hayes and the non-parametric percentile bootstrap were used to examine the chain mediating role of gratitude and subjective wellbeing in the relationship between the emotion regulation strategies and the sense of life meaning (Hayes, 2013).
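The serial (chain) mediation estimated by PROCESS model 6 can be reproduced in outline with ordinary least squares plus a percentile bootstrap. The sketch below (Python/NumPy) is a simplified stand-in, not the PROCESS implementation: covariates such as gender and grade are omitted, and all data are synthetic:

```python
import numpy as np

def ols_beta(X, y):
    # Slopes of an OLS fit with intercept; X is (n, p), y is (n,).
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def serial_indirect(x, m1, m2, y):
    a1 = ols_beta(x[:, None], m1)[0]                   # X -> M1
    a2, d21 = ols_beta(np.column_stack([x, m1]), m2)   # X, M1 -> M2
    _, b1, b2 = ols_beta(np.column_stack([x, m1, m2]), y)
    # The three indirect paths: X->M1->Y, X->M1->M2->Y, X->M2->Y.
    return a1 * b1, a1 * d21 * b2, a2 * b2

def boot_ci(x, m1, m2, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = np.array([serial_indirect(x[i], m1[i], m2[i], y[i])
                      for i in (rng.integers(0, n, n) for _ in range(n_boot))])
    return np.percentile(draws, [2.5, 97.5], axis=0)   # 95% percentile CIs

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)                        # e.g. cognitive reappraisal
m1 = 0.4 * x + rng.normal(size=n)             # gratitude
m2 = 0.3 * x + 0.3 * m1 + rng.normal(size=n)  # subjective wellbeing
y = 0.2 * x + 0.3 * m1 + 0.2 * m2 + rng.normal(size=n)
print(boot_ci(x, m1, m2, y, n_boot=1000))
```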
Common Method Bias Test
Harman's single factor test was used to assess common method bias. Overall, 10 factors with eigenvalues greater than 1 were found; the variance explained by the first factor was 22.75%, which was less than the critical standard of 40%. Thus, common method bias was excluded from this study.
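A minimal sketch of this test, approximating the unrotated single-factor extraction with the first eigenvalue of the item correlation matrix (the item responses below are synthetic):

```python
import numpy as np

def first_factor_share(data):
    # Share of total variance carried by the largest eigenvalue of the
    # correlation matrix (its trace equals the number of items).
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eig = np.linalg.eigvalsh(corr)  # eigenvalues in ascending order
    return eig[-1] / eig.sum()

rng = np.random.default_rng(2)
items = rng.normal(size=(200, 20))  # hypothetical item responses
print(f"first factor explains {first_factor_share(items):.1%} (< 40% criterion)")
```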
Descriptive Statistics and Correlation Analysis
As shown in Table 2, the descriptive statistics indicated that the college students' gratitude, subjective wellbeing, and sense of life meaning were at an intermediate level. Additionally, the correlation analysis reported a significant positive correlation between the cognitive reappraisal strategy and gratitude, subjective wellbeing, and the sense of life meaning. There was a significant negative correlation between the expression suppression strategy and gratitude, subjective wellbeing, and the sense of life meaning.
Chained Mediating Analyses
Controlling for gender and grade, the regression analysis results for gratitude and subjective wellbeing in the relationship between the cognitive reappraisal strategy and the sense of life meaning are shown in Table 3. The cognitive reappraisal strategy had a significant positive predictive effect on the sense of life meaning (β = 0.355, p < 0.001), and direct positive predictive effects on gratitude (β = 0.366, p < 0.001) and subjective wellbeing (β = 0.166, p < 0.001). Gratitude had a significant positive predictive effect on subjective wellbeing (β = 0.334, p < 0.001); when the cognitive reappraisal strategy, gratitude, and subjective wellbeing predicted the sense of life meaning simultaneously, all three had positive predictive effects on it (β = 0.176, p < 0.001; β = 0.332, p < 0.001; β = 0.203, p < 0.001, respectively). Table 4 displays the analysis results of the mediating effects of gratitude and subjective wellbeing between the expression suppression strategy and the sense of life meaning, controlling for grade and gender. The expression suppression strategy had a significant negative predictive effect on the sense of life meaning (β = -0.155, p < 0.001), as well as significant negative predictive effects on gratitude (β = -0.123, p < 0.001) and subjective wellbeing (β = -0.180, p < 0.001); gratitude had a significant positive predictive effect on subjective wellbeing (β = 0.373, p < 0.001). When the expression suppression strategy, gratitude, and subjective wellbeing predicted the sense of life meaning simultaneously, the expression suppression strategy had a significant negative predictive effect on the sense of life meaning (β = -0.058, p < 0.05), while gratitude and subjective wellbeing both had positive predictive effects on the sense of life meaning (β = 0.383, p < 0.001; β = 0.220, p < 0.001, respectively).
In order to further test the mediating effects of gratitude and subjective wellbeing between the cognitive reappraisal strategy and the sense of life meaning, the PROCESS macro version 3.3 for SPSS (model 6) was employed for the chain mediation analysis (Hayes, 2015). The bootstrapping method was utilized to resample 5,000 times and calculate 95% confidence intervals (CI). The results are displayed in Table 5. The total mediating effect of gratitude and subjective wellbeing was significant, with a value of 0.1857. Specifically, the impact of the cognitive reappraisal strategy on the sense of life meaning was carried by three indirect effects, all of which reached a significant level. First, for the indirect effect 1, consisting of cognitive reappraisal strategy → gratitude → sense of life meaning (0.1255), the 95% confidence interval was [0.0990, 0.1577], excluding 0, indicating that the mediating role of gratitude was significant. Second, for the indirect effect 2, through cognitive reappraisal → gratitude → subjective wellbeing → sense of life meaning (0.0259), the 95% confidence interval [0.0176, 0.0361] excluded 0, indicating that gratitude and subjective wellbeing played a significant chain-mediating role between the cognitive reappraisal strategy and the sense of life meaning. Third, for the indirect effect 3 (0.0343), consisting of cognitive reappraisal strategy → subjective wellbeing → sense of life meaning, the 95% confidence interval [0.0202, 0.0516] did not contain 0, indicating that the mediating effect of subjective wellbeing was significant. Figure 2 presents the specific paths through which the undergraduates' cognitive reappraisal strategy affects the sense of life meaning.
The mediating effects of gratitude and subjective wellbeing between the expression suppression strategy and the sense of life meaning are shown in Table 6. The total indirect effect was −0.1334, and the influence of the expression suppression strategy on the sense of life meaning was indirectly transmitted through three paths. First, for the indirect effect 1 (−0.0674), consisting of expression suppression strategy → gratitude → sense of life meaning, the 95% confidence interval of [−0.0982, −0.0406] excluded 0, indicating that the mediating effect of gratitude was significant. Second, for the indirect effect 2 (−0.0146), comprising expression suppression strategy → gratitude → subjective wellbeing → sense of life meaning, the 95% confidence interval [−0.0230, −0.0083] excluded 0, indicating that gratitude and subjective wellbeing play a significant chain-mediating role between the expression suppression strategy and the sense of life meaning. Third, for the indirect effect 3 (−0.0514), composed of expression suppression strategy → subjective wellbeing → sense of life meaning, the 95% confidence interval [−0.0750, −0.0335] did not contain 0, indicating that the mediating effect of subjective wellbeing was significant. The specific paths through which the undergraduates' expression suppression strategy affects the sense of meaning in life are shown in Figure 3.
Descriptive Statistics and Correlations
This study divided emotion regulation strategies into cognitive reappraisal and expression suppression. The results indicated that the former was significantly positively correlated with the sense of life meaning in college students; moreover, it significantly positively predicted the sense of life meaning. This is consistent with the results of previous studies (Zhu et al., 2017). However, the expression suppression strategy was significantly negatively correlated with the sense of life meaning; additionally, it significantly negatively predicted the sense of life meaning. This demonstrates that the two emotion regulation strategies have different mechanisms and effects, which may be related to the consumption of cognitive resources (Wang and Guo, 2003). Cognitive reappraisal occurs when individuals readjust their cognition before the emotion occurs, thus changing their understanding of emotional events and reducing the psychological experience of negative emotions. It consumes fewer cognitive resources and builds a positive emotion regulation ability; therefore, individuals can actively experience the meaning of existence. Expression suppression occurs after the emotion is aroused, consciously hindering one's own emotional expression behavior. It operates throughout the whole process of emotion generation, greatly consuming cognitive resources and dampening positive emotional experience. Simultaneously, it easily produces a sense of meaninglessness (Peng et al., 2011). This also confirms the conclusions of previous studies: cognitive reappraisal can significantly change a person's emotional experience, whereas the effect of expression suppression is relatively poor, with lower positive and negative emotional experiences (Chen et al., 2009). As a protective factor, the cognitive reappraisal strategy is an effective technique for emotion regulation and is superior to the expression suppression strategy (Hughes et al., 2011).
The Mediating Role of Gratitude
The results demonstrated that for both the cognitive reappraisal and expression suppression strategies, gratitude plays a mediating role between the emotion regulation strategies and the sense of meaning in life. Thus, gratitude can be considered important for the sense of meaning in life. The cognitive reappraisal strategy can be employed as an effective way of regulating emotions, helping individuals detach from negative, frustrating incidents, evaluate events from a rational and objective perspective, and enhance positive emotional experiences such as gratitude. According to the broaden-and-build theory of gratitude, gratitude can improve individual cognition as well as help re-recognize and interpret meaning. Occasional reminders of gratitude can resist the impact of negative emotions on mental health (Kumar and Epley, 2018), thereby enhancing the perception and experience of the value of one's own existence. However, individuals who use the expression suppression strategy for a prolonged period report a lower level of gratitude: the habitual suppression of one's negative emotions consumes more cognitive resources, which is not conducive to the generation of positive emotions or to physical and mental health.
The Mediating Role of Subjective Wellbeing
Studies have confirmed the mediating role of subjective wellbeing on the relationship between emotion regulation strategies and the sense of meaning in life. The two emotion regulation strategies of cognitive reappraisal and expression suppression have different effects on the latter by regulating subjective wellbeing. This is consistent with the results of previous research (Srivastava et al., 2009;Cutuli, 2014;Kobylińska et al., 2020).
The cognitive reappraisal strategy promotes the level of the sense of life meaning through subjective wellbeing, while the effect of expression suppression is the opposite. Students who tend to reappraise cognitively have more positive emotional experiences and behaviors. Individuals internally use cognitive reappraisal strategies to construct positive perceptions of life events, thereby promoting happiness. Accordingly, they have sufficient energy to explore the world and discover themselves, which to a certain extent enhances college students' understanding and experience of the sense of life meaning (Quoidbach et al., 2015; Szczygie and Mikolajczak, 2017). Gross and John (2003) investigated the relationship between the two emotion regulation strategies and wellbeing and depression: cognitive reappraisal and expression suppression were related to positive and negative outcomes, respectively. It can be observed that different strategies have varying regulating effects, which leads to discrepancies in the impact of subjective wellbeing on the sense of meaning in life.
Chain-Mediating Effect of Gratitude and Subjective Wellbeing
The research results indicated that gratitude and subjective wellbeing play a chain-like mediating role between the emotion regulation strategies and the sense of meaning in life. College students experience major changes in their status, roles, and living environment; moreover, higher requirements are placed on their adaptability and self-regulation. If emotion regulation strategies cannot be employed rationally, strong psychological conflicts are likely to occur, thereby reducing the experience of the sense of life meaning. Fredrickson believed that positive emotional states can broaden the categories of attention and cognition as well as stimulate individuals to explore the meaning of existence (Strumpfer, 2006). Additionally, gratitude and subjective wellbeing, as positive emotions, have also been found to positively impact an individual's physical and mental health and behavioral responses. For example, people with high gratitude experience less loneliness and depression (Fan and Wu, 2020). Simultaneously, individuals can give meaning to life through subjective wellbeing (Xie and Zou, 2013). Previous studies on the relationship between emotions and other variables focused on the internal mechanism of emotion as a whole and disregarded the uniqueness of the different emotion regulation strategies in the process. This study's findings demonstrated that emotion regulation strategies can influence college students' sense of meaning in life through the chain mediation of gratitude and subjective wellbeing. This indicates that the use of emotion regulation strategies can affect students' gratitude and subjective wellbeing, and subsequently impact the sense of life meaning. This result reflects the close connection between the four variables. However, it is worth mentioning that the cognitive reappraisal and expression suppression strategies had different effects on the sense of meaning of life through gratitude and subjective wellbeing; therefore, the relationship models formed were distinct as well.
The cognitive reappraisal strategy can adjust one's emotions by changing the cognitive evaluation of events. When encountering a negative emotional incident, using a positive perspective to give the event a new meaning transfers this emotional processing method to the experience of life. The broaden-and-build theory of gratitude proposes that gratitude can help individuals improve their cognitive styles, absorb more positive signals from life, broaden effective and lasting psychological resources, obtain greater happiness, and deepen the understanding and experience of the sense of life meaning. The expression suppression strategy prevents one's emotional expression behavior through self-control. Further, emotional suppression conceals the expressive response: although it can bring a certain effect in the short term, it does not reduce the negative emotional experience, and the motivation of negative emotions is not weakened (Gross, 1998b; Brenning et al., 2015). According to the "water pressure model" of emotions, inappropriate or long-term inhibition of negative emotional expression will lead to an increase in the intensity of negative subjective experiences (Huang and Guo, 2002). Therefore, if strong negative emotions are only suppressed and are not effectively weakened, rebounds will easily occur, leading to a stronger negative psychological experience, which in turn affects the perception of the sense of life meaning.
CONCLUSION
This study's outcomes show that the cognitive reappraisal strategy serves as a significant positive predictor of the sense of life meaning, while the expression suppression strategy serves as a significant negative predictor. Emotion regulation strategies can affect college students' sense of life meaning through the chain-mediating effect of gratitude and subjective wellbeing.
IMPLICATIONS AND SUGGESTIONS
This study offers the following implications: First, the cognitive reappraisal strategy is a positive factor that promotes the growth of college students, whereas the long-term use of the expression suppression strategy is more likely to produce emotion regulation disorders (Chen et al., 2009); the latter should therefore be used with caution. Regarding the role of cognitive reappraisal, it is necessary to pay attention to and effectively guide students' emotional experience; the focus should be on educating them to understand the negative life events they face from different angles, actively re-evaluate, think differently, and rationalize emotional incidents.
Second, education regarding the gratitude and subjective wellbeing of college students should be strengthened; effective intervention methods, such as writing a "gratefulness diary," should be adopted to enhance awareness of life and teach students how to be grateful. Social comparison theory states that happiness originates from the comparison between reality and standards: when reality exceeds the standards, more pleasure can be obtained. Therefore, in life, students should be guided to explore the positive aspects of things and to treat life with a grateful attitude.
LIMITATIONS AND FUTURE DIRECTIONS
This study has certain limitations. First, the study revealed the mediating roles of gratitude and subjective wellbeing in the influence of emotion regulation strategies on the sense of meaning of life through a cross-sectional design; it therefore could not make inferences regarding the causal relationships between the variables, which reflect a continuous process of change in college students' emotion regulation, gratitude, subjective wellbeing, and sense of life meaning. The breadth and depth of the research are thus limited to a certain extent. In the future, a combination of cross-sectional and longitudinal designs could be used to examine these characteristics and changes in depth. Second, the research adopted the questionnaire survey method, which is relatively simple; future studies may use experimental research methods to further verify the influence and mechanism of emotion regulation strategies on the sense of life meaning, which would make the results more convincing. Third, the research participants were limited to college students in China. Previous studies have shown the sense of life meaning to be affected by age, culture, and other factors. In the future, studies could be extended to different age ranges and cultural backgrounds, which would be more conducive to the generalization of the research results.
Although there are some shortcomings, the research results verified our hypotheses, explained to a certain extent the mechanism of emotion regulation strategies affecting the sense of meaning of life, enriched the previous research results, and provided strong support for educational practitioners to carry out educational activities.
DATA AVAILABILITY STATEMENT
The raw data supporting this study will be made available by the authors upon reasonable request.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication. | 2022-05-18T13:23:34.194Z | 2022-05-17T00:00:00.000 | {
"year": 2022,
"sha1": "2b8ef89adba12a522057569987b1e6df516a8a0f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2b8ef89adba12a522057569987b1e6df516a8a0f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
256924309 | pes2o/s2orc | v3-fos-license | Hierarchical Porous Chitosan Sponges as Robust and Recyclable Adsorbents for Anionic Dye Adsorption
Biomass waste treatment and detrimental dye adsorption are two crucial environmental issues nowadays. In this study, we attempt to resolve the aforementioned issues simultaneously by synthesizing chitosan sponges as adsorbents for rose bengal (RB) dye adsorption. Through a temperature-controlled freeze-casting process, robust and recyclable chitosan sponges are fabricated with hierarchical porosities resulting from the control of the concentrations of the chitosan solutions. Tested as adsorbents for RB, to the best of our knowledge, the as-prepared chitosan sponge in this work exhibits the highest adsorption capacity for RB (601.5 mg/g) reported so far. The adsorption mechanism, isotherm, kinetics, and thermodynamics are comprehensively studied by employing statistical analysis. Importantly and desirably, the sponge form of the chitosan adsorbent greatly facilitates the retrieval and elution of the chitosan sponges for recyclable use. The chitosan sponge adsorbent is therefore demonstrated to possess a dramatically squeezable capability, with durability over 10,000 cycles and recyclable adsorption for at least 10 cycles, which provides an efficient and economical route to both biomass treatment and water purification.
In our evolution towards a more sustainable society, 'trash to treasure' is an enduring theme of the relevant research areas seeking value from recycled wastes. Biomass waste is one of the reliable candidates for recycling, offering benefits due to its environmentally friendly and low-cost features, as well as virtually unlimited supplies. For example, chitosan, which is mostly derived from crustaceans including crabs, shrimp, crayfish, and lobsters, is known to be the second most abundant biopolymer in nature 1 . Worldwide, 6-8 million tons of chitosan accumulate every year, and it contains various bio-functional groups such as -NH 2 and -OH 2 . It has been widely applied in various fields, such as artificial skin 3 , food and nutrition 4 , ophthalmology 5 , textile finishing 6 , batteries 7,8 , drug-delivery systems 9 , biotechnology 10,11 , etc.
One of the important applications of chitosan is as an adsorbent for the removal or recovery of organic/inorganic substances from aqueous solutions 12,13 . Water pollution is a vital global problem that requires persistent attention. The annual discharge of dyes from textile, tannery, pharmaceutical, paper, paint, plastics, petroleum, electroplating, etc. into the environment reaches about 50,000 tons, severely damaging the aesthetic nature of streams. More importantly, the coloured dyes interfere with the transmission of sunlight through water, which reduces photosynthesis. Moreover, dyes are difficult to biodegrade due to their complex molecular structures; in this regard, removal of dyes before disposal of the wastewater is undoubtedly extremely important.
With the desirable features of non-toxicity, biodegradability and a high concentration of amino groups, chitosan can offer advantages in adsorbing anionic dyes from the polluted water due to the cationic nature of chitosan derived from the -NH 2 groups. Chitosan adsorbent has been fabricated into the powder-type to enlarge the specific surface area and thus enhance the adsorption capacity, while the powder-type chitosan adsorbent suffers from the retrieving difficulties 14 . To realize the retrievability of adsorbents after the dye adsorption, magnetic materials such as Fe 3 O 4 nanoparticles have been involved into chitosan adsorbents [15][16][17] . However, the complete retrieving is found to be still difficult to achieve and thus causes an unwanted secondary pollution. In addition, elution of dyes and reuse of the retrieved adsorbents still remain significant challenges for the use of the powder-type adsorbents.
In this sense, a monolith type adsorbent is preferable in the dye adsorption, which is generally in a hydrogel, foam or membrane type 18,19 , facilitating the retrievability of the adsorbent. Moreover, the porosity of the adsorbents should be controlled to pursue controllable properties such as adsorption capacity, elasticity, durability, etc. Furthermore, reliable adsorbents should possess a durable recyclability, which will prompt the use in various practical applications of the adsorbents. Chitosan chains can be easily connected with each other and form a large 3D porous scaffold. Given that the issues and essential requirements for dye adsorbents, in this study, hierarchical porous chitosan sponges as adsorbents for adsorption of rose bengal (RB), a toxic anionic dye, are fabricated by a precisely controlled freeze-casting technique. Owing to the abundant -NH 2 groups and proper porosity of chitosan sponges, desirable adsorption performances are obtained and the adsorption mechanisms and kinetics are comprehensively studied by employing statistical analysis. At last, durable recyclability of the robust chitosan sponge adsorbents is also confirmed.
Results and Discussion
Hierarchical porous chitosan foams and sponges. Chitosan powders cannot be dissolved in aqueous strong acid solutions, aqueous base solutions, and organic solutions because of the strong inherent hydrogen bonding. So we firstly dissolve chitosan powders into a 0.3 M acetic acid solution (Fig. 1a) where the ketonic oxygen in the carboxyl group of an acetate molecule can form a hydrogen bond with the hydroxyl group in a chitosan molecule 20 . To obtain robust and porous chitosan foams, freeze-casting method, a popular method to synthesize the foam type materials without damaging the porous structure during the drying process, is used in our study. As shown in Fig. 1b, during the freezing process, the ice crystals are formed and expanded in the solution, therefore, the chitosan chains are concentrated at the interface of the ice crystals and the solution, and aligned along the growth direction of ice crystals until the solution is fully frozen. After the subsequent lyophilization process, the chitosan can form a porous cellular structure with the chitosan chains connected with each other. The chitosan sponges are obtained after washing the residual acetic acid by a 1 M NaOH solution from the chitosan foams developed in the previous step. In this way, the density and morphology of chitosan sponges can be readily controlled by tailoring the concentration of original chitosan solutions.
As presented in the digital pictures in Fig. 2a, all the chitosan foams are developed in a uniform cylindrical shape. The Ch-40 foam exhibits rigid while the other foams become elastic with the decrease of chitosan solution concentrations. Freezing temperature is crucial in determining the morphology of chitosan foams because the temperature difference between the atmosphere and chitosan solution plays a decisive role in the growth speed of the ice crystals. Hence, the relationship between freezing temperature and foam morphology are investigated at Figure S1). Compared with freezing at −196 °C (liquid N 2 ) and −80 °C, freezing at −20 °C results in the most uniform cylindrical shape of Ch-5 foams, as well as the repeatable compressibility. The chitosan foam frozen at −196 °C does not exhibit the mechanically compressible behaviour, whereas neither of the foams frozen at −80 °C nor −196 °C are developed in a desirable cylindrical shape. Here, the freezing temperature of −20 °C is determined to synthesize the chitosan foams in this study.
The hierarchical porous morphology of chitosan foams is examined and identified by the SEM images. For the foams with the high concentrations of chitosan solutions (Ch-40 and Ch-20), the chitosan foams exhibit a cellular structure with regular quasi-sphere pores and "cell walls" in the SEM images in Fig. 2a. As the decrease of the chitosan concentration, the "wall" thickness firstly becomes thinner and cellular structure cannot be formed, and subsequently pores appear on the "walls", due to lack of chitosan chains in the solution. Finally, the "walls" turn into non-continuous and entangled "twigs", which is confirmed in Fig. 2a. Therefore, the chitosan foams become softer, more porous and more unconsolidated, and also have lower density (Fig. 2b).
The FT-IR spectroscopy is used to identify the chemical structure of the dried chitosan foams of Ch-3, Ch-5, Ch-8, Ch-10, Ch-20, and Ch-40. As shown in Fig. 2c, the FT-IR spectra of chitosan foams with different densities indicate the identical characteristic peaks. Chitosan foams contain the peaks at 897, 1027, 1064, 1151, 1256, 1316, 1380, 1405, 1545, 1637, 2870 and 3351 cm −1 and the corresponding chemical associated groups are listed in the Table S1 [21][22][23] . Notably, the N-containing functional groups are observed at 1256, 1316, 1545, and 3351 cm −1 , corresponding to the stretching vibration of C-N amine II, stretching vibration of C-N amine I, bending vibration of N-H amide, and stretching vibration of N-H amide, respectively. Because of the existence of cationic amide groups, chitosan can become a good candidate as an adsorbent for anionic dyes adsorption in the polluted water.
Chitosan sponges as RB adsorbents. The as-prepared chitosan sponges were tested as the adsorbents for the anionic dye due to the cationic nature of chitosan and porous structure. Rose bengal (RB) dye is used as an example of anionic dyes to investigate the adsorption performance, mechanisms, and kinetics of chitosan sponges. A Ch-5 sponge is put into an RB solution (100 mL, 100 mg/L) and the RB is effectively adsorbed by the Ch-5 sponge (Fig. 3a). Comparing the color of the solutions before and after adsorption, barely noticeable slight pink colour remains in the solution, while the Ch-5 sponge has turned into dark red. After drying in air, the FT-IR is checked to detect the adsorption mechanism of the RB (Fig. 3b). Comparing the spectra of a RB adsorbed Ch-5 sponge, a Ch-5 sponge, and RB powders, no new peak appears for the RB adsorbed Ch-5 sponge besides the peaks existed in chitosan or RB, indicating that the adsorption is a physical process ascribed to the electrostatic interaction between anionic RB and cationic chitosan molecules.
The adsorption capacities of the Ch-3, Ch-5, Ch-8, Ch-10, and Ch-20 chitosan sponges with different densities were systematically tested under different temperatures in RB solution with the concentration of 100 mg/L ( Fig. 3c and Supplementary Figure S2). Note that Ch-40 is out of selection due to the over rigidness and high density, which hinders the squeezability and RB diffusion. The adsorption capacity at equilibrium (q e ) is calculated according to the equation as follow: where the C 0 (mg/L) and C e (mg/L) are the concentrations of RB in the beginning and equilibrium, respectively. The C e is determined by the UV-vis spectra at the length of 549 nm due to the linear relation of Abs and RB concentration (Supplementary Figure S3). m (g) is the mass of chitosan foam weighed after freeze-drying process. V (L) indicates the volume of the RB solution. As shown in Fig. 3c, with the decrease in the density of chitosan sponges, the adsorption capacity increases because of more surface areas available for adsorption, which is confirmed by the BET specific surface area of chitosan foams (Supplementary Figure S4). However, the Ch-3 sponge does not provide the highest adsorption capability because during washing by NaOH to neutralize the residual acetic acid, some flabby parts are broken away from the sponge due to the non-continuous structure of Ch-3 foam (Fig. 2a). Therefore, the Ch-3 sponge is out of selection as a durable adsorbent due to its poor resilience. In addition, all chitosan sponges provide the best adsorption capacities at 30 °C and the highest adsorption capacity reaches 535.1 mg/g with a Ch-5 sponge (Supplementary Figure S2). Hence, the Ch-5 sponges are applied to investigate the adsorption mechanism, isotherm, kinetics, and thermodynamics, as well as the recyclability test.
Adsorption isotherm study. In this study, Ch-5 sponges are used to investigate the adsorption isotherm at 30 °C and the capacities are plotted in Fig. 3d. The adsorption capability is dependent on the C 0 . The C e is recorded at the adsorption equilibrium with different C 0 of RB solutions (5, 10, 20, 50, 70, 100, 150, and 200 mg/L). As the increase of the C e , the Ch-5 sponge exhibits higher adsorption capability, which is as high as 601.5 mg/g at the C e of 56.4 mg/L (C 0 = 200 mg/L).
The adsorption isotherm describes the relationship between the amount of dye adsorbed by the adsorbent and the concentration of remaining dye in the solution. Based on the adsorption isotherm at 30 °C, the adsorption mechanism was studied using the Langmuir model and Freundlich model, which can be expressed as follows 24,25 : where q max (mg/g), k L (L/mg), k F (mg/g)(L/mg), and n are the maximum adsorption capacity, the Langmuir constant, Freundlich constant, and adsorption intensity, respectively. The fitted equilibrium data with the Langmuir isotherm model is plotted in Fig. 3e. By plotting C e /q e against C e , the q max and k L can be obtained from the intercepts and slopes 26 . The C e /q e against C e shows an approximate linear relation and the coefficient of determination (R 2 ) is 0.9900. For the Freundlich model, the k F and n are calculated from the plot of ln q e against ln C e (Fig. 3f), which also gives a good linear relation with the R 2 of 0.9937 27 . All the equilibrium parameters obtained according to Langmuir and Freundlich models are listed in Table 1.
Since the R 2 values regarding both Langmuir model and Freundlich model are too close and equal or greater than 0.99, it is hard to determine which model should be used to better represent the RB adsorption process. The normalized percent deviation (P) is evaluated to examine the accuracy of the q e collected in the experiment (Equation 4). The q e(s) is the value of adsorption capacity simulated according to the linear fitted equation. N is the number of observations. The collected data is generally considered to be accurate when the P value is less than 5. The P value calculated in Langmuir model is 7.01, higher than 5, however, the P value calculated from Freundlich model is only 1.21. Considering the low P value in Freundlich model, we can conclude that the RB adsorption of chitosan sponges is better represented by the Freundlich model 28 . Importantly, it is noted that the calculated maximum adsorption capacity of the chitosan sponges based on the Langmuir model is 649.4 mg/g, which is higher than the capacities of RB adsorption by other reported adsorbents 15,29-34 . Adsorption kinetic study. Adsorption kinetic models are applicable to interpret the adsorption data to gain an insight of adsorption efficiency, rate, and rate controlling step. To study the adsorption kinetics of Ch-5 sponge, the adsorption capacities against the contact time of Ch-5 sponges in RB solutions with the concentrations of 5, 10, 20, 50, 70, 100, 150, and 200 mg/L are investigated at 30 °C (Fig. 4a). Two well-known adsorption models, pseudo-first-order model 35,36 and pseudo-second-order model 37,38 , are used to throughout investigate the adsorption mechanism and kinetics of the chitosan sponges toward RB. The equations are defined as follows: where q t (mg/g) is the adsorption capacity at a certain contact time t (h). k 1 (1/h) and k 2 (g/mg·h) are the rate constants of pseudo-first-order and pseudo-second-order models, respectively. In the kinetic study (Fig. 4a), the adsorption capacity increases rapidly in the initial stage attributing to the large active surface area of Ch-5 adsorbent and limited repelling from the adsorbed RB molecules to the forwarding ones. As more RB molecules are adsorbed onto the Ch-5 sponge, the active surface area is diminished and the repelling effect becomes stronger.
The fits of the adsorption kinetic curves based on pseudo-first-order model and pseudo-second-order model are plotted in Fig. 4b and c, respectively. The values of k 1 , k 2 and simulated q e for the adsorption in RB with different C 0 are determined from the slope and intercept of the plots (Supplementary Tables S2 and S3). The average R 2 obtained from the pseudo-first-order model (0.9718) is much lower than the R 2 based on pseudo-second-order model (0.9980). It indicates that the RB adsorption onto chitosan sponges is better represented by the pseudo-second-order model which refers the adsorption rate is limited by the diffusion of RB into the pores of the chitosan sponges 39 . Besides, the simulated q e values by pseudo-second-order model are closer to the experimental data, compared with the capacity values by pseudo-first-order model in Fig. 4d.
However, neither the pseudo-first-order nor the pseudo-second-order models can identify the diffusion mechanism, thereby the kinetic results and the diffusion mechanism during the adsorption process are interpreted and analyzed by the intra-particle diffusion model as follow [40][41][42] : where k id (mg/g·min 1/2 ) is the rate constant of intra-particle diffusion model and C i is the intercept at stage i. It is obvious that three phases appear in the whole range of the plots (Fig. 4e), indicating that three stages influence the adsorption process. What's more, the k id in each step follows the order of k 1d > k 2d > k 3d (Supplementary Table S4), which can be ascribed to the adsorption steps of the exterior surface adsorption or instantaneous adsorption, interior surface adsorption where intra-particle diffusion is controlled, and the final equilibrium step where the solute moves slowly from larger pores to micro-pores causing a slow adsorption rate, respectively 43 . In the first stage, the adsorption rate of RB is highest due to the instantaneous availability of large active adsorption sites on the surface of chitosan sponge and the highest k 1d value indicates the external diffusion plays the dominant role in the adsorption kinetics 28 .
Adsorption thermodynamic study.
Tested under 30 °C, 40 °C and 50 °C (Figs 3c and 4f), the adsorption capacities of chitosan sponges decrease as the increase of temperature. This result can demonstrate the RB adsorption onto chitosan sponge is an exothermic process, coincident with the adsorption process of chitosan/ graphene oxide adsorbent toward fuchsin acid dye 44 . Thermodynamic parameters, changes in the Gibbs free energy (ΔG), enthalpy (ΔH) and entropy (ΔS) during the RB adsorption process, are calculated by the following equations 28,[45][46][47] : where the T (K) is the adsorption temperature and the R (J/mol·K) is the universal gas constant. The plots of ln (q e /C e ) against 1/T of Ch-3, Ch-5, Ch-8, Ch-10, and Ch-20 are presented in Fig. 4f, where the ΔH and ΔS can be calculated from the slopes and intercepts of the fits, respectively. The calculated thermodynamic data are listed in Supplementary Table S5. The negative values of ΔG suggest that the adsorption is a spontaneous process. In addition, the greater negative ΔG value indicates a more favorable adsorption, therefore, all chitosan sponges provide the highest adsorption capabilities at 30 °C over other temperatures investigated in this study.
The negative values of the ΔH refers that the adsorption is an exothermic process and the negative values of the ΔS reveals the decreased randomness during the adsorption process and a less affinity of chitosan sponges and the RB molecules 27 .
Recyclability study. A Ch-5 sponge is loaded in a syringe in order to demonstrate the adsorption ability for filtrating the RB solution with the apparatus shown in Fig. 5a. As compared with the initial RB solution, the outlet solution becomes colourless and the concentration is only 0.12 mg/L determined from the UV absorption spectra (Fig. 5b). Desirably, the adsorbed RB on the Ch-5 sponge cannot be eluted by DI-water (Fig. 5c), indicating a strong electrostatic interaction between the chitosan and RB molecules. However, the dye on chitosan sponge can be easily eluted by squeezing the chitosan sponge under the 0.5 M NaOH solution for several seconds, while RB molecules are dissolved into the NaOH solution and the solution turns into red colour (Fig. 5d). At last, the Ch-5 sponge is washed with DI-water for several times until the pH becomes 7, and afterward, the chitosan sponge is ready for reuse (Fig. 5e).
To demonstrate the recyclability of the chitosan adsorbents, the squeezability of the chitosan sponge is tested by cyclic compressions under DI water using the apparatus in Fig. 5f. The elution process is simulated by compressing the Ch-5 sponge at a strain of up to 90% for 10,000 cycles. The height of Ch-5 sponge is completely recovered back to 100% even after 10,000 cycles of compression, demonstrating an excellent resilience and durability, as shown in Supplementary Video S1. In addition, the recyclability in terms of adsorption capability performance is also examined. A Ch-5 sponge is utilized to adsorb RB for 10 cycles and the results are presented in Fig. 5g. The adsorption capacity decreases in the second cycle noticeably and exhibits more or less constant in the subsequent cycles. After 10 cycles of adsorption, the adsorption capacity of Ch-5 sponge remains 85.1% compared with the first cycle, demonstrating the reliable recyclability of the chitosan sponge adsorbents.
Above all, the porous chitosan sponges as adsorbents offer great advantages in terms of their high controllable, recyclable, and reliable capability, which allows their hierarchal porous features, elasticity, and adsorption capacities to be tailored by varying the concentrations of the original chitosan solutions. Moreover, according to the aforementioned characterizations and analysis of chitosan sponges, we compare the adsorption performances of the Ch-5 sponge with other reported adsorbents for anionic dye adsorption, as summarized in Supplementary Table S6. These results can indicate that the chitosan sponge investigated in this study shows unprecedented outstanding capability over other RB adsorbents reported so far, not only in the adsorption capacity but also in the recyclability with reliability. Even comparing with other chitosan-based adsorbents, our chitosan sponge is also found to be comparable upon anionic dye adsorption (Supplementary Table S6).
Conclusions
In this study, we fabricated hierarchical porous chitosan sponges by a temperature-controlled freeze-casting method, with which the morphology and porosity of the chitosan sponges can be controlled by the freezing temperature and the concentrations of original chitosan solutions. The chitosan sponges with different densities are tested as the RB adsorbents, as a result, the Ch-5 sponge exhibits the best adsorption capability (601.5 mg/g), which is the highest among the reported capacities of RB adsorbents with the best of our knowledge. The study of the responsible adsorption mechanism reveals that the adsorption is shown to be ascribed to the electrostatic interaction between RB and chitosan molecules. In addition, the investigation of the adsorption isotherm and kinetics indicates that the adsorption process is better represented by the Freundlich model for isotherm and pseudo-second-order model for kinetics, which refers that the diffusion of RB molecules into the chitosan sponges is the decisive factor to the adsorption rate. Furthermore, the thermodynamic study confirms that the adsorption is a spontaneous and exothermic process. Finally, the recyclability and durability of the chitosan sponge adsorbents are examined and verified. In conclusion, the aforementioned excellent performances of chitosan sponge can show a great promise for the use as dye adsorbents for wastewater purifications. We believe that this work could significantly contribute to our society that is rapidly facing the environmental issues, by opening up a new solution with the utilization of environmentally friendly biomass materials.
Methods
Preparation of chitosan sponge adsorbents. Firstly, chitosan powders were added into a 0.3 M acetic acid solution at the chitosan concentrations of 3, 5, 8, 10, 20 and 40 mg/mL, respectively. Subsequently, the solution was heated at 50 °C and stirred with a magnetic bar until the chitosan powders were totally dissolved. Then a temperature-controlled freeze-casting method was used to fabricate the porous chitosan foams. The chitosan solutions were put into 10 mL vials and frozen at −20 °C, −80 °C and −196 °C, respectively. Freezing at −20 °C and −80 °C was then carried out with refrigerator and the freezing temperature of −196 °C was achieved with liquid nitrogen. Finally, the lyophilization was carried out in a freeze-dryer at −80 °C for 48 hours to obtain the chitosan foams. The resultant chitosan foams were weighed and the densities of the chitosan foams were determined.
After the chitosan foams were dried, a certain amount of acetic acid still existed in the chitosan foam. A 1 M NaOH solution was used to wash out the residual acetic acid and then the chitosan sponge was washed in a DI-water bath with repetitively squeezing the sponge until the pH of the DI-water bath becomes 7. The water in chitosan sponge was then squeezed out and the sponge was put into the rose bengal solution for adsorption test. All the adsorption processes were carried out under shaking with a rate of 150 rpm using a temperature-controlling shaking incubator. Chitosan foams and sponges prepared in the concentrations of 3, 5, 8, 10, 20 and 40 mg/mL are named as Ch-3, Ch-5, Ch-8, Ch-10, Ch-20 and Ch-40, respectively. The chitosan sponges used in the adsorption processes were derived from 2.5 mL of chitosan solutions.
Characterizations. The morphology of chitosan foams was characterized by a field emission scanning electron microscope (FE-SEM, JSM-7600F, JEOL). The FT-IR spectra of chitosan foams were investigated using an attenuated total reflection infrared spectroscopy (Tensor 27, Bruker) conducted at a resolution of 4 cm −1 with 50 scans. The Brunauer-Emmett-Teller (BET) specific surface areas of chitosan foams were measured from the N 2 adsorption isotherm at 77 K by using the Quadrasorb SI-MP instrument. The chitosan foams were outgassed at 373 K for 12 h before BET measurement. The ultraviolet visible (UV-vis) absorption spectra were recorded using a UV-visible spectrometer (V-650, JASCO, Japan). The compression test of chitosan sponge is undertaken with a uniaxial test machine (UTM, ElectroPuls E3000 Testing System, Instron) for 10,000 cycles at a frequency of 0.1 Hz. | 2023-02-17T14:40:20.688Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "c63679302daf6e9661c4e20c60a0922c9144dc43",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-18302-0.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c63679302daf6e9661c4e20c60a0922c9144dc43",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
42666855 | pes2o/s2orc | v3-fos-license | Stimulation of p53-mediated Transcriptional Activation by the p53-binding Proteins, 53BP1 and 53BP2*
p53 is a tumor suppressor protein that controls cell proliferation by regulating the expression of growth control genes. In a previous study, we identified two proteins, 53BP1 and 53BP2, that are able to bind to wild type but not to mutant p53 via the DNA-binding domain of p53. We isolated cDNAs expressing a full-length human 53BP1 clone, which predicts a protein of 1972 residues that can be detected in the H358 human lung carcinoma cell line. The 53BP1 and 53BP2 genes were mapped to chromosomes 15q15–21 and 1q41–42, respectively. Immunofluorescence studies showed three types of staining patterns for 53BP1 as follows: both cytoplasmic and nuclear, homogeneous nuclear, and a nuclear dot pattern. In contrast, 53BP2 localized exclusively to the cytoplasm, and this pattern did not change upon coexpression of wild type p53. Although our previous study revealed that p53 is not able to bind simultaneously to either 53BP1 or 53BP2 and to DNA carrying a consensus binding site, both 53BP1 and 53BP2 enhanced p53-mediated transcriptional activation and induced the expression of a p53-dependent protein, suggesting that these proteins might function in signal transduction pathways to promote p53 activity.
The p53 protein is the product of a tumor-suppressor gene (1,2), with mutations in this protein being the most common genetic change in human cancer (3,4). The observations that introduction of the wild type (wt) 1 p53 gene into cells leads to growth arrest (5)(6)(7)(8) or apoptosis (9,10) and that DNA damage leads to increases in the level of p53 (11,12) suggest that the protein acts at a checkpoint to regulate cell cycle arrest in the G 1 (13), G 2 /M (14), and G 0 phases (15). The cell cycle arrest in G 1 phase is mediated, at least in part, by the trans-activation function of p53 (16,17), which can induce the expression of p21 (WAF1/CIP1) (18 -21), a cyclin-dependent kinase inhibitor. However, the signal from p53 to the Gas1 gene product, which can result in a G 0 arrest, does not require the trans-activation function of p53 (15). Furthermore, in some cell types, a mutant p53 that lacks this function can induce apoptosis (22).
We have used the yeast two-hybrid system to identify two cellular proteins that bind to wt but not to mutant p53, designated p53-binding protein 1 and 2 (53BP1 and 53BP2) (23). Both 53BP1 and 53BP2 bind to the central domain of p53 which is required for site-specific DNA binding. Although neither 53BP1 and 53BP2 has extensive homology to other known proteins, recent sequence analysis revealed that the C terminus of 53BP1, which is sufficient for binding to p53, has homology both to the C terminus of BRCA1, a tumor suppressor specific for breast and ovarian cancer, and to Rad9, a yeast cell cycle checkpoint protein (24). This BRCT (BRCA1 C terminus) domain is found in other proteins involved in a checkpoint that responds to DNA damage (25, 26), suggesting that it may mediate protein-protein interactions involved in this process.
53BP2 has four ankyrin repeats and a single Src homology-3 domain in its C terminus (23). Structural analysis indicated that 53BP2 binds to wt p53 via its fourth ankyrin repeat and the Src homology-3 domain (27). 53BP2 also interacts with BCL2, an apoptosis inhibitor, indicating a possible role for 53BP2 in apoptosis (28). To analyze further these p53-binding proteins, we have cloned and sequenced a full-length 53BP1 cDNA. Both 53BP1 and 53BP2 have been expressed in mammalian cells, and their cellular localization has been determined. Assaying a transfected p53-dependent reporter gene as well as endogenous p21 expression, we show that both 53BP1 and 53BP2 enhance p53-mediated transcriptional activation. For 53BP2, this effect may be responsible for its partial suppression of oncogene-mediated cell transformation.
EXPERIMENTAL PROCEDURES
Isolation of 53BP1 cDNA-5Ј-Rapid amplification of cDNA ends (RACE) experiments (29,30) were carried out using the 5Ј-Ampli-FINDER RACE kit (CLONTECH) following the manufacturer's suggestions. Two antisense primers were used: A7, 5Ј-CTCGCTCGCCCAGG-TTGAACTGCAAAGACTCTTCACTC-3Ј, and A6, 5Ј-TGGCAACAGAC-TCAGCAACAGCAGTAGATCC-3Ј. These primers hybridize to sites on the sense strand of the reported partial 53BP1 cDNA clone, A70 (23), 186 and 249 bases downstream from the 5Ј end, respectively, of this clone (Fig. 1A). Human skeletal muscle mRNA (CLONTECH) was used to generate cDNA with primer A6. After the mRNA was hydrolyzed with 0.375 M NaOH, an anchor oligonucleotide was ligated to the 3Ј end of cDNA. PCR amplification with the anchor primer and primer A7 resulted in amplification of a 450-bp fragment (R1), containing an additional 260 bp of 53BP1 cDNA sequence (Fig. 1A). R1 was then used to screen a human skeletal muscle cDNA library (CLONTECH) according to standard procedures (31). Clone, P7, which was identified, was also used to screen this library, resulting in the isolation of clone P45 (Fig. 1A). A 1.8-kb fragment was amplified from clone P45 by PCR with the sense primer A29, 5Ј-AAGAAGATACTTCAGGCAATA-3Ј, and the antisense primer A30, 5Ј-CTGGAGTCCTCTGAAGTAGCT-3Ј (Fig. 1A). Finally, a Jurkat cell cDNA library (a gift of Dr. P. Enrietto, State University of New York, Stony Brook) was screened with this 1.8-kb fragment, and the clone P7a was isolated.
Sequencing-The 53BP1 cDNA sequence was obtained by sequencing overlapping cDNA inserts on both strands. All cDNA fragments were sequenced by the dideoxy chain termination method using the Sequenase Kit Version 2.0 (U. S. Biochemical Corp.).
Plasmids-The full-length 53BP1 cDNA was constructed in pBluescript II KS(ϩ) (Stratagene) from the inserts A70, P45, and P7a as follows. EcoRI digestion of clone P7a divided the insert DNA into two fragments, 1.25 and 0.27 kb. The 5Ј 1.25-kb fragment was ligated into the EcoRI site of pBluescript II KS(ϩ) to generate pL6-2. pL6-2 was then digested at the NdeI site in the 53BP1 cDNA insert and at the BamHI site in the polylinker, and the resulting 529-bp NdeI-BamHI fragment was deleted from pL6-2 and replaced by a 2.6-kb NdeI-BamHI 53BP1 fragment from clone P45 to generate pL8Ј-2. The 3.6-kb BglII 53BP1 insert from the plasmid pSE1107A70 (23) was ligated into the BamHI site of pBluescript II KS(ϩ) to generate pBSA70. pBSA70 was digested at the internal BamHI site in the 53BP1 cDNA and at the SpeI site in the polylinker, and the resulting 3.3-kb BamHI-SpeI fragment was ligated between the BamHI site in 53BP1 and the SpeI site in the polylinker of pL8Ј-2 to generate pBS53BP16.6. A 481-bp fragment beginning at the first ATG of the 53BP1 open reading frame was amplified by PCR using the oligonucleotides A43 containing restriction sites for EcoRI and BssHII 5Ј to the ATG, 5Ј-GGGGAGAATTCGGGCGCGCAT-GGACCCTACTGGAAGT-3Ј, and A32, 5Ј-GGGCTCGAGCAGCACCA-AGGGAATGTGTA-3Ј (Fig. 1A). The PCR fragment was digested with EcoRI and XhoI and was inserted between the EcoRI site and the XhoI site in the polylinker of pCITE2C (Novagen) to generate pL17-1. The 428-bp XbaI fragment from pL17-1 was replaced by a 6.3-kb XbaI fragment between the XbaI site in 53BP1 and the XbaI site in the polylinker of pBS53BP16.6 to generate pCITE53BP16.4. The fulllength 53BP1 cDNA fragment was then obtained by digesting pCITE53BP16.4 at the BssHII site in the oligonucleotide A43 and at the XhoI site in the polylinker of pCITE53BP16.4. Both ends of this fragment were filled-in with the Klenow fragment of DNA polymerase I, and the fragment was ligated into the filled-in BamHI site of pCMH6K (32) (a gift of Dr. P. Tegtmeyer, State University of New York, Stony Brook) to generate pCMH6K53BP1.
The protein expressed from pCMH6K contains an N-terminal tag that includes an influenza virus hemagglutinin (HA) epitope and a six-histidine tag. pCMH6K53BP1-N and pCMH6K53BP1-C were generated by ligating a 3.3-kb BamHI fragment of pCMH6K53BP1 and a 3.1-kb BamHI-BglII fragment of pSE1107A70, respectively, into the BamHI site of pCMH6K. pCMH6K53BP1-N and pCMH6K53BP1-C express the N-terminal 1052 and C-terminal 921 residues of 53BP1, respectively. The plasmid p14b (a gift of Dr. L. Naumovski, Stanford) contains the full-length 53BP2 gene in pBluescript SK (Stratagene). pCMH6K53BP2, a plasmid expressing the full-length 53BP2, was generated by inserting a BamHI fragment from p14b into the BamHI site of pCMH6K. The mammalian expression plasmid for wt mouse p53 (pCMH6Kp53) is described elsewhere (32). Plasmids pSP72-ras and pBS-E1A, used for the transformation suppression assay, express activated Ras and the adenovirus E1A protein, respectively (33). The chloramphenicol acetyltransferase (CAT) reporter plasmid, pCAB-PG26TATA, contains 26 direct repeats of a consensus p53-binding site upstream of an E1B TATA promoter and the CAT gene (32). pLB contains the same promoter with a single copy of the p53 recognition sequence controlling the luciferase gene (34).
Isolation of Human Genomic DNA for 53BP1 and 53BP2-A 1.3-kb PvuII fragment of the 53BP1 cDNA and a 1.3-kb HindIII-XbaI fragment of the 53BP2 cDNA were used to screen 1.2 million plaques of a human genomic library (a gift of Dr. P. Enrietto, State University of New York, Stony Brook) according to standard procedures (31). Inserts of positive phage clones were subcloned into pBluescript II KS(ϩ), and a panel of primers that hybridized on the 53BP1 or the 53BP2 cDNA was used to obtain sequences of the corresponding genomic clones. Clones that were confirmed to contain 53BP1 or 53BP2 genomic DNA were used for fluorescence in situ hybridization.
Fluorescence in Situ Hybridization-Plasmid DNAs were biotinylated by nick translation, prehybridized in the presence of human Cot1DNA, and hybridized (at 10 ng/l) to metaphase spreads of a normal male following procedures described in detail elsewhere (35). After hybridization and washing, the hybridization sites were labeled with fluorescein-conjugated avidin, and the chromosomes, which had previously been released from an early S-methotrexate block in the presence of bromodeoxyuridine, were counterstained with 4,6-diamidino-2-phenylindole to produce a QFH-like banding pattern. Digital image processing was performed as described elsewhere (36). The locations of hybridization signals were analyzed in 15-20 well spread, well banded metaphases for each plasmid.
Immunofluorescence Staining-COS-1 and H358 cells were seeded on coverslips and transfected with expression vectors expressing HAtagged 53BP1, 53BP2, or p53. At 36 h post-transfection, cells were fixed in 3.3% formamide and permeabilized with cold methanol acetone (50: 50) for 3 min at room temperature. After a wash in PBS (31), cells were incubated with the anti-HA monoclonal antibody, 12CA5 (Boehringer Mannheim) at 2 g/ml for 1 h at room temperature, followed by incubation with 2.4 g/ml fluorescein isothiocyanate-conjugated goat antimouse antibodies (BioSource International) at room temperature for 1 h in the dark. The coverslips were washed and mounted on slides, and the cells were examined and photographed with Kodak Tmax 400 film.
Western Blot Analysis-H358, MCF7, and COS-1 cells that had been transfected with various expression plasmids were washed once with PBS, scraped, and pelleted by centrifugation. H358 and COS-1 cells were lysed with SDS-sample buffer, boiled for 3 min, and applied to a 7% SDS-polyacrylamide gel. MCF7 cells were lysed with PBS supplemented with 2% SDS and boiled for 3 min. Chromosomal DNA was sheared by passage repeatedly through a 26-gauge needle. The supernatant was obtained by centrifugation at 14,000 rpm for 15 min at 4°C. Protein concentration was determined by the BCA Protein Assay (Pierce), and 400 g of protein was applied to a 12% SDS-polyacrylamide gel. After electrophoresis, the separated proteins were electrophoretically transferred to a nitrocellulose membrane. Blots were probed with 12CA5 (1 g/ml), rabbit anti-53BP1 polyclonal antibodies (5 g/ml), or a combination of mouse anti-p53 monoclonal antibody pAb421 (Calbiochem) (10 g/ml) and mouse anti-p21 monoclonal antibody F5 (Santa Cruz Biotechnology) (1 g/ml), followed by horseradish peroxidase-conjugated goat anti-mouse or anti-rabbit IgG. Proteins were detected by the enhanced chemiluminescence (ECL) method (Amersham Pharmacia Biotech).
␥-Irradiation-MCF cells were irradiated in their culture dishes under subconfluent conditions at room temperature with a 60 Co-␥ irradiator to deliver a dose of 8 Gy. Immediately after irradiation, the cells were cultured at 37°C and then collected 4 h later.
Nuclear/Cytoplasmic Fractionation-Two transiently transfected 10-cm dishes of H358 cells were pooled 40 h after transfection. Cells were lysed, and both nuclear and cytoplasmic fractions were recovered following the protocol of Gashler et al. (38). The nuclear fraction was lysed in 100 l of lysis buffer (100 mM Tris-HCl, pH 9.0, 150 mM NaCl, 1% Nonidet P-40, 0.2 M phenylmethylsulfonyl fluoride, 5 g/ml pepstatin, 2 g/ml leupeptin). Both cytoplasmic and nuclear fractions were incubated with Ni-NTA beads that were equilibrated in the same lysis buffer. After incubation at 4°C for 2 h, the beads were washed and boiled in SDS-sample buffer. Supernatants were fractionated by SDSpolyacrylamide gel electrophoresis and analyzed by Western blotting with the 12CA5 monoclonal antibody.
Transformation Assay-The transformation assay was performed as described by Reed et al. (33). REFs were prepared by passaging Fisher rat embryos three times. For transfection, a mixture of DNA, 2.5 g of pSP72-RAS, and 2.5 g of pBS-E1A, 50 l of Lipofectin (Boehringer Mannheim), and 50 l of 2ϫ HeBS buffer (150 mM NaCl, 20 mM HEPES, pH 7.4) was added to a 10-cm dish containing 3 ϫ 10 5 cells/ dish. For the transformation suppression assay, either 5 g of pCMH6Kp53 or 22.5 g of pCMH6K53BP2 or both were included. Transfected REFs were incubated and refed 20 h post-transfection and every 3 days. At 14 days post-transfection, cells were washed with PBS, fixed with methanol, and stained for 2 h with the Coomassie Brilliant Blue solution for protein detection. Plates were rinsed with water, and foci were counted. Duplicate samples were assayed each time, and each set of assays was repeated at least three times.
Reporter Assay-For CAT assays, H358 cells were transfected with pCAB-PG26TATA (1 g), pCMH6Kp53 (1 g), and varying amounts of pCMH6K53BP1 or pCMH6K53BP2. pCMH6K DNA was used to adjust the total DNA to 10 g. Cells were washed once with PBS 40 h after transfection, scraped, and pelleted by centrifugation. Cell pellets were then resuspended in 150 l of 0.25 M Tris-HCl, pH 8.0, and lysed by 6 rounds of freezing and thawing. The supernatant was obtained by centrifugation at 14,000 rpm at 4°C for 10 min, and its protein concentration was determined by the BCA Protein Assay (Pierce). 10 g of protein were incubated with n-butyryl-CoA and [ 14 C]chloramphenicol at 37°C for 1 h. Butyrylated chloramphenicol products were extracted by a fixed volume of mixed xylenes (Aldrich), and the radioactivity of the butyrylated chloramphenicol was determined by a liquid scintillation counter (Packard). Each assay was repeated three times.
RESULTS
Isolation of cDNAs Encoding 53BP1-Northern blotting experiments indicated that 53BP1 is expressed in all tissues assayed with two transcripts of 11 and 6.6 kb (23). As the initial 53BP1 cDNA clones contained only 3.6 kb of sequence, we sought to identify a full-length 53BP1 cDNA. By using the 5Ј-RACE method (29,30) with human skeletal muscle mRNA, we cloned a 450-bp fragment (R1), containing 260 bp of new sequence (Fig. 1A). The R1 fragment was used to screen a cDNA library, followed by subsequent screenings using newly obtained sequences of two different cDNA libraries (see "Experimental Procedures") to result in the isolation of cDNAs that together spanned 6.6 kb (Fig. 1A). The sequence of the assembled 53BP1 cDNA is available, and the predicted open reading frame is shown in Fig. 1B. The likely translation initiation codon of 53BP1 is preceded by an in-frame stop codon located 18 bp upstream. Although the nucleotide sequence 5Ј of the first ATG (GAGCAGATG) does not resemble the consensus initiation sequence (39), as more than 90% of translation in vertebrates starts from the first methionine (39), we designate this ATG as the putative translation start site. The predicted open reading frame of the 6.6-kb 53BP1 cDNA encompasses 1972 amino acids, a protein with a molecular mass of 217 kDa.
Recent sequence analysis revealed two BRCT domains in the C-terminal 247 residues of 53BP1. However, analysis of the entire protein by BLAST (40) did not show any extensive homology to other known proteins.
Chromosomal Localization of the 53BP1 and 53BP2 Genes-The interactions between 53BP1 and 53BP2 with wt but not mutant p53 raise the possibility that 53BP1 and 53BP2 are involved in some aspect of carcinogenesis in humans. Furthermore, it is possible that 53BP1 is a tumor suppressor based on its sequence homology to BRCA1. Therefore, it was of interest to map the chromosomal locations of these genes to determine whether they are located near cytogenetic locations known or suspected to harbor oncogenes or tumor suppressor genes. To obtain probes long enough for in situ hybridization, we screened a human genomic library with probes derived from the cDNAs of 53BP1 and 53BP2, and we obtained 5 phage clones for 53BP1 and 9 for 53BP2. Two of these clones, clone A2-3 for 53BP1 and clone B1-2 for 53BP2, were confirmed to contain the appropriate DNA and were used for the chromosome localization experiments.
Expression of Full-length 53BP1 and 53BP2 and Detection of Endogenous 53BP1 in Mammalian Cells-To identify the protein expressed by the full-length 53BP1 cDNA we had assembled, we transfected H358 cells with a plasmid expressing HAtagged 53BP1 (pCMH6K53BP1). In parallel, we also transfected expression plasmids for HA-tagged 53BP2 (pCMH6K53BP2) and p53 (pCMH6Kp53). Cells lysates were subjected to Western blot analysis with the 12CA5 monoclonal antibody which recognizes the HA tag. In cells transfected with pCMH6K53BP1, a protein larger than the 220-kDa marker was detected (Fig. 3A, lane 2), indicating that the 53BP1 protein migrates significantly slower than its predicted size. Proteins with apparent sizes of 150 and 53 kDa were detected in cells transfected with the 53BP2 or p53 plasmids, respectively (Fig. 3A, lanes 3 and 4). A construct that expresses native non-tagged 53BP2 was reported to express a protein of 150 kDa, as determined by Western blot using rabbit anti-53BP2 antibodies (28). A background of proteins was detected by 12CA5 in all cells, including those transfected with the expression plasmid lacking an insert (Fig. 3A, lane 1).
We also examined the time course of expression of the transfected 53BP1 and 53BP2 in H358 cells so that cells with peak level expression of these proteins could be used for the reporter assays (see below). Both 53BP1 and 53BP2 were expressed at 30 h post-transfection, and this expression continued at least until 40 h post-transfection (Fig. 3B). Based on these time courses, we used cells 40 h after transfection in our transcriptional activation assays.
In order to detect the endogenous 53BP1 protein in mammalian cells, we raised polyclonal antibodies against the C-terminal 270 residues of this protein (41). These antibodies detected the HA-tagged full-length (Fig. 3C, lane 2) and HA-tagged C-terminal half (Fig. 3C, lane 4) but not the HA-tagged Nterminal half (Fig. 3C, lane 3) of 53BP1 expressed in COS-1 cells. The expression of each HA-tagged protein in COS-1 cells was confirmed by Western blot analysis of the same blot with 12CA5 (Fig. 3C, lanes 7-9). Cell extracts from the human fibroblast cell line, WI38, and from the human cancer cell lines Saos-2, H358, and HepG2 were subjected to Western blot analysis with anti-53BP1 polyclonal antibodies. We detected a band that has the same molecular weight as the protein produced by the full-length 53BP1-expressing plasmid only in H358 cells (Fig. 3C, lane 1), indicating that the assembled 53BP1 cDNA encodes the endogenous 53BP1 protein.
Modulation of the Transcriptional Activation Function of p53 by 53BP1 and 53BP2-Previously, we showed that both 53BP1 and 53BP2 bind to the DNA-binding domain of p53 and that p53 bound to 53BP1 or 53BP2 was not able to bind simultaneously to DNA carrying a consensus p53-binding site (23). These data suggested that the interactions of 53BP1 and 53BP2 with p53 might interfere with the activity of p53 as a sequence-specific transcriptional activator, which is required for its function in tumor suppression. To understand the biological consequences of these protein-protein interactions in cell culture, we assayed the transcriptional activation function of p53 in the presence of overexpressed p53-binding proteins. A CAT reporter gene under the control of p53-binding sites (pCAB-PG26TATA) was transfected into H358 cells together with an expression plasmid for wt p53 (pCMH6Kp53) or a combination of expression plasmids for p53 and 53BP1 or 53BP2. p53-induced trans-activation of the CAT reporter gene increased approximately 25-fold in cells transfected with the reporter plasmid and the p53 plasmid (ϳ2100 cpm) compared with cells transfected with the reporter alone (ϳ80 cpm) (Fig. 4A). When increasing amounts of the 53BP1 plasmid (pCMH6K53BP1) were cotransfected in combination with the p53 plasmid and the reporter, CAT expression increased further in a dose-dependent manner, up to an approximately 10-fold stimulation by 8 g of the 53BP1 plasmid (ϳ23,000 cpm) (Fig. 4A). 53BP2 also stimulated p53-mediated transcriptional activation in a dose-dependent manner. CAT activity was enhanced up to 3.5-fold in cells transfected with the 53BP2, p53, and reporter plasmids compared with cells transfected with only the p53 and reporter plasmids (Fig. 4B). We obtained similar results with a luciferase reporter gene under the control of an E1B promoter containing one copy of the consensus p53-binding site (34) (data not shown).
the trans-activation of an endogenous target of p53 activity, the p21 gene. MCF7 human breast carcinoma cells that have wt p53 were transfected with plasmids containing either the 53BP1, 53BP2, or mouse p53 gene and harvested 24 h later. As a control for p21 induction, MCF21 cells were treated with 8 Gy ␥-irradiation and lysed 4 h later. Western blot analysis was performed with a combination of monoclonal antibodies pAb421, which recognizes mouse and human p53, and F5, which recognizes p21. The antibody mixture detected p21 protein in the irradiated (Fig. 4C, lane 2) but not untreated cells (Fig. 4C, lane 1). Although p53 was detected in cells harvested 2 h post-irradiation (data not shown), it had already disappeared at the 4 h time point when p21 induction was apparent. While cells transfected with the vector alone contained only the basal level of p21 (Fig. 4C, lane 3), both the 53BP1 and 53BP2 plasmids behaved similarly to the p53 plasmid in inducing the expression of the p21 protein (Fig. 4C, lanes 4 -6). In contrast to the high level of p53 protein observed upon transfection of the p53 plasmid, neither the 53BP1 nor 53BP2 plasmid led to detectable p53 protein (Fig. 4C, lanes 4 -6), indicating that both 53BP1 and 53BP2 induce endogenous p21 expression without an apparent change in the level of p53 protein. These enhancements of the trans-activation function of p53 may reflect a direct role for 53BP1 or 53BP2 in the transcription process or their ability to render the p53 protein more competent for transcription.
53BP2 Partially Suppresses Cell Transformation by Oncogenes-Since 53BP1 and 53BP2 stimulate at least one activity of p53 (trans-activation), they may be capable of stimulating its overall tumor suppression function. Wild type p53 can reduce the efficiency of cooperating oncogenes such as ras and E1A to transform primary REFs in culture (42,43), whereas oncogenic mutant p53 cooperates with these oncogenes to transform primary cells (44,45). We tested the possibility that overexpressed 53BP2 could enhance transformation suppression either alone or in the presence of excess p53. REFs, which express a low level of wt p53, were transfected with expression plasmids for Ras and E1A. These cells gave rise to transformed foci 14 days after transfection (Table I). As expected, overexpressed wt murine p53 suppressed this transformation, whereas cotransfection of 53BP2 with the oncogenes and wt p53 further suppressed transformation only slightly in one experiment (Table I). However, in three separate experiments, cotransfection of 53BP2 without p53 resulted in an approximately 30% decrease in the number of foci induced by the oncogenes (Table I). These data indicate that 53BP2 overexpression alone partially suppresses cellular transformation by oncogenes. This effect may be achieved through the tumor suppressor function of p53. By interacting with the endogenous wt p53 in REFs, 53BP2 may stimulate its trans-activation function to activate growth control genes, which in turn suppress transformation.
Subcellular Localization of 53BP1 and 53BP2-Although the two p53-binding proteins enhance p53-mediated transcriptional activation, they are not likely to be present with p53 in the transcriptional complex (23). 53BP1 and 53BP2 could interact with p53 in the nucleus but not be involved in sequencespecific DNA binding by p53. Alternatively, they could interact with p53 in the cytoplasm in a manner that results in a more transcriptionally active p53. We examined the subcellular localization of both 53BP1 and 53BP2 by transfecting HA-tagged proteins into COS-1 cells and using the 12CA5 antibody for immunofluorescence staining. In cells transfected with the expression vector lacking an insert (pCMH6K), no significant background was detected (data not shown). HA-tagged wt p53 was found in the cytoplasm of some cells and in the nucleus of others ( Fig. 5E and data not shown), presumably dependent on 5. Subcellular localization of p53, 53BP1 and 53BP2. COS-1 cells were transfected with pCMH6K, pCMH6K53BP1 (A-C), pCMH6K53BP2 (D), and pCMH6Kp53 (E). At 36 h post-transfection, cells were fixed, permeabilized, and probed with 12CA5. Anti-mouse antibodies conjugated to fluorescein isothiocyanate were used for detection. F, cellular fractionation assay. H358 cells were transfected with 22.5 g of pCMH6Kp53, pCMH6K53BP2, or both pCMH6K and pCMH6K53BP2 as indicated. Cells were harvested at 40 h post-transfection, and cytosol and nuclear fractions were isolated as described under "Experimental Procedures." Each fraction was incubated with Ni-NTA beads to concentrate the histidine-tagged 53BP2 and p53, and the fractions were subjected to Western blot analysis with 12CA5. the stage of the cell cycle at the time of staining. 53BP1 showed more complex staining patterns, being present in both cytoplasm and nucleus in some cells (Fig. 5A) and only in the nucleus in others (Fig. 5, B and C). In addition, there are two nuclear patterns for 53BP1, one homogeneous staining (Fig. 5B) and the other dot staining (Fig. 5C). Possibly the cellular localization of 53BP1 changes during the cell cycle or under different conditions. When H358 cells, which lack the p53 gene, were transfected with the 53BP1 plasmid, 53BP1 showed the same three types of staining pattern (data not shown), suggesting that the translocation of 53BP1 between the cytoplasm and nucleus does not require its binding to p53. In contrast to p53 and 53BP1, 53BP2 was detected only in the cytoplasm of both COS-1 (Fig. 5D) and H358 (data not shown) cells.
We determined whether the subcellular localization of p53 and 53BP2 changed as a result of the interaction between these two proteins using a cellular fractionation assay. We transiently transfected H358 cells with p53 and 53BP2 plasmids that express proteins containing an N-terminal tag of the HA epitope and six histidines. The cytosol and nucleus of the transfected cells were separated, and histidine-tagged p53 and/or 53BP2 proteins were enriched with Ni-NTA beads. The proteins bound to Ni-NTA beads were assayed in Western blot with 12CA5. p53 was detected in both nuclear and cytosol fractions (Fig. 5F, lanes 1 and 2); a protein comigrating with p53 was nonspecifically recognized by 12CA5 (Fig. 5F, lanes 3 and 4). In agreement with the immunofluorescence data, the HA-tagged 53BP2 protein was found only in the cytosol fraction (Fig. 5F, lanes 3 and 4). In the cells expressing both p53 and 53BP2 plasmids, the localization of these proteins did not change (Fig. 5F, lanes 5 and 6). DISCUSSION We have further characterized the 53BP1 and 53BP2 proteins, which bind to the DNA-binding domain of wt p53. A 6.6-kb cDNA for 53BP1 was cloned and sequenced, predicting a protein of 1972 residues. Polyclonal antibodies raised against a C-terminal domain of this protein detected a protein in a human cell line of similar molecular weight to that expressed by the cloned cDNA, indicating that we isolated a cDNA that encodes at least one form of the authentic 53BP1 protein.
Previously, the 53BP2 protein had been characterized as the BCL2-binding protein BBP of 1005 residues (28). We mapped the chromosomal location of the 53BP1 and 53BP2 genes to 15q15-21 and 1q41-42, respectively, which do not correspond to known regions harboring tumor suppressor genes or oncogenes. The 53BP1 protein shows a complex pattern of cellular localization, present either in both the nucleus and cytoplasm or in the nucleus only as homogeneous or dot staining. By contrast, the 53BP2 protein is only present in the cytoplasm. The two p53-binding proteins are able to enhance the transcriptional activation function of p53, suggesting that they may function in a signaling pathway to promote p53 activity.
The 53BP1 protein shows no significant homology to other proteins apart from two copies of a region that is similar to the C-terminal domain of the BRCA1 protein (BRCT domain) (24). The BRCT domain is essential for BRCA1 function, as a deletion of this domain results in loss of tumor suppression by this protein (46). Recently, the family of proteins with the BRCT domain has expanded to nearly 40 members (25, 26), including the yeast checkpoint protein Rad9 and transcription factor Rap1 and vertebrate terminal deoxynucleotidyltransferases. Although these proteins have diverse functions, their common involvement in cell cycle checkpoints and DNA damage response suggests a similar possible role for 53BP1. The presence of BRCT domains in 53BP1 and BRCA1 also raises the possibility that BRCA1 interacts with p53. Additionally, both p53 and BRCA1 bind to Rad51 (47,48), a protein with strand exchange activity that is involved in recombinational DNA repair. These interactions could allow the formation of a BRCA1, Rad51, and p53 complex or, alternatively, a series of sequential interactions, perhaps in response to DNA damage.
Another striking parallel between the 53BP1 protein and the BRCA1 and Rad51 proteins is a cellular localization typified by nuclear dot staining (47). In addition, BRCA1 localization undergoes dynamic change after DNA damage, with the dots becoming dispersed and the protein relocalizing to DNA replicating structures where it may play a role in repair of damaged DNA (49). Besides staining as nuclear dots, 53BP1 also appeared cytoplasmic and homogeneously nuclear, which may correlate with different cell cycle stages or responses to various stresses such as DNA damage.
Both 53BP1 and 53BP2 enhanced p53-mediated transcriptional activation, and this activity may account for the ability of 53BP2 to suppress transformation of fibroblasts by activated oncogenes. Another study has shown that an interferon-induced protein, p202, inhibits trans-activation by p53 and that 53BP1 can bind p202 and relieve this inhibition (41). Thus, a possible mechanism for the 53BP1 stimulatory activity is displacement of an inhibitor bound to p53, resulting in p53 free to participate in the transcription process. The failure of overexpressed 53BP1, as well as 53BP2, to increase the level of p53 is consistent with this idea. 53BP2 appears to localize exclusively to the cytoplasm, as previously noted by others using a different expression system and antibody (28). A possible mechanism for the stimulatory activity of 53BP2 is to promote the nuclear transport of p53. In some tumors, p53 is sequestered in the cytosol (50,51) where it may be bound to an anchoring protein (52), and 53BP2 could release p53 from such an inhibitor. Finally, these proteins might play a role in converting a latent form of p53 into an active one. Recent reports indicated possible regulation of the DNA binding activity of p53 by its redox state (53) or by allosteric change (54). For example, Ref-1, a redox/ repair protein, is a potent activator of latent p53 (53), and p300, a transcriptional coactivator, activates the sequence-specific DNA binding activity of p53 by acetylating its C-terminal domain (55).
The signaling pathway from damaged DNA to p53 activation is still poorly defined. The ability of 53BP1 and 53BP2 to bind to the conformationally sensitive central domain of p53 and to stimulate p53-mediated transcriptional activation suggests that one or both of these proteins play roles in this pathway. Continued analysis of these p53-binding proteins may thus further clarify the key processes by which p53 is regulated. | 2018-04-03T05:23:55.905Z | 1998-10-02T00:00:00.000 | {
"year": 1998,
"sha1": "ce80a85aa3469f57e825620cd9643c694e939e55",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/273/40/26061.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "4eacfdbe13d9ae8b4b9c787c7132836bda00fc51",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15646165 | pes2o/s2orc | v3-fos-license | Better Online Deterministic Packet Routing on Grids
We consider the following fundamental routing problem. An adversary inputs packets arbitrarily at sources, each packet with an arbitrary destination. Traffic is constrained by link capacities and buffer sizes, and packets may be dropped at any time. The goal of the routing algorithm is to maximize throughput, i.e., route as many packets as possible to their destination. Our main result is an $O\left(\log n\right)$-competitive deterministic algorithm for an $n$-node line network (i.e., $1$-dimensional grid), requiring only that buffers can store at least $5$ packets, and that links can deliver at least $5$ packets per step. We note that $O(\log n)$ is the best ratio known, even for randomized algorithms, even when allowed large buffers and wide links. The best previous deterministic algorithm for this problem with constant-size buffers and constant-capacity links was $O(\log^5 n)$-competitive. Our algorithm works like admission-control algorithms in the sense that if a packet is not dropped immediately upon arrival, then it is"accepted"and guaranteed to be delivered. We also show how to extend our algorithm to a polylog-competitive algorithm for any constant-dimension grid.
Introduction
The core function of any packet-switching network is to route packets from their origins to their destinations, but many fundamental questions about packet routing are far from being well understood. In this paper we consider one of these questions, namely the competitive throughput network model, introduced by [3].
Briefly, the model is as follows. The network consists of n nodes (switches) connected by point-to-point unidirectional communication links, and we are given two positive integer parameters, B and c, called the buffer size and link capacity, respectively. Executions proceed as follows. Packets are input by an adversary over time. Each packet is input at its source node with a given destination node. At each step, each packet is either forwarded over an incident link, stored in its current location buffer, or dropped (i.e., removed from the system). Storing and forwarding are subject to the constraints that a buffer can store at most B packets simultaneously and that a link can carry at most c packets in a time step. These constraints can be met for all input sequences since the model allows for packets to be dropped at any time. The routing algorithm selects, at each step, which packets are forwarded, which are stored, and which are dropped. The goal of the algorithm is to maximize the number of packets delivered at their destination. Since we consider Ref Table 1: Some results for centralized online algorithms for packet routing. The networks are uni-directional grids. In the special case of B = 0 and c ≥ 3, the algorithm in [12,13] is O(log d+2 n)-competitive.
on-line algorithms, we evaluate algorithms by their competitive ratio, i.e., the minimum ratio, over all finite packet input sequences, between the number of packets delivered by the on-line algorithm and the maximum number of packets that can be delivered by any (off-line) constraint-respecting schedule. It is nearly an embarrassment to find that very little is known about this problem, even in the simplest case, where the network topology is the trivial n-node unidirectional line. In this work we provide an improved deterministic algorithm for networks whose topology is a d-dimensional grid.
Our Results
Our main result is a centralized deterministic O(log n)-competitive packet routing algorithm for unidirectional lines with n nodes. The algorithm requires buffer size B ≥ 5 and link capacity c ≥ 5. In addition, both B and c must be O(log n). We show how to extend the algorithm to d dimensions, where the competitive ratio is 2 O(d) · log d n, assuming that B, c ≥ 2 d+1 + 1. Our algorithm is nonpreemptive, namely, packets are dropped only at the time of their arrival (similarly to admission control policies, which "accept" or "reject" requests upon arrival). By contrast, preemptive algorithms may drop packets at any time, i.e., packets are not guaranteed to reach the destination even after they start traversing the network. The best previous deterministic algorithm [12,13] is preemptive. Table 1 provides a summary of our results and a comparison of our algorithm with some previous results along various aspects.
Overview of Techniques
We first explain our approach for the 1-dimensional case.
The high-level idea is to reduce packet routing in a graph G to circuit switching (or path packing, see [15,6]) in the space-time graph G × T , where T denotes the set of time steps. This so-called spacetime transformation has been used extensively in this context [5,2,7,16,11,12,13]. To be effective, the space-time transformation requires an upper bound on path lengths which does not result in losing too much throughput. We use the bound of [12,13] (which extends [7]), that ensures that the loss is at most some constant fraction. After the transformation, we have an instance of online path packing [6,9]. It is known that if the capacities are large enough, i.e., log n ≤ B, c ≤ n O(1) , then online path packing is solvable with logarithmic competitive ratio [6,11,12,13]. We overcome the difficulty that B and c are O(log n) by employing a technique called tiling, i.e., partitioning the network nodes into large enough subgrids. Tiling has been used in the past [15,8,11,12,13]; in our algorithm, we use 4 distinct tilings, and work on each of them independently. Each tiling induces a new graph called the sketch graph whose nodes are the tiles. The capacity of the edges in the space-time graph between adjacent tiles is O(log n) to allow for applying O(log n)-competitive path packing algorithms. Path packing algorithms over the sketch graph produce sketch paths for accepted packets. Thus, after these preliminary simplifications, we arrive at the sub-task of detailed routing, in which coarse sketch paths must be expanded to paths in the original space-time graph.
Fractional Optimum. Key to our application of the path-packing algorithm is the analysis of Buchbinder and Naor [9,10], which bounds the performance of the algorithm w.r.t. the fractional optimum, which may deliver packet fractions. This result allows us to scale buffer sizes and link capacities up and down while keeping the competitive ratio under control.
Combining algorithms. Another central component in the analysis of our algorithm is the combination technique introduced by Kleinberg and Tardos [15]. Loosely speaking, this technique deals with an admission control algorithm that is the conjunction of two competitive algorithms, the state of which depends only on the requests accepted by both. The technique enables one to prove that the competitive ratio of the combined algorithm is the sum (rather than the product) of the competitive ratios of the constituent algorithms.
Previous Work
Algorithms for dynamic routing on networks with bounded buffers have been studied extensively both in theory and in practice (see, e.g., [1] and references therein). Let us first focus on centralized algorithms for d-dimensional grids. We note that while centralized algorithms for packet routing were always relevant for switch scheduling, recently the idea of centralization of network functions, including route computation, gained substantial additional traction due to the concept of software-defined networks (SDN). See, e.g., [14]. The special case of 2-dimensional grids (with or without buffers) is of particular interest as this is the underlying topology of crossbars in switches [17].
Online Algorithms for Unidirectional Lines. There is a series of papers on uni-directional line networks, starting with [3], which introduced the model. In [3], a lower bound of Ω( √ n) was proved for the greedy algorithm on unidirectional lines if the buffer size B ≥ 2. For the case B = 1 (in a slightly different model), an Ω(n) lower bound for any deterministic algorithm was proved by [7,4]. Both [7] and [4] developed, among other things, online randomized centralized algorithms for uni-directional lines with B ≥ 2. In [4] an O(log 3 n)-competitive randomized centralized algorithm was presented for B ≥ 2. In addition, it is proved in [4] that nearest-to-go isÕ( √ n)-competitive for B ≥ 2. For the case B = 1, [4] presented a randomizedÕ( √ n)-competitive distributed algorithm. (This algorithm also applies to rooted trees when all packet are destined at the root.) In [7], an O(log 2 n)-competitive randomized algorithm was presented for the case B ≥ 2. (This algorithm also applies to rings and trees.) In [11], an O(log n)-competitive, nonpreemptive, randomized algorithm was presented. The algorithm in [11] is applicable to a wide range of buffer sizes and link capacities, including the case B = c = 1. In [12], an O(log 5 n)-competitive deterministic algorithm was presented. The algorithm in [12] is applicable for B, c ∈ [3, log n].
Online Algorithms for Unidirectional Grids. Angelov et al. [4] showed that the competitive ratio of greedy algorithms in unidirectional 2-dimensional grids is Ω( √ n) and that nearest-to-go policy achieves a competitive ratio ofΘ(n 2/3 ). In [12], an O(log 6 n)-competitive deterministic algorithm was presented.An extension of this algorithm to d-dimensional unidirectional grids, with competitive ratio O(log d+4 n), is presented in [12].
For more related results, refer to [13].
Organization. The problem is formalized in Section 2. In Section 3 we explain the reduction of packetrouting to path packing, and the construction of sketch graph. In Section 4 we describe the overall algorithm, and in Section 5 we analyze it. Sections 3-5 deal with the 1-dimensional grid (line); extension to the d dimensional case is also discussed in Section 6.
Model and Problem Statement
We consider the standard model of synchronous store-and-forward packet routing networks [3,4,7]. The network is modeled by a directed graph G = (V, E), and by two integer parameter B, c > 0. For the most part of this paper, we consider a network whose topology is a directed line of n vertices, i.e., Execution proceeds in discrete steps. In step t, an arbitrary set of requests is input to the algorithm. Each request represents a packet, and we will use both terms interchangeably. A request is specified by a 3-tuple r i = (a i , b i , t i ), where a i ∈ V is the source node of the packet, b i ∈ V is its destination node, and t i ∈ N is the time step in which the request is input.
In each time step, the routing algorithm removes packets that reached their destination, and decides, for each packet currently in the network, including packets input in the current step, whether (i) to drop the packet, or (ii) to send it over an incident link, or (iii) to store it in the current node. The selection of the action is done subject to the following considerations.
• If a packet is dropped, it is lost forever.
• A packet sent from node u over link (u, v) at time t will be located at node v at time t + 1. The link capacity constraint asserts that at any step, at most c packets can be sent over each link. • A packet stored at node u at time t will be located at node u at time t + 1. The buffer capacity constraint asserts that at any step, at most B packets can be stored in each buffer. We use the following terminology. A packet r i = (a i , b i , t i ) is said to be input (or arrive) at a i at time t i . We say that r i is rejected if it is dropped at time t i , otherwise it is accepted. (Our algorithm will guarantee that all accepted packets arrive at their destination.) Given a set of requests, the throughput of a packet routing algorithm is the number of packets that are delivered to their destination. We consider the problem of maximizing the throughput of an online centralized deterministic packet-routing algorithm. By online we mean that by time t, the algorithm received as input only requests that have been input by time t. By centralized we mean that the algorithm receives all requests and controls all packets currently in the system without delay. By nonpreemptive we mean that every accepted packet reaches its destination.
Competitive Ratio. Let σ denote an input sequence. Let ALG denote a packet-routing algorithm. Let ALG(σ) denote the throughput obtained by ALG on input σ. Let OPT(σ) denote the largest possible subset of requests in σ that can be delivered without violating the capacity constraints. We say that an online deterministic ALG is ρ-competitive, if for every input sequence σ, |ALG(σ)| ≥ 1 ρ · |OPT(σ)|. Our goal is to design an algorithm with the smallest possible competitive ratio.
First Steps
In this section we present preliminary simplifications we apply to the problem. They include reducing the packet routing on a line problem to path packing on grids, and then path packing on sketch graphs.
From Packet-Routing on a Line to Path Packing in a Grid
Let G = (V, E) denote a directed line with link capacities c and buffer sizes B. The space-time grid The capacity of all edges in E 0 is c, and all edges in E 1 have capacity B.
The transformation. We transform a request The correctness of the reduction is based on a one-to-one correspondence between paths in G st and a routing of a packet in G. Each vertical edge Embedding in the plane. The naïve depiction of G st maps vertex (v i , t) to the point (t, i) in the plane (i.e., the x-axis is the time axis and the y-axis is the "vertex-index" axis). This embedding of G st results with a lattice of vertices in which edges are either horizontal or diagonal. We prefer the embedding in which the edges are axis parallel, which means that vertex (v i , t) is mapped to the point (t − i, i). In the axis-parallel depiction, all the copies of a vertex v i ∈ V still reside in the ith row. However, column j corresponds to a traversal of the complete line, starting at v 0 at time j and ending at v n−1 at time j + n − 1.
From One Grid to Four Sketch Graphs
Given a grid generated by the transformation above, we apply another transformation to produce a coarsened version, called the sketch graph. Specifically, we use tiling. Tiling is a partition of the grid nodes into h × v subgrids, where h and v are parameters to be determined later. We also add dummy nodes to the spacetime grid G st to complete all tiles. This augmentation has no effect on routing because a dummy vertex does not belong to any route between real vertices. The tiling is specified by two additional parameters φ x and φ y called offsets. The offsets determine the positions of the corners of the tiles; namely, the left bottom corner of the tiles are located in the points We denote these four tilings by T 1 , . . . , T 4 . Proposition 1. For every vertex (v, t) of the space-time grid G st , there exists exactly one tiling T j such that (v, t) is in the south-west quadrant of a tile of T j . Proposition 1 suggests a partitioning of the requests.
The Sketch Graphs. Each tiling T j induces a grid, called the sketch graph, each vertex of which corresponds to a tile. The sketch graph induced by T j is denoted by All edges in the sketch graph are assigned unit capacity.
Online Packing of Paths
We use the sketch graphs to solve path packing problems. Intuitively, the path packing model resembles the packet routing model, except that there are no buffers, and that each link e may have a different capacity c(e). In addition, we generalize the notion of a request to allow for a set of destinations (similar to "anycast") as follows. Usually, the destination of a request consists of a single vertex. If G is a directed graph, then it is easy to reduce the case in which the destination is a subset to the case in which the destination is a specific vertex. The reduction simply adds a sink node that is connected to every vertex in the destination subset. In our setting of space-time grid, the destination subset is a row. Thus it suffices to add a sink node for each row (as in [7]).
Formally, a path request r i in G is a pair (a i , D i ), where a i ∈ V is the source vertex and D i ⊆ V is the destination subset. Let P (r i ) denote the set of paths that can be used to serve request r i ; namely, every path p ∈ P (r i ) begins in a i , ends in a vertex in D i , and satisfies some additional constraint (e.g., bounded length, bounded number of turns, etc.). Given a sequence R = {r i } i∈I of path requests, we call a sequence P = {p i } i∈J a partial routing of R if J ⊆ I and p i ∈ P (r i ) for every i ∈ J. The load of an edge e ∈ E induced by P is the ratio . A partial routing of a set of path requests is called a β-packing if the load induced on each edge is at most β. The throughput of P is simply the number |J| of paths in P .
Integral and Fractional Partial Routings. In the integral scenario, a path request is either served by a single path or is not served. In fractional routing, a request r i can be (partially) served by a combination of paths p 1 , . . . , p k . Namely, each path p j serves a fraction λ j of the request, where λ j ≥ 0 for all j and j λ j ≤ 1. We refer to j λ j as the flow amount of request r i . The load of an edge e ∈ E induced by request r i is the ratio j:e∈p j λ j /c(e). A fractional solution is β-packing if the total load on in each edge, from all requests, is at most β. The throughput of a fractional routing is the sum of the flow amounts of all requests. Given a fractional routing g, we use |g| to denote its throughput. Trivially, the maximum throughput attainable by a fractional β-packing is an upper bound on the maximum throughput attainable by an integral β-packing. An optimal-throughput fractional β-packing can be computed off-line by solving a linear program.
Online Path Packing: Problem and Solution. In the online path packing problem, the input is a sequence of path requests R = {r i } i∈I . Upon arrival of a request r i , the algorithm must either allocate a path p ∈ P (r i ) to r i or reject r i . An online path packing algorithm is said to be (α, β)-competitive if it computes a β-packing whose throughput is at least 1/α times the maximum throughput over all 1-packings. Note that for online path packing, we assume that all edges have capacity at least 1.
The online path packing algorithm in [6] (analyzed also by [9]) assigns weights to the edges that are exponential in the load of the edges. This load is the load incurred by the paths allocated to the requests that have been accepted so far. The algorithm is based on an oracle that is input r i and the edge weights, and outputs a lightest path p i in P (r i ). If the weight of p i is large, then request r i is rejected; otherwise, request r i is routed along p i . We refer to the online algorithm for online integral path packing by IPP. The competitive ratio of the IPP algorithm is summarized in the following theorem.
Theorem 3 ([13], following [6,9]). Consider an online path packing problem on an infinite graph with edge capacities such that inf e c(e) ≥ 1. Assume that, for every request r i , the length of every legal path in P (r i ) is bounded by p max . Then algorithm IPP is (2, log(1 + 3 · p max ))-competitive online integral path packing algorithm. Moreover, the throughput of IPP for any request sequence is at least 1/2 the throughput of any fractional packing for that sequence.
Bounded Path Lengths. The load obtained by the IPP algorithm is logarithmic in the maximum path length p max . This suggests that p max should be polynomial in n. Lemma 4 states that limiting the number of store steps per packet by a polynomial in n decreases the fractional throughput only by a constant factor. We use the following notation. Given a request sequence R = {r i } i , let f * (R) denote a maximum throughput fractional 1-packing of R, and let f * (R|p max ) denote a maximum throughput fractional 1packing with respect to R under the constraint that each path is of length at most p max .
Routing paths across 2-d Grids
Consider the following special case of routing in grids. Suppose that each path request has a specific source vertex which resides on either the south of the west side, and the destination is either the north or the east side (i.e., we can route to any vertex on the requested side). For X ∈ {S, W } and Y ∈ {N, E}, let req(X →Y ) denote the set of path requests whose source is in the X side and whose destination is the Y side (see Figure 1). The following claim establishes sufficient and necessary conditions for satisfying such path requests. We refer to the routing algorithm used in this case as crossbar routing. Proof. The "only if" part is obvious. We now present a distributed algorithm that proves the "if" part. Without loss of generality, we may assume that req(S→N ) and req(W →E) are empty. This assumption is satisfied by routing such requests along straight paths and giving them precedence over other requests. Thus a a b Figure 2: Satisfying the path requests in the a × b two dimensional directed grid, where a ≤ b.
we may ignore these lines henceforth, and we are left with the task of routing req(S→E) and req(W →N ) under the assumption that |req(S→E)| ≤ a and |req(W→N )| ≤ b. These requests are served as follows. Order the rows from bottom to top and the columns from left to right. Assume, w.l.o.g., that a ≤ b (the case that a > b is solved analogously).
Requests whose source vertex is in the first a rows or columns turn in the vertex along the diagonal emanating from the SW corner. For example, a request in req(W →N ) whose source is in row i is routed eastward for i hops, and then north for a − i hops (i.e., to the north side of the grid). See Figure 2.
The requests whose source vertex is in the last b − a columns are routed northward until they reach a vertex that does not receive an east-bound path from its west neighbor. Once such a vertex is found, the path turns east and continues straight until it reaches the east side of the grid. Indeed, such a right turn is always possible because a ≥ |req(S→E)| and hence a "vacant row" is always found. Remark 6. Proposition 5 extends to the case of capacitated edges assuming all horizontal edges have the same capacity and all vertical edges have the same capacity. In this case, the requests can be routed iff the number of requests for each destination side is bounded by total capacity of edges crossing that side.
The Packet Routing Algorithm
We now present the routing algorithm. Pseudo-code is provided in Algorithm 1. The algorithm works as follows. First, in lines 1-3, an initial filtering of the requests removes requests if too many requests originate in the same space-time vertex (see Definition 9). Then each remaining new request is processed. In lines 4-6, it is classified as either Near or Far, based on its source-destination distance (see paragraph on packet classification). Near requests are routed by the ROUTE-NEAR algorithm, described in Section 4.4. Each Far request is associated with the tiling T j in which its source vertex belongs to a south-west quadrant of a tile. Each tiling is processed separately by three procedures: (i) The IPP algorithm, which performs online path packing over the sketch graph S j (line 7). The outcome sketch i is either "REJECT" or a path in a sketch graph S j , i.e., a sequence of tiles from the initial tile to the destination tile. (ii) The INITIAL-ROUTE procedure looks for a routing within the SW-quadrant of the first tile of r i : its outcome is either such a path denoted init i or "REJECT". Only IPP and INITIAL-ROUTE may reject a far request. If both procedures are successful, then DETAILED-ROUTE is called (line 8). Detailed routing computes a path in the space-time graph, i.e., a complete schedule for each packet. In our algorithm, the sketch path for each accepted request is computed once and it is fixed, but the future part of a detailed route of a request may change due to the insertion of new packets. Therefore, the procedure DETAILED-ROUTE not only computes a path for r i in Algorithm 1 Top-level algorithm for packet routing in the 1-dimensional grid. Code for step t.
1: Let R t be a list of new requests, sorted by source-destination distance. 2: For each vertex v, let R t (v) the first B + c requests in R t whose source is v.
filter requests 3: for each request r i ∈ v R t (v) do 4: if r i ∈ Near then ROUTE-NEAR(r i ) 5: end if 15: end for G st , but may also alter the detailed routes of other requests (without changing the high-level sketch-graph routes).
An important property of IPP and INITIAL-ROUTE is that their state is determined by the requests that are actually in the system, i.e., accepted by both. (Rejected requests by either do not affect the state of the system.) This property enables us to employ the combination technique of [15]. The listing emphasizes this property by explicitly managing the sets of accepted requests for each class (denoted by accepted j ). These sets are arguments of IPP and INITIAL-ROUTE and determine their states. We now proceed to explain the algorithm in detail.
Packet classification. A request
and far otherwise. We denote the sets of near and far requests by Near and Far, respectively. The far requests are further classified into four classes denoted by Far j , where Far j Far ∩ SW j . Namely, Far j is the set of far requests whose source node is in the SW-quadrant of a tile s in the tiling T j . Tiling Parameters. Tile side lengths are set so that the trivial greedy routing algorithm is O(log n)competitive for requests that can be satisfied within a tile. Each tile has length h and height v , defined as follows. Recall that the maximum path length p max = 2n · (1 + B c ) (cf. Lemma 4).
Definition 7. We use the following parameters.
5B
We summarize with the following claim.
Proposition 8. If B/c is bounded by a polynomial in n, then the tiling parameters satisfy the following properties.
2. The sum of the edge capacities along each tile side is Θ(k).
3. For each track, the sum of the track capacities along a tile side is at least 6k.
Proof. Clearly h + v = O(k). If B/c is polynomial in n, then k = O(log n). The sum of the edge capacities along a vertical side is v · B = Θ(k). The sum of the track capacities crossing a vertical side of a tile is at least v · 5B ≥ 6k. The capacities along a horizontal edge is bounded similarly.
Filtering superfluous simultaneous requests with identical sources. Since we do not impose any restriction on the requests, it could well be that many requests arrive at the same source vertex in a single time step. To deal with that, we use that fact that for each node v and step t, no more than c + B requests can leave (v, t) in any routing. The partition of link capacities for tracks imposes a stricter limitation in the sense that within each class, no more than c + B paths can have the same source vertex.
Definition 9. Given a sequence R of requests, let R denote the subsequence of R defined as follows. For each source vertex (a i , t i ), choose c + B packets whose destination is closest to the source node. (If at most B + c requests originate at the same node, then all of them are kept in R .) Proposition 14 shows that rejecting the requests in R \ R reduces the fractional optimal throughput only by a constant factor.
Routing Rules
The routing at the high level (sketch path) is determined by the IPP algorithm.We now explain the ideas behind refining these rough paths (in the sketch graph) into actual paths (in the space-time grid). Throughout this section we consider, w.l.o.g., a single tiling T j . Fix a tile s IN t J . We distinguish between the following three types of requests in Far j (we deal with Near requests in Section 4.4).
• Initial requests: requests whose source vertex is in the south-west quadrant of the tile s.
• Traversing requests: these are requests that enter s from a specific vertex on one side (either west or south) and must leave through any vertex of another side (either east or north). The entry vertex is determined by a previously-invoked detailed routing, and the exit side is determined by the sketch path. • Final requests: these are requests whose sketch path ends in tile s. The destination of a final request is the north side of s. 1 Each tile is partitioned into 4 quadrants, denoted NE, SE, SW and NW. We constrain the way requests are routed within a tile using the following rules (see Figure 3; no request may cross a thick line).
1. Initial requests always start in the SW-quadrant and are routed to the north or east side of the SWquadrant along a straight path. The SW-quadrant of each tile is reserved for routing of initial requests. 2. Traversing requests whose source and destination sides are opposite (e.g., from the south to the north side) are routed along a straight path. This means that incoming traffic (of earlier requests) continues uninterrupted along a straight path. Only remaining capacity along edges that emanate from a vertex (i, j), if any, is used for routing the requests that originate in (i, j).
Procedure DETAILED-ROUTE
The goal in detailed routing is to compute a detailed path p i in the space-time graph G st given a sketch path sketch i in the sketch graph S j and the initial part of the route init i . The sketch path specifies the sequence of tiles to be traversed by the detailed path. In addition, the sketch path specifies the tile sides through which the detailed path should enter and exit each tile. Requests that have been assigned a sketch path and an initial route must be successfully routed by detailed routing. Detailed routing is computed by applying crossbar routing (cf. Proposition 5) to the NW, SE and NE quadrants. This routing is computed based on the present requests. As new requests arrive, the future portions of the detailed routes may change dynamically so that all requests which are "in progress" will reach their destination. Below we argue that crossbar routing indeed succeeds. Proof. By Proposition 5, to ensure successful routing it is sufficient to bound the number of paths that need to traverse a quadrant by the capacity of the quadrant side. By Proposition 8, the track capacity of each quadrant side is at least 3k. We now prove upper bounds on the number of paths that traverse each quadrant side (see Figure 3). The IPP path packing algorithm is a k-packing over the sketch graph (whose edges have unit capacity). It follows that at most k paths traverse each side of the tile. As every request that originates in the SW-quadrant of a tile must exit the tile, there are at most 2k paths that traverse each side of the SWquadrant (although their sum is also bounded by 2k). Hence the upper bounds depicted in Figure 3 follow. We need to elaborate more on the NE-quadrant because it is also used for routing final requests (i.e., requests that do not exit the tile, but do want to reach its top row). Consider the north side of the NE-quadrant. There are at most k traversing requests that wish to exit the tile. In addition, there are at most 2k final requests that wish to reach to top row (as each final far request must have entered the tile). Thus, in total there are at most 3k paths that wish to reach the top side of the NE-quadrant. To summarize, the number of paths that wish to reach any quadrant side is bounded by the side's capacity, and hence by Proposition 5, detailed routing succeeds.
Finally, we note that in order for DETAILED-ROUTE to be well defined, we compute it in tiles in columnmajor order, i.e., we start with the bottom tile of the leftmost row and go up, then the bottom tile of the second-from left column and go up etc. This ensures that when we reach a tile, all input vertices are fixed. We remark that detailed routing can be executed in a local distributed manner; in each time step, each vertex needs only to know the initial paths the sketch paths of the incoming packets.
Procedure ROUTE-NEAR
Finally, we describe the algorithm for the near requests. The ROUTE-NEAR Algorithm is extremely simple: it never stores a packet (i.e., it uses only vertical edges in G st , and gives precedence to older requests). In more detail, upon arrival of a request r i ∈ Near, the algorithm checks the number of requests already routed along the outgoing vertical edge (from (a i , t) to (a i + 1, t + 1)). If this number is less than c , then the algorithm routes r i along the vertical path in G st from (a i , t) to (b i , t + (b i − a i )). Note that these edges occur in the future, and hence cannot have been saturated by ROUTE-NEAR if the edge outgoing from (a i , t) is not saturated. If there is no free capacity in the outgoing vertical edge, r i is rejected. Note that if r j is accepted, then it guaranteed to reach its destination.
Analysis of Competitive Ratio of the Routing Algorithm
Our goal is to prove the following theorem for a directed line G of n vertices with buffer sizes B and link capacities c, where B, c ∈ [5, log n].
Theorem 11. Algorithm 1 is O(log n)-competitive with respect to the throughput of a maximum fractional routing.
We translate the problem to a path packing problem over the space-time graph G st . Let f * G st (R) denote a maximum throughput fractional routing, and let |f * G st (R)| denote its throughput. Let |ALG(R)| denote the throughput of the online packet algorithm. Theorem 11 follows directly from the following lemma.
Lemma 12. For every sequence of requests R, |f * G st (R)| ≤ O(log n) · |ALG(R)|. We outline the proof of Lemma 12. We scale the capacities down by a factor of Θ(k) = Θ(log n) in the sketch graph. By linearity, this reduces the optimal fractional throughput by the same factor (see Proposition 13). We show that the filtering stage in Line 2 incurs only a constant factor reduction to the optimal fractional throughput (see Proposition 14). The filtered requests R are partitioned into near requests and far requests (which are further partitioned into 4 classes, one per tiling). The far and near requests are analyzed separately. The analysis of the throughput for far requests builds on the competitive ratio of the IPP algorithm and the INITIAL-ROUTE algorithm (see Claim 15 and Claim 16). By applying the combining analysis of Kleinberg and Tardos [15], we show that the competitive ratio for the combined algorithm is the sum of the two algorithms (see Claim 17). In Theorem 18, we show that the ROUTE-NEAR algorithm succeeds in routing a logarithmic fraction of the filtered near requests. In Section 5.4, the parts of the proof are combined together to prove Lemma 12.
Scaling and Filtering
One advantage of working with fractional routings is that, by linearity, the throughput scales exactly with the capacities. Let f * S j (R) denote a maximum throughput fractional routing in the sketch graph S j . Recall that the sketch graph has unit capacities. Coalescing of vertices of G s t in each tile results with edge capacities that are Θ(k) = Θ(log n). Hence, we obtain the following proposition.
Recall that by Definition 9, in the input sequence R , at most B + c requests originate in each space-time vertex.
Consider the flow f /9. For every vertex v, the amount of flow that originates in v is bounded by (B + c)/9 ≤ B + c . Divert flow from in f /9 from R v \ R v to R v along shorter paths, to obtain a flow g with respect to R such that |g| = |f |/9. Since |f * G st (R )| ≥ |g|, the proposition follows.
Far Requests
Two algorithms determine whether a far request is rejected: (i) the IPP path packing algorithm over the sketch graph, and (ii) the INITIAL-ROUTE algorithm that deals with routing in the initial SW-quadrant of the source tile. We begin by showing that, if invoked separately, each of these algorithms accepts at least a constant fraction of the maximum fractional throughout over the sketch graph. Let R j denote the subsequence of requests in R that are in the class Far j . Suppose we invoke the IPP algorithm in isolation over the sketch graph S j with the input sequence R j . By isolation we mean that the accepted requests are determined solely by IPP. Let |IPP S j (R j )| denote the number of requests that are accepted by this invocation.
Claim 15. |f * S j (R j )| ≤ 2 0.31 · |IPP S j (R j )|. Proof. By Lemma 4, the restriction of the path lengths by p max only reduces the fractional throughput by a factor less than 0.31. By Theorem 3, the IPP algorithm is (2, k)-competitive, and hence its throughput is half the optimal fractional throughput with bounded path lengths.
Let |INITIAL-ROUTE(R j )| denote the number of requests that are accepted by INITIAL-ROUTE if invoked in isolation with the input sequence R j .
A far request must exit the tile in which it begins. The edge capacities in the sketch graph are unit. Hence, the amount of flow in f * S j (R j ) that originates in each tile is at most 2. On the other hand, if a positive amount of flow originates in a tile s, then at least one request starts in the SW-quadrant of s. Hence INITIAL-ROUTE(R j ) accepts at least one request that begins in s.
A naïve analysis of the requests accepted by the conjunction of the IPP and INITIAL-ROUTE algorithms implies that the accepted requests are in the intersection, which might be empty. However, in our algorithm the subsequence of accepted requests is determined by both algorithms, and this set of accepted requests determines the state of both algorithms. Hence, by applying the combining analysis of Kleinberg and Tardos [15], the combined competitive ratio is shown to be the sum of the isolated competitive ratios.
Near Requests
In this section we analyze the competitive ratio of the ROUTE-NEAR algorithm with respect to near requests. Recall that: (1) A request is a near request if the distance from the source to the destination is at most v . Note that v = Θ( log n B ) and B < log(1 + 3p max ) = O(log n).
(2) The incoming requests are filtered so that at most B + c requests originate in every space-time vertex.
The following theorem states that ROUTE-NEAR succeeds in routing at least a logarithmic fraction of the filtered near requests. This theorem implies that the throughput is at least a logarithmic fraction of the optimal fractional routing of the filtered near requests.
Proof. It suffices to prove that |ALG(Near)| ≥ Ω 1 log n · |Near \ ALG(Near)|. Consider the following bipartite conflict graph. Nodes on side L are the requests of ALG(Near), and nodes on side R are the requests of Near \ ALG(Near). There is an edge (r i , r j ) ∈ L × R if r j is rejected by the ROUTE-NEAR Algorithm and the vertical route of r i traverses the source vertex (a j , t j ) of r j . A request r i ∈ L conflicts with at most B + c requests in each vertex. Hence, the degree of r i in the conflict graph is at most (B + c ) · v . On the other hand, the degree of r j ∈ R equals c (where c is the capacity of the track reserved for the near requests). By counting edges on each side we conclude that Hence, We conclude that As (B +c ) v c = Θ( log n c + log n B ) = O(log n), and the theorem follows.
Putting Things Together
In this section we prove Lemma 12. We partition the input sequence R into Near and R j , for j ∈ {1, 2, 3, 4} (recall that R j = R ∩ Far j ). By subadditivity, In order to bound the ratio |f * G st (R )|/|ALG(R )|, it suffices to separately bound the ratios of the terms. Indeed, by Theorem 18 |f * G st (Near)| ≤ O(log n) · |ALG(Near)|.
Extension to d-Dimensional Grids
The following theorem is proved by extending Algorithm 1 for a line network to d-dimensional grid.
Theorem 19. For B, c ∈ [2 d+1 +1, log n], there is a deterministic 2 O(d) ·log d n-competitive online algorithm for the throughput maximization problem.
(sketch). As in the one-dimensional case, perform a space-time transformation on the d-dimensional n-node grid G to obtain the (d + 1)-dimensional space-time grid G st . Partition G st to 1 × . . . × d+1 subgrids (or subcubes). The side length of a subgrid equals v for directions that correspond to forward steps and h in the direction that corresponds to store steps. There are two offsets per dimension, resulting with 2 d+1 tilings. The number of tracks equals the number of offsets plus one (the extra one is for the near requests), hence we require that B, c ≥ 2 d+1 + 1. Similarly to the 1-dimensional case, a request is classified as a near request if the distance from the source to the destination is at most d · v . Detailed routing within a tile is successful by the following observation. Every time a packet cannot turn to the direction that is dictated by its sketch path, there is a packet that did turn to its desired direction. Since the number of path emanating from each tile is bounded by the quadrant-side capacity, we conclude that every packet will eventually turn, if needed, within its quadrant, thus respecting its sketch path.
Since the link capacity to track capacity ratio is O(2 d ), this scaling of capacities incurs an O(2 d ) factor to the competitive ratio. The sketch graph is obtained in the same way, with the exception that edge capacities are set to 1 d+1 (so that that the number of paths that IPP routes out of a (d + 1)-dimensional tile is at most O(log n)). The ratio of edge capacities in G st between adjacent faces of tiles and the capacity of the edge in the sketch graph is O(log n) d . This incurs an additional factor of O(log n) d for routing far requests due to capacity scaling. The routing of near requests succeeds in routing at least a fraction of d · log n of the near requests. We conclude that the competitive ratio is determined by the fasr requests, and hence the theorem follows.
Conclusion
In this paper we presented an online deterministic packet routing algorithm. For the one dimensional grid (with constant-size buffers and constant-capacity links), this algorithm closes the gap with the best throughput achieved by a randomized algorithm. This closes a problem which was open for more than a decade, but still leaves open quite a few problems. The most urgent one is to reduce the gap between the upper and lower bounds on the competitive ratio. Currently the best upper bound is O(log n) for the line, and we are not aware of any no non-trivial lower bound. We note that reducing the upper bound to o(log n) seems to require new techniques, as the reduction to online path packing introduces a logarithmic factor in the competitive ratio.
Another important question is to come up with reasonable distributed algorithms. Even though, as mentioned above, the SDN model shifts many network operation tasks to the centralized setting, it is very interesting to find out what can be done without a central coordinator.
A √ log n-competitiveness of initial routing Lemma 20. Fix a tile s and let q denote its SW quadrant. Suppose that the sources of m path requests are in q.
Then Ω( √ m) path requests are served by the initial routing in q.
Proof. We restrict attention to rows of q which contain at least B sources and columns of q which contain at least c sources of requests which cannot be routed horizontally. Since all other packets are trivially routed by the algorithm, we may assume w.l.o.g. that there are no other packets with sources in q.
Let y denote the number of rows that contain a source vertex of an initial request, and let x denote the number of columns that contain a source vertex of an initial request that is not routed horizontally. Clearly, m ≤ xy(B + c ). On the other hand, detailed routing in q serves yB + xc requests. We now prove that yB + xc = Ω( xy(B + c )).
Without loss of generality, assume that c ≥ B . Thus it suffices to prove that We proceed with case analysis. If y ≤ x, then y x · B + x y · c ≥ c , as required. Otherwise y > x. We further distinguish between two cases: 1. If y/x ≥ c /B , then y x · B + x y · c ≥ c B · B ≥ √ c , as required.
2. If x/y > B /c , then y x · B + x y · c ≥ B c · c ≥ √ c , as required.
Note that, if a single requested is input to a SW-quadrant, then intial routing accepts it. | 2015-01-25T03:22:58.000Z | 2015-01-25T00:00:00.000 | {
"year": 2015,
"sha1": "3ee12305b74e5235b7567587da30d5eb7b933afa",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b7695602b71d5eac2059ed0e8cb9281088f4818c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
55559535 | pes2o/s2orc | v3-fos-license | Design and Numerical Study of Micropump Based on Induced Electroosmotic Flow
Induced charge electroosmotic flow is a new electric driving mode. Based on the Navier–Stokes equations and the Poisson– Nernst–Planck (PNP) ion transport equations, the finite volume method is adopted to calculate the equations and boundary conditions of the induced charge electroosmotic flow. In this paper, the formula of the induced zeta potential of the polarized solid surface is proposed, and a UDF program suitable for the simulation of the induced charge electroosmotic is prepared according to this theory. At the same time, on the basis of this theory, a cross micropump driven by induced charge electroosmotic flow is designed, and the voltage, electric potential, charge density, and streamline of the induced electroosmotic micropump are obtained. Studies have shown that when the cross-shaped micropump is energized, in the center of the induction electrode near the formation of a dense electric double layer, there exist four symmetrical vortices at the four corners, and they push the solution towards both outlets; it can be found that the average velocity of the solution in the cross-flowmicrofluidic pump is nonlinear with the applied electric field, which maybe helpful for the practical application of induced electroosmotic flow in the field of micropump.
Introduction
e huge technological advances have made people's demand for technology products continue to move toward the direction of portability, miniaturization, and intelligence.With the improvement of living standards, more and more attention is being paid to health problems and the accuracy of the results, and the monitoring methods have also been set at higher standards.At this moment, the microfluidic chip is a good choice for further development and popularization of real-time diagnostic technology.
e so-called microfluidics chip refers to a chemical or biological lab built on a chip that is only a few square centimeters.It integrates the processes of biological and chemical reactions and separation and detection into microchannels, while using the design of microchannel networks for microfluidic control and transport and ultimately enables various functions in chemical or biological laboratories, that is, lab on a chip.Much research has been made under the leadership of a large number of scholars such as Lin Bingcheng, Qin Jianhua, and so on, all of which contributed to the development of microfluidic chip technology in China and even in the world.
e internationally renowned magazine Lab on a Chip even published a special album titled "Focus on China" on the 10th anniversary of its founding, which is to affirm the important contribution made by Chinese scholars to the research of microfluidic technology.
e delivery mixing, reaction, separation, and control of microfluidics are key components of microfluidics system.Because of its small characteristic scale, most of the fluid flowing in the microchannels is laminar.Particles, droplets, or bubbles generally within the microchannels belong to the field of low Reynolds number flow theory [1][2][3].Due to the sharp decrease of the volume to surface area ratio, the study found that the laws and phenomena of fluid movement at the microscale are different from the macro environment; the continuity equation in the three equations of hydrodynamics may no longer be suitable for use in microfluidics [4].With the reduction of the characteristic scale of the study of fluid motion, a new flow effect was found.e coupling of the electric field force, the flow field, the temperature field and the ion motion within the microchannel, the electroosmotic flow, electrophoresis, induced electroosmotic flow, and other electrical phenomena can be used to achieve microfluidic (microparticle) transport and control [5,6].Electrokinetic phenomena in microfluidics are caused by the interaction between the applied electric field and the diffusion layer in the electric double layer.More electrokinetic phenomena studied at present include electrophoresis, dielectric electrophoresis, electroosmotic flow, and induced electroosmotic flow.Electrokinetic phenomena can be divided into linear (electrophoresis and electroosmotic flow) and nonlinear (dielectric electrophoresis, electrophoresis, and induced electroosmotic flow) electrokinetic phenomena according to whether the zeta potential in the electrokinetic phenomenon changes with an applied electric field.In this paper, the phenomenon of induced electroosmotic flow is used.
Induced electroosmosis (ICEO) is a phenomenon driven by electrostatic forces under applied electric field and is a variant of the electroosmotic phenomenon [7].e phenomenon of induced electroosmotic flow mainly depends on the interaction of polarizable solids with an applied electric field to generate an electromotive phenomenon.e induced potential on the polarizable surface is critical to the induced charge electroosmotic flow.e magnitude of its zeta potential is related to the applied electric field.
e earliest induced electroosmotic flow was discovered by Romans et al. at the end of the 20th century.Subsequently, in 2004, Bazant and Squires perfected the relevant theory and formally proposed the concept of inducing electroosmotic flow.And the study of the mixing [8] and transporting of the fluid in the simple microchannel is accomplished by using this theory.By 2005, Levitan used experimental methods to confirm the correctness of the basic model of induced electroosmotic flow.
e induced charge electroosmotic flow (ICEOF) has been studied and applied to the microfluidic systems extensively in the last two decades.e phenomenon is used by Wu and Li to realize the function of fluid mixing and flow regulation in microfluidic chips; Zhao and Bau used induced electroosmotic flow to enhance chaotic flow to improve the mixing efficiency of microfluidics; Yariv, Bau, and Li et al. gave attention and conducted preliminary studies on inducing particle-wall effect in electroosmotic flow; Peng then experimentally found that the higher the zeta potential of the electrical double layer around the surface of the polarizable solid, the more particles agglomerated; demonstrating the feasibility of using micronanoparticle manipulation to induce electroosmosis.Harbin Institute of Technology, Peng and Jia innovated the use of ITO conductive glass as the electrode, based on the principle of induced electroosmotic flow and implementation of micronanoparticle manipulation.
Compared with the classical electroosmotic flow, the induced electroosmotic flow can obtain a higher driving speed under the same voltage, so we design a micropump [9][10][11] based on it, which can be applied to the driving of microfluidic chip.e model can successfully predict new phenomena when the applied voltage is too small to disrupt the salt concentration.
Theoretical Analysis
As the flow is considered steady and incompressible, the governing equations are shown below: where λ 2 0 � ε f kT/(2z 2 e 2 a 2 p n ∞ ), Pe � Ua p /D, Re � ua p /v � PeSc.e other variables are the characteristic speed u, the characteristic length a p , the kinematic viscosity v, the dielectric constant ε, the valence of ions z, the absolute temperature T, the ion concentration n ∞ , the diffusion coefficient D, and Boltzmann's constant k.
Equations ( 2)-( 4) are solved to obtain ion concentration and density distribution, and then ( 5) and ( 6) are solved to get the information of flow field.e zeta potential in the electroosmotic flow is induced by an applied electric field, and the magnitude of the potential depends on the applied electric field.According to the relevant theoretical study, it is found that the induced tangent slip velocity of the electric double layer on the polarizable solid surface in the electroosmotic flow is where ε is the dielectric constant, ε 0 is the dielectric constant of vacuum, r is the radius, μ is the dynamic viscosity, and E is the applied electric field strength.
Zeta Potential Verification.
Induced charge electroosmosis flow (ICEOF) phenomenon, which is caused by the interaction between the applied electric field and the electric double layer formed on the polarizable surface, and zeta potential changes on the polarizable solid are shown in Figure 1.
Journal of Nanotechnology
Under the two-dimensional uniform electric eld, the analytical formula of zeta potential at ideal polarizable cylindrical surface is shown below: ).In this simulation, the ow eld, the applied electric eld, and the zeta potential control equation of the wall surface of the polarizable obstacle are shown in (4).e water used in the solution medium is related to the physical parameter:
Results and Discussions.
First of all, the electric eld of the cross channel is analyzed.In the simulation, an additional electric eld is added to the two inlets to generate an electric eld from the positive electrode to the negative electrode in the solution medium in the microchannel, as shown in Figure 4.At the same time, under the action of an applied electric eld, the centrally located polarizable electrode is polarized, and the opposite ion in the adsorption solution forms a close-packed charge layer on the surface, eventually producing an electric double layer near the surface.
e potential is the zeta potential, and the charge density around the polarizable solid is shown in Figure 5.In the program, the negative terminal defaults to zero, so the potential and charge in the positive direction will be more dense, but after the power is applied, an electric eld will be generated between the positive and negative electrodes.erefore, when the center of the polarizable solid surface produces an electric double layer under the action of an applied electric eld, the ions in the solution are attracted by the electric double layer, and nally the liquid is driven to form an induced electroosmotic ow. Figure 6 shows the micropump ow diagram of induced electroosmotic ow in the cross channel under di erent electric eld intensities obtained from simulation.What can be seen from the diagram is that some of the uids will ow along the polarizable solid surface from the left and right inlet to the outlet.Fluid at a distance farther away from the polarizable solids does not enter the exit channel but instead creates vortices around the polarizable solids.is is mainly due to the fact that the ion concentration in the di usion layer in the electrical double layer is smaller in distance from the polarizable solid and less in drag force on the uid driven by the external electric eld, so that the uid can not ow out from the outlet but do swirling movement in volatile solids around.
As the applied electric eld increases, the shape of the vortex around the polarizable solid can also be found from Figure 6 above.When the voltage at the inlet is φ a 10 V, the four vortices are basically at four corners and distributed evenly.With the increase of voltage, the four vortices around the polarizable solid gradually move toward the exit channel.When the voltage at the inlet is φ a 300 V, it can be clearly seen that the four vortices basically entered the interior of the exit channel.At the same time, the distance between the two vortices of polarizable solids increases with increasing voltage.e above results show that the greater the voltage, the more easily the uid ows into the outlet channel and also can result in a more e cient driving e ect.
In general, the performance of a micropump is mainly measured by its microfluidic driving ability, which can be assessed by comparing the fluid velocity at the outlet.
This paper mainly simulates a cross-shaped induced electroosmotic micropump with a two-dimensional structure. Therefore, it is necessary to study the velocity at its outlets. According to the simulation results, under ideal conditions the micropump has the same velocity at the upper and lower outlets. Therefore, the velocity at one of the outlets is studied separately in this paper. Figure 7 shows the velocity profile at one outlet, where the vertical axis is the exit velocity v (mm/s) and the abscissa is the distance l (mm) across the outlet. As can be seen from the figure, the velocity at the outlet is parabolic; the larger the voltage, the greater the velocity and the better the driving effect of the micropump. When the voltage at the inlet is 100 V, the maximum fluid velocity at the outlet of the microchannel reaches 10 mm/s, and as the voltage increases, the drive velocity increases faster and faster. This shows that the micropump can produce a good driving effect.
Figure 8 shows the relation between the average velocity at a single outlet and the applied electric field strength, where the ordinate is the average velocity at a single outlet v (mm/s) and the abscissa is the voltage of the power source U (V). It can be seen from the figure that the average velocity at a single outlet has a quadratic nonlinear relationship with the power supply voltage, and when the power supply voltage is higher, the average velocity of the micropump increases faster. When the voltage is greater than 100 V, the average velocity at the microchannel outlet is already close to 10 mm/s. At this point, we fit the numerical simulation results to obtain the curve of single-outlet average velocity versus applied electric field for the cross-structure micropump: y = 0.001x^2 − 0.081x + 2.0618.
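As a quick sanity check, the following sketch simply evaluates the reported fit y = 0.001x² − 0.081x + 2.0618 at a few supply voltages; the coefficients are taken from the text as-is and the function name is illustrative.

```python
# Evaluate the quadratic fit of single-outlet average velocity (mm/s) vs. voltage (V).
def outlet_velocity_mm_s(voltage_v: float) -> float:
    return 0.001 * voltage_v ** 2 - 0.081 * voltage_v + 2.0618

if __name__ == "__main__":
    for u in (10, 50, 100, 200, 300):
        print(f"U = {u:3d} V -> v_avg ~ {outlet_velocity_mm_s(u):6.2f} mm/s")
```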
Conclusions
In summary, the mechanism of induced electroosmotic flow is studied in depth. The analytical solution for the induced zeta potential at a polarizable solid surface is derived by analyzing the governing equations of induced electroosmotic flow. Based on this theory, a UDF program suitable for induced-charge electroosmotic flow simulation is developed. At the same time, a cross micropump driven by induced-charge electroosmotic flow was designed, and the voltage, potential, charge density, and flow field of the induced micropump were obtained. The results show that the outlet velocity of the cross induced-charge electroosmotic micropump has a nonlinear relationship with the applied electric field, and the pump is more powerful than a traditional electroosmotic pump.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
2.1. Governing Equations and Boundary Conditions. In this study, the theoretical model is based on the Navier-Stokes equations [12] of viscous fluid flow, combined with the Poisson-Nernst-Planck (PNP) ion transport equations.
3.1. Model and Boundary Conditions. As shown in Figure 2, for the cross-shaped induced electroosmotic micropump, a cylindrical polarizable solid is embedded in the middle of the cross-shaped channel. The distance between the energized electrodes is L = 2000 μm, the width of the microchannel is W = 200 μm, and the diameter of the circular polarizable solid is ϕ = 100 μm. The Gambit software was used to mesh the 2D micropump model, which passed the grid independence verification; the cross geometry model of the micropump is shown in Figure 3, and the total number of grid cells finally confirmed is 20,000. The model's boundary conditions are set as follows: (1) Boundary conditions of the surface potential. Inlets and outlets: φ_inlet-1 = φ_a, φ_inlet-2 = 0 V; φ_outlet-1 = φ_outlet-2 = 0 V; surface of the polarizable solid: ∂φ/∂n = 0. (2) Boundary conditions of ion concentration. Inlets and outlets: c = 2; side walls: c = 2; surface of the polarizable solid: ∂c/∂n = 0. (3) Boundary conditions of charge density. Inlets and outlets: q = 0; side walls: q = 0; surface of the polarizable solid: ∂q/∂n = −c(∂φ/∂n).
Figure 4: Voltage diagram in the cross channel. | 2018-12-13T19:29:40.897Z | 2018-05-09T00:00:00.000 | {
"year": 2018,
"sha1": "8f92479f8ecd100b81f5510c62916569ef7cae81",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jnt/2018/4018503.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8f92479f8ecd100b81f5510c62916569ef7cae81",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
18102356 | pes2o/s2orc | v3-fos-license | Comparison of the Diagnostic Accuracy of Three Rapid Tests for the Serodiagnosis of Hepatic Cystic Echinococcosis in Humans
Background The diagnosis of cystic echinococcosis (CE) is based primarily on imaging, in particular with ultrasound for abdominal CE, complemented by serology when imaging results are unclear. In rural endemic areas, where expertise in ultrasound may be scant and conventional serology techniques are unavailable due to lack of laboratory equipment, Rapid Diagnostic Tests (RDTs) are appealing. Methodology/Principal Findings We evaluated the diagnostic accuracy of 3 commercial RDTs for the diagnosis of hepatic CE. Sera from 59 patients with single hepatic CE cysts in well-defined ultrasound stages (gold standard) and 25 patients with non-parasitic cysts were analyzed by the RDTs VIRapid HYDATIDOSIS (Vircell, Spain), Echinococcus DIGFA (Unibiotest, China) and ADAMU-CE (ICST, Japan), and by RIDASCREEN Echinococcus IgG ELISA (R-Biopharm, Germany). Sensitivity, specificity and ROC curves were compared with the McNemar and t-test. For VIRapid and DIGFA, the correlation between semiquantitative results and ELISA OD values was evaluated by Spearman's coefficient. Reproducibility was assessed on 16 randomly selected sera with Cohen's Kappa coefficient. Sensitivity and specificity of VIRapid (74%, 96%) and ADAMU-CE (57%, 100%) did not differ from ELISA (69%, 96%), while those of DIGFA (72%, 72%) did (p = 0.045). ADAMU-CE was significantly less sensitive in the diagnosis of active cysts (p = 0.019), while DIGFA was significantly less specific (p = 0.014) compared to ELISA. All tests were poorly sensitive in diagnosing inactive cysts (33.3% ELISA and ADAMU-CE, 42.8% DIGFA, 47.6% VIRapid). The reproducibility of all RDTs was good to very good. Band intensity of VIRapid and DIGFA correlated with ELISA OD values (r = 0.76 and r = 0.79, respectively; p<0.001). Conclusions/Significance RDTs may be useful in resource-poor settings to complement ultrasound diagnosis of CE in uncertain cases. The VIRapid test appears to perform best among the examined kits, but all tests are poorly sensitive in the presence of inactive cysts, which may pose problems with accurate diagnosis.
Introduction
Cystic echinococcosis (CE) is a parasitic zoonosis caused by the larval stage of the dog tapeworm Echinococcus granulosus complex. The parasite is transmitted between canids (definitive hosts harboring in the intestine the adult stage of the tapeworm) and livestock, particularly sheep (intermediate hosts becoming infected by the fecal-oral route with eggs shed with dog feces). In the intermediate host, the larval stage develops as an expanding fluid-filled cyst, which can infect the definitive host eating infected organs. Humans behave as accidental intermediate hosts, where CE cysts develop mostly in the liver, followed by the lungs. The infection is prevalent worldwide, especially in rural livestock-raising areas such as the Mediterranean, Eastern Europe, North and East Africa, South America, Central Asia, China and Australia. The most recent estimates indicate 1.2 million people affected worldwide, with 3.6 million Disability Adjusted Life Years lost due to human disease and over 2,190 million USD lost yearly in animal production [1].
Human CE is a chronic, clinically complex and neglected disease [2]. The spectrum of clinical manifestations ranges from asymptomatic to serious, even life-threatening conditions. Most cases remain asymptomatic or pauci-symptomatic for years or even decades and may be diagnosed accidentally. The diagnosis of human CE is mainly based on imaging. Ultrasound (US) is the imaging technique of choice for the diagnosis of abdominal CE [3]. The current international WHO-IWGE (Informal Working Group on Echinococcosis) classification of CE cyst stages is based on the pathognomonic features of cysts on US and guides their clinical management [4,5].
Serology should complement imaging-based diagnosis when imaging features are unclear, although currently available serology tests are burdened by the lack of standardization and by unsatisfactory sensitivity and specificity [6,7].
In underserved rural endemic areas, the diagnosis of CE poses important problems, as expertise in US diagnosis and management of CE may be scant and/or difficult to access, and conventional serology techniques are unavailable or unreliable due to the lack of laboratory equipment. These conditions may not only cause under-diagnosis of CE in patients requiring therapy, but also result in poor differential diagnosis and unnecessary or inappropriate treatments. This is particularly true when serology is used alone without visualization of a compatible lesion by imaging, as the positive predictive value of CE serodiagnosis is very low [8], and when lesions do not show pathognomonic signs of a parasitic origin, such as young CE1 cysts or inactive CE4-CE5 cysts. Unfortunately, these stages are also those with the broadest differential diagnosis (e.g. simple cysts, neoplastic lesions), whose serology results are also difficult to interpret and often negative [9].
The use of Rapid Diagnostic Tests (RDTs) is particularly useful in resource-poor settings, and in the context of CE they may be suitable to complement imaging where the diagnosis is uncertain. Several reports described the performance of commercial and experimental RDTs in the diagnosis of CE [10][11][12][13][14][15]; however, no study so far has compared the performance of commercially available RDTs. Here we compared the diagnostic performance and reproducibility of three commercially available RDTs for the diagnosis of CE against those of a commercial ELISA test routinely used in the parasitology diagnostic laboratory of San Matteo Hospital Foundation, Pavia, Italy. Our results show that the evaluated RDTs have an overall comparable performance to the ELISA test in the diagnosis of hepatic CE in well-defined stages, although significant differences exist among them. If confirmed in a larger cohort, these results would support the use of RDTs instead of conventional techniques to complement imaging in the diagnosis of CE.
Patients and sera
Sera included in the analysis were frozen (-80°C) stored samples from patients with hepatic CE and non-parasitic hepatic cysts seen between 2010 and 2015 in the Ultrasound Diagnostic Service of the Division of Infectious and Tropical Diseases, San Matteo Hospital Foundation, Pavia, Italy, where the WHO Collaborating Centre for Clinical Management of Cystic Echinococcosis is based. Clinical information related to each patient and sample was retrieved retrospectively in March 2015 from the electronic database of patients visited in the Centre. Patients included in the study formed a convenience series. Selection criteria were the presence of a single cyst, located in the liver, of non-parasitic nature (controls) or with a well-defined CE stage according to the WHO-IWGE classification, as assessed by abdominal US by an experienced sonographer (EB) (gold standard). When possible, sera were collected from people who had never received treatment for CE or whose treatment ended > 12 months before serum collection. Patients with non-parasitic hepatic cysts were used as controls because non-parasitic cysts represent the most common differential diagnosis of hepatic CE cysts.
Cysts classification
Cysts were classified according to the WHO-IWGE classification. For the analysis, CE cysts were grouped into active (CE1, CE2, CE3a and CE3b) and inactive (CE4 and CE5). Experimental and clinical data prove that CE3b are biologically active (i.e. viable) cysts, while CE3a cysts can be either biologically active or not [16][17][18]. However, in our analysis, we grouped CE3a cysts with the other active stages because disruption of the integrity of the cyst wall, irrespective of the viability of the cyst, allows parasite antigens to stimulate antibody production. Therefore, it can be speculated that cyst wall integrity is likely a more important condition than biological viability per se in influencing serological responses. Patients with small CE1 cysts are often seronegative, although cysts in this stage are unequivocally active [19]; thus this stage should likely be grouped independently in serology analysis. However, not enough samples were present to carry out this sub-analysis. CE4 cysts that reached inactivation spontaneously but recently (or only temporarily inactivated after unsuccessful treatment) should likely also be grouped with "active cysts", while stably inactive CE4 and CE5 cysts constitute the real "inactive cysts" group [20,21]. However, this more precise classification is at present not possible in the absence of either long-term follow-up of active cysts without therapy or invasive sampling for the assessment of the biological activity of cysts, both options burdened by practical and ethical constraints. Therefore, CE4 and CE5 cysts are grouped here in the inactive group. These considerations are at the basis of the choice of cyst grouping used in this study.
Diagnostic tests
Selected sera were analyzed using the following three commercially available immunochromatographic rapid diagnostic tests: VIRapid HYDATIDOSIS (based on purified antigen B and antigen 5; Vircell, Salamanca, Spain), Echinococcus Dot Immunogold Filtration Assay (DIGFA, based on purified cyst fluid, protoscolex antigen, antigen B and antigen Em2 of E. multilocularis; Unibiotest, Wuhan, China), and ADAMU-CE (based on recombinant antigen B; ICST, Saitama, Japan), following the manufacturers' instructions. The sera were also tested in duplicate with RIDASCREEN Echinococcus IgG ELISA (R-Biopharm, Darmstadt, Germany), routinely used in the parasitology diagnostic laboratory of San Matteo Hospital Foundation, Pavia, Italy, following the manufacturer's instructions. For the ELISA test, Optical Density (OD) results were used to calculate and interpret a Sample Index (SI), as per the manufacturer's instructions. ELISA results were considered positive for SI ≥ 1.1, negative for SI < 0.9, and borderline for 0.9 ≤ SI < 1.1. Borderline results were considered negative for the analysis of results. In this work, "OD" will always refer to Sample Indexes, not to raw OD values; the terminology "OD" was preferred for more immediate understanding. All tests were performed in parallel in a single session in April 2015. Each test was read by a single operator experienced in laboratory procedures (FT for ELISA and VIRapid, MM for DIGFA, IC for ADAMU-CE). Readers were blind to cyst stage and to the results of other tests at the time of reading. Results were recorded as positive or negative, and the semiquantitative colorimetric reading was also recorded for the VIRapid HYDATIDOSIS and DIGFA tests, as well as OD values for the ELISA test. For the DIGFA test, positivity was considered when either the "Echinococcus spp", "E. granulosus or E. multilocularis" or "E. granulosus" indicator was present, and the semiquantitative reading was based on the color intensity of the least intense spot. Examples of RDT results are shown in Fig 1.
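The SI cut-offs above translate directly into a small decision rule; the sketch below (with illustrative function names) encodes them, including the convention that borderline results count as negative in the analysis.

```python
# Classify ELISA results by Sample Index (SI) with the cut-offs stated above.
def classify_si(si: float) -> str:
    if si >= 1.1:
        return "positive"
    if si < 0.9:
        return "negative"
    return "borderline"

def positive_for_analysis(si: float) -> bool:
    """Borderline results were treated as negative in the analysis."""
    return classify_si(si) == "positive"

if __name__ == "__main__":
    for si in (0.50, 0.95, 1.10, 4.20):
        label = classify_si(si)
        analysed = "positive" if positive_for_analysis(si) else "negative"
        print(f"SI = {si:4.2f}: {label:10s} (analysed as {analysed})")
```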
Statistical analysis
The sample size was constrained by the procurement of tests. With a sample of 84 subjects, the study had 80% power for the pairwise comparison of the Area Under the ROC Curve (AUC) of the RDTs, calculated according to the method of Obuchowski [22] and based on the diagnostic performance of the ELISA test, as assessed in a previous work [9]. The difference in AUC was set at 15% and the correlation between two tests at 0.3. The alpha value was set at 0.01 to account for multiple comparisons. The Shapiro-Wilk test was used to test the normal distribution of quantitative variables. When quantitative variables were normally distributed, the results were expressed as the mean value and standard deviation (SD); otherwise the median and interquartile range (IQR; 25th-75th percentile) were reported. Qualitative variables were summarized as counts and percentages, and differences were analysed with the Chi-square test or Fisher exact test, as appropriate. For each test, overall and group-specific (active vs inactive) sensitivity and specificity values, as well as the AUC, were calculated together with their 95% Confidence Interval (CI). US classification of cysts was considered the gold standard. The performance of the RDTs was compared to that of the ELISA test using the McNemar test and t-test, as appropriate. The semiquantitative reading values of the VIRapid HYDATIDOSIS and DIGFA tests were correlated with the ELISA OD values using Spearman's rank correlation coefficient. Sixteen sera were randomly selected using an electronic random number generator and re-analyzed with the three RDTs for assessment of result reproducibility using Cohen's Kappa coefficient. P<0.05 was considered statistically significant. A Bonferroni-Holm correction was applied for multiple tests. All tests were two-sided. The data analysis was performed with the STATA statistical package (release 13.1, 2014, Stata Corporation, College Station, Texas, USA).
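For illustration, a minimal sketch of the core accuracy measures used here — sensitivity and specificity against the ultrasound gold standard and an exact McNemar test for the paired RDT-versus-ELISA comparison — is given below; the counts are hypothetical, not the study data.

```python
# Minimal sketch of sensitivity/specificity and an exact two-sided McNemar test.
from math import comb

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

def mcnemar_exact_p(b: int, c: int) -> float:
    """b, c: discordant pair counts (e.g., ELISA+/RDT- and ELISA-/RDT+)."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # binomial tail, p = 0.5
    return min(1.0, 2 * tail)

if __name__ == "__main__":
    # Hypothetical counts for one RDT against the US gold standard.
    print(f"Se = {sensitivity(tp=41, fn=18):.2f}, Sp = {specificity(tn=24, fp=1):.2f}")
    print(f"McNemar exact p = {mcnemar_exact_p(b=8, c=2):.3f}")
```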
Patients and sera characteristics
Eighty-four sera from 84 patients fulfilling the inclusion criteria were available for the study. Of these, 59 were patients with single CE cysts of the liver, while 25 had single non-parasitic hepatic cysts. Of the 59 CE patients, 38 had active and 21 inactive cysts, according to the WHO-IWGE classification. Eleven (18.6%) CE patients had received medical treatment with albendazole before sample collection (median 19.4 months before; IQR 10.6-51.6; range 3.1-113.0). The size of the cyst (largest diameter) was not significantly different between active and inactive CE cysts (p = 0.82), while non-parasitic cysts were significantly smaller than CE cysts (p<0.001). Clinical and demographic characteristics of the included patients and sera are summarized in Table 1.
Sensitivity and specificity of the tests
In one case VIRapid HYDATIDOSIS gave an invalid result (absence of the control band) and was therefore excluded from the analysis. In no case did the DIGFA test give an unequivocal "E. multilocularis" result. In 19 (38%) cases (13 of the 43 [30.23%] CE cysts with positive serology and 6 of the 7 [85.71%] non-parasitic cysts with positive serology) the DIGFA test failed to identify E. granulosus, but provided an "Echinococcus spp" or an "E. granulosus or E. multilocularis" result. General test sensitivity and specificity and the comparison with the results of the ELISA test are shown in Table 2. The performance of VIRapid HYDATIDOSIS was not statistically different from that of the ELISA test, while those of DIGFA (p = 0.045) and ADAMU-CE (p = 0.074) showed a borderline significant difference.
When we analyzed the sensitivity and specificity of the tests within groups (active, inactive, and non-parasitic), we found that ADAMU-CE was significantly less sensitive in the diagnosis of active cysts (p = 0.019), and DIGFA was significantly less specific when applied to samples from patients with non-parasitic cysts (p = 0.014), compared to ELISA (Table 2). Although a statistical analysis by individual CE stage was not possible due to the limited number of samples, the results are indicated in Table 2.
To explore the discrepancies between ELISA and RDT results, we analyzed the percentage of positive and negative RDT results of sera from CE patients stratified by ELISA OD groups, set as follows: negative, OD < 1.1; low-positive, 1.1 ≤ OD ≤ 5.0; high-positive, OD > 5.0. The threshold between low-positive and high-positive OD values was set arbitrarily. As shown in Table 3, we found that for all RDTs the percentage of positive results increased passing from the negative to the low-positive to the high-positive ELISA OD group, with discrepancies between the ELISA and RDT tests being most frequent in the low-positive ELISA OD group.
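The OD stratification can likewise be written as a tiny helper; a sketch with an illustrative name:

```python
# Assign an ELISA result to the OD groups defined above.
def od_group(od: float) -> str:
    if od < 1.1:
        return "negative"
    if od <= 5.0:
        return "low-positive"
    return "high-positive"

if __name__ == "__main__":
    for od in (0.8, 2.3, 7.5):
        print(f"OD = {od}: {od_group(od)}")
```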
ROC curves
ROC AUC characteristics and the results of the comparison between ROC curves are shown in Table 4 and Fig 2. In this analysis a statistically borderline significant difference was seen only between VIRapid HYDATIDOSIS and DIGFA (p = 0.042).
Semiquantitative reading
When we examined the correlation between ELISA OD values and the visual semiquantitative reading of band/dots color intensity of VIRapid HYDATIDOSIS and DIGFA, respectively, we found a significant positive correlation in both cases (p < 0.001), as shown in Fig 3.
Discussion
In rural underserved areas, where CE is most prevalent and health systems are basic and/or difficult to access, the availability of RDTs to help in the differential diagnosis of suggestive US lesions would be very useful. Although several reports described the performance of experimental RDTs in the diagnosis of CE, studies assessing and comparing the diagnostic accuracy of commercially available tests are very scant [10][11][12][13][14][15]. Feng and colleagues [12], using DIGFA with sera from China, reported a sensitivity of 83.4% for hepatic CE and a specificity of 93.4% when sera came from hospitalized patients. In our centre, the DIGFA test gave clearly inferior results, while our results were comparable with those found by the authors when sera from US screening campaigns were used (Se 71.8% for abdominal CE; Sp 78.1%). Feng and colleagues attributed these differences to the presence, in the field setting, of subjects exposed to the parasite without developing detectable lesions, or to the presence of old lesions not accompanied by positive serology. However, sera from hospitalized patients were collected less than 2 years after surgical treatment for CE. Moreover, the authors did not mention the distribution of CE stages in the two patient cohorts. It is therefore likely that the different performance between the two cohorts, and compared with our results, is at least in part due to differences in these variables, known to affect serology results. Santivanez and colleagues [23], using a previous version of the ADAMU-CE test on a panel of sera from surgically confirmed CE patients, found a better Se (80% on sera from liver cysts) and the same Sp (100% if sera from patients with alveolar echinococcosis were excluded; 89.8% if included) compared to our results. In their work, however, they did not provide details of the cyst stages. Therefore, the differences with our results may be at least in part due to differences in these conditions, although different performances between the two "versions" of the kit cannot be excluded. Similarly, Tamer and colleagues [14], evaluating the performance of the VIRapid test, reported a better Se (96.8%) and the same Sp (96% if sera from patients with other parasitoses were excluded; 87.5% if included) compared to our results, but they did not provide data on cyst characteristics; all CE patients included in their cohort were surgically confirmed, suggesting that predominantly active CE stages were included. Finally, Chen and colleagues [11], using sera from hospital cases, reported that the use of recombinant antigens in the DIGFA test might improve the specificity of the test, but at the expense of sensitivity.
In our centre, the VIRapid HYDATIDOSIS test showed the overall best diagnostic accuracy among the three RDTs, although it did not result statistically significantly better than the ELISA test. On the contrary, in comparison with the ELISA test, the ADAMU-CE test was significantly less sensitive in the diagnosis of active cysts, while the DIGFA test was significantly less specific. These results are in line with the literature, reporting overall better sensitivity for tests based on native antigens and better specificity for those based on recombinant antigens [6,7,19]. Not surprisingly, all RDTs were as poorly sensitive as the ELISA test in the diagnosis of inactive cysts. These results confirm the limits of serology in the diagnosis of CE and in supporting the differential diagnosis of CE1 and CE4-CE5 cysts from other hepatic lesions. Evidence exists that patients with CE have both common and stage-specific serology profiles, indicating that the development of both infection- and stage-specific immunoassays is possible [24,25]. Ahn and coworkers [24] showed that antigen 5 seems to be immunoreactive in every stage, as opposed to antigen B, whose proteoforms revealed a reduced antibody capture in the CE1, CE4 and CE5 stages. Unfortunately, so far, conventional methods used for antigen discovery, such as 2D gel electrophoresis of cyst fluid and immunoblot using sera from infected patients, did not allow the identification of stage-specific antigens to be used, alone or in a cocktail, for a more sensitive and stage-specific diagnosis and follow-up of patients. Clearly this should be the focus of high-priority work in the field.
As mentioned previously, many variables are known to influence CE serology results [9,19]. In this study, only sera from subjects with a single cyst located in the liver were included, excluding any influence of cyst number and location on the results. Similarly, the size of the cyst was not significantly different between active and inactive cysts; therefore this variable should not have significantly influenced the results. Finally, only 3 out of the 11 CE patients who were treated with albendazole before serum collection ended drug intake less than 12 months prior to sampling. Therefore, this variable should have only marginally influenced our results, given that previous research has shown that treatment ended more than 1 year before sampling does not have a significant impact on ELISA test results [9].
The sample size of this study was constrained by the strictness of the inclusion criteria and by the procurement of tests. However, it is pivotal that the first evaluation of diagnostic tests is performed on well-characterized and homogeneous samples. This, unfortunately, is very rarely done, with consequent problems in the interpretation and reliability of the results. The limitation of the number of samples that could be included in this work to comply with this principle was therefore weighed against the quality of baseline data on the evaluation of the RDTs that such an approach could provide. In this work, we included sera from patients with non-parasitic cysts as controls because non-parasitic cysts represent the most common differential diagnosis of hepatic CE cysts. Surely further work should thoroughly evaluate the specificity of the tests with sera from patients with other parasitoses, in particular alveolar echinococcosis. However, it must be stressed that serology for CE should be performed only after lesions compatible with echinococcosis are found by imaging, to increase the pre-test probability of the presence of infection. Indeed, due to the low prevalence of infection (and the consequent very low Positive Predictive Value of any test), the generally low specificity of serodiagnostic tests (especially an issue in areas where contact with the parasite without cyst development and the number of other diseases affecting the population may be significant), and the very low sensitivity of serodiagnosis in extra-hepatic CE (limiting the use of serology to diagnose CE in organs not explorable by ultrasound), the value of serological screenings is limited, and they should be conducted only after careful evaluation of the scientific question such studies are meant to answer.
To conclude, our results show that RDTs have an overall comparable performance to the routine ELISA test in the diagnosis of hepatic CE in well-defined stages, although significant differences in diagnostic accuracy exist among them. These results support their use in resource-poor settings to complement ultrasound diagnosis of CE in doubtful cases. However, all tests are poorly sensitive in the presence of inactive and CE1 cysts, which are cyst stages that may pose considerable problems of differential diagnosis. Furthermore, studies are warranted to explore the performance of RDTs in the follow-up of CE patients, which is often extremely difficult to perform with regular US examinations in endemic areas. VIRapid HYDATIDOSIS appeared to perform best among the examined kits and deserves further testing with a larger cohort including other control groups (e.g. with other parasitoses) and sera from patients with extra-hepatic CE cysts and with CE cysts of different parasite genotypes. The test also deserves further evaluation in the field and with the use of whole blood from fingerprick sampling. Finally, benefit studies on the use of RDTs, and serology tests in general, are lacking in the field of CE and deserve future efforts.
Table 1 .
Characteristics of patients and sera included in the analysis. The size of cysts is expressed as the median of the largest diameter in mm with IQR. # N ABZ (n<1y): number of subjects having received albendazole before sample collection and number of patients who ended albendazole intake less than 1 year before sample collection (in brackets). doi:10.1371/journal.pntd.0004444.t001
Table 2 .
Test sensitivity and specificity. Results are compared with those of the ELISA test. US diagnosis was used as the gold standard. Significant differences are indicated in bold.
Table 3 .
Positive and negative results of RDTs stratified by ELISA OD values. | 2018-04-03T04:55:51.545Z | 2016-02-01T00:00:00.000 | {
"year": 2016,
"sha1": "0f3ecb52cb8ebaafca9a001093abc8df2f0f1886",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0004444&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0f3ecb52cb8ebaafca9a001093abc8df2f0f1886",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12423301 | pes2o/s2orc | v3-fos-license | Mesenchymal stem cell transformation and sarcoma genesis
MSCs are hypothesized to potentially give rise to sarcomas after transformation and therefore serve as a good model to study sarcomagenesis. Both spontaneous and induced transformation of MSCs have been reported; however, spontaneous transformation has only been convincingly shown in mouse MSCs, while induced transformation has been demonstrated in both mouse and human MSCs. Transformed MSCs of both species can give rise to pleomorphic sarcomas after transplantation into mice, indicating the potential MSC origin of so-called non-translocation-induced sarcomas. Comparison of expression profiles and differentiation capacities between MSCs and sarcoma cells further supports this. Deregulation of the P53, Retinoblastoma, PI3K-AKT and MAPK pathways has been implicated in transformation of MSCs. MSCs have also been indicated as the cell of origin in several types of chromosomal-translocation-associated sarcomas. In mouse models the generated sarcoma type depends on, amongst others, the tissue origin of the MSCs, the targeted pathways and genes, and the differentiation commitment status of the MSCs. While some insights are growing, it is clear that more studies are needed to thoroughly understand the molecular mechanisms of sarcomagenesis from MSCs and the mechanisms determining the sarcoma type, which will potentially give directions for targeted therapies.
Introduction
MSCs have been under intensive research and application efforts since their first establishment by Friedenstein and his colleagues in 1968 [1]. Standard criteria developed by the International Society for Cellular Therapy define MSCs by three characteristics: 1) plastic adherence under standard culture conditions, 2) expression of CD105, CD73 and CD90 and no expression of CD45, CD34, CD14, CD11b, CD79b, CD19 and HLA-DR and 3) capacity to differentiate into osteoblasts, chondroblasts and adipocytes in vitro, termed trilineage differentiation potential ( Figure 1) [2].
Owing to the ease of isolation and expansion, the multilineage differentiation potential and a variety of physiological functions, MSCs are applied in a wide range of experimental and medical applications. Among them are the enhancement of hematopoietic stem cell engraftment, the amelioration of acute graft-versus-host disease and cardiac diseases, and regenerative medicine approaches, especially for bone and cartilage [2].
Cell transformation is a process during which genetic changes occur, resulting in cells with the ability to grow indefinitely and anchorage-independently and with tumorigenic properties upon transplantation [3][4][5][6][7]. Senescence has been overcome in these transformed cells [4,[8][9][10]. On the one hand, the potential of MSCs to transform, to initiate sarcomas and, under some conditions, to facilitate tumour progression calls for caution in MSC-based applications [5,11]. On the other hand, the transforming property of MSCs and their possible role as sarcoma progenitors make these cells useful for studying sarcomagenesis and progression. In this review we present an overview of the roles of MSCs in sarcomas, with a specific focus on tumorigenic transformation and sarcomagenesis.
Spontaneous mouse MSC transformation
Mouse MSCs have been consistently demonstrated to spontaneously undergo tumorigenic transformation after long term ex vivo culture [4,6]. This transformation process can also be induced by certain manipulations, including both gene targeting and drug or chemical treatment to affect crucial pathways (Table 1) [11][12][13][14]. In contrast, human MSCs do not spontaneously transform in vitro, even after long term culturing, which will be discussed later in more detail [9,13,15].
Mouse MSCs are reported to spontaneously undergo changes in morphology, proliferation rate, migration ability, cell surface marker profile, genomic constitution and, most importantly, tumorigenicity after long-term in vitro culture [4,9,20,21]. Meanwhile, one study has also revealed that mMSCs could transform even after short-term in vitro culture. Injection of passage 3 mMSCs into mice resulted in the formation of tumours comparable with soft tissue sarcomas [19]. Transformed mouse MSCs always show a higher proliferation rate than the native cells [3,4]. These transformed cells exhibit tumorigenicity, as shown by anchorage-independent growth assays and xenotransplantation in mice and zebrafish, while this is not observed with low-passage mouse MSCs before their transformation [3,4,6,22]. Interestingly, the readiness of in vitro tumorigenic transformation seems to be a unique property of mouse MSCs, since it is absent in most other mouse stem cells, including hematopoietic stem cells and embryonic stem cells [23]. This readiness can probably be ascribed to the genetic instability already shown in mouse MSCs very shortly after isolation from bone marrow, although the cytogenetic abnormalities in low-passage mouse MSCs are considerably fewer in number than in transformed mouse MSCs [23]. Interestingly, spontaneous MSC transformation happens much less frequently in vivo, as shown by the low incidence of spontaneous sarcomagenesis in mice. This can possibly be explained by the different microenvironments of the in vitro and in vivo conditions for MSCs. Solid research on the role of the in vivo niche of BMMSCs in guarding their genomic stability is needed to answer this question more exactly.
Induced mouse MSC transformation
Transformation of mouse MSCs has been induced by an array of manipulations, including knockout of tumour suppressor genes, overexpression of oncogenes and drug administration to affect signaling pathways. The pathways targeted by these manipulations are mostly involved in cell cycle checkpoint control, cell survival, proliferation and apoptosis (Table 2) [14]. In one study, loss of the tumour suppressors P21 and Tp53 in mouse adipose-derived MSCs (AMSCs) induced in vitro transformation and in vivo so-called fibrosarcoma formation after transplantation [24]. In another study, both Tp53−/− Rb−/− and Tp53−/− mouse AMSCs were generated through Cre-mediated excision of loxP-flanked loci. Leiomyosarcoma-like tumours developed in the in vivo tumorigenicity assays of these 2 types of mouse AMSCs [8].
The combination of Cdkn2a loss and C-myc overexpression in mouse BMMSCs gave rise to osteosarcomas, accompanied by the loss of adipogenic differentiation capacity in the transformed mouse BMMSCs [16]. Besides directly targeting in vitro cultured MSCs, several genetically engineered mouse models have been developed to investigate the effects of genes on the transformation process. A conditional mouse model with homozygous Tp53 deletion has been created by crossing Prx1-Cre transgenic mice with mice bearing Tp53 alleles flanked by loxP. Prx1 is specifically expressed in the early mesenchymal tissues of embryonic limb buds [17]. In these P53-deficient mice many types of sarcomas occurred in the mesenchymal cells of the limb buds, and osteosarcoma was the most common type. A mouse model with loss of RB, generated also through the Cre-loxP system, did not display tumorigenesis. However, loss of RB accelerated tumorigenesis in P53-deficient mice [17]. These induced transformation studies established the importance of the P53 pathway in preventing mouse MSC transformation.
Besides, in spontaneous transformation studies of mouse MSCs, defects in the Tp53 or Cdkn2a genes were frequently found [18]. P53 and P14, proteins encoded by these two genes, are both important members of the P53 pathway, further corroborating the crucial role of the P53 pathway in mouse MSC transformation [4,25]. Upregulated oncogenic pathways have also been shown to induce or potentiate mouse MSC transformation. Fos is an oncogene encoding a transcription factor downstream of many growth factor pathways. Fos-overexpressing transgenic mice developed bone tumours, with chondrosarcomas as the main type [26]. This is puzzling, as the driver mutation in human central chondrosarcoma is IDH1 or IDH2 [27], while in peripheral chondrosarcomas the driver is not known [27,28]; in neither is there an indication for involvement of Fos [28,29]. The PI3K-AKT pathway is crucially involved in apoptosis and proliferation. In one study, a mouse model with homozygous loss of Pten, a negative regulator of the PI3K-AKT pathway, in smooth muscle lineage cells developed leiomyosarcomas [30,31]. The MAPK pathway is principally responsible for mitosis.
Human MSC transformation
Human MSCs have not been shown to undergo spontaneous transformation in vitro [9,15,43]. There have been few reports on spontaneous human MSC in vitro transformation, of which two turned out to be caused by contamination with tumour cell lines and were retracted afterwards [34,35,44]. Meanwhile, there are several studies demonstrating that human MSCs did not go through transformation in spite of long-term in vitro culturing [12,15]. As for the possibility of in vivo spontaneous transformation, there have been a few cases of osteosarcoma genesis in patients infused with bone marrow MSCs for other diseases [45][46][47]. The majority of studies of human MSC transformation are based on genetic approaches to knock out important tumour suppressor genes and overexpress certain oncogenes (Table 3) [14]. In contrast to mouse MSC studies, four of the induced human MSC transformation studies involve the exogenous expression of hTERT in human cells [38,[48][49][50]. This may be attributed to the much shorter telomeres in human MSCs than in their mouse counterparts, the much shorter life span of mice than of humans, and the difference in telomere damage signaling pathways between mouse and human [41,50,51]. Consistent with mouse MSC studies, the disruption of cell cycle control machineries, exemplified by the P53 and RB pathways, is also important for human MSC transformation. For instance, the introduction of SV40-LT, which perturbs both P53 and RB proteins, potently promoted human MSC transformation [38]. Furthermore, the overexpression of some oncogenes has also been shown to contribute to the transformation, such as H-RAS [5][6][7]. Although the definite spontaneous transformation capacity of mouse MSCs is not mirrored by human MSCs, the signaling pathways underlying their tumorigenic transformation show high consistency, including the P53 pathway, RB pathway, PI3K-AKT pathway and MAPK pathway.
MSCs as the origin of sarcomas and tumour type specificity
There is substantial evidence supporting an MSC origin of a spectrum of sarcomas, both pleomorphic and translocation-driven subtypes. In the non-translocation-driven sarcoma types, the correspondence between the differentiation capacity of MSCs and the histological spectrum of different types of sarcomas is reflected (Figure 1). Several approaches and methods have been used to investigate this hypothesis, including differentiation assays, expression profiling and immunohistochemistry [52][53][54]. Based on the site of presentation, sarcomas can be categorized into bone tumours and soft tissue tumours. Based on genetic profiles, sarcomas can be categorized into two groups, one with relatively simple genetic alterations, associated with either point mutations or reciprocal translocations, and the other with extensive genetic changes. Examples of the cytogenetically relatively simple group are alveolar rhabdomyosarcoma, myxoid liposarcoma, Ewing sarcoma and synovial sarcoma. Examples of the other group are leiomyosarcoma, undifferentiated pleomorphic sarcoma and osteosarcoma [55]. MSC differentiation towards a defined and differentiated cell type is a process with many different signaling pathways and differentiation stages involved (Figure 2). The sarcoma type arising from in vitro transformed MSCs after inoculation into mice seems to be dependent on many factors, including the originating tissue of the MSCs, the differentiation commitment status of the targeted cell and also the targeted molecular pathways. In most cases with bone marrow-derived mouse MSCs (BMMSCs) or osteochondro-progenitors, osteosarcoma-like tumours were formed. With AMSCs or smooth muscle cell progenitors, leiomyosarcomas were mostly formed (Table 4) [8,16,24]. BMMSCs from aged mice tend to spontaneously give rise to so-called fibrosarcomas instead of the osteosarcomas seen in most spontaneous transformation studies [25]. It must be added that, according to the present view, fibrosarcoma is a poorly defined histological entity. It is necessary to perform large-scale studies to specifically address the relationship between tissue origin, targeted pathways and the sarcoma type generated, which are currently lacking.
Bone sarcomas
Ewing sarcoma
Ewing sarcoma arises predominantly in bone but in soft tissues as well. It is a poorly differentiated tumour known to be associated with EWSR1-ETS fusions or, rarely, other chimeras [59][60][61][62]. The exogenous expression of the fusion gene EWS-FLI1 alone in mouse MSCs has been shown to transform these cells, demonstrated by in vitro immortalization and in vivo sarcomatous tumour formation after inoculation into immunocompetent mice [63]. In another study a secondary genetic alteration was needed for the induced transformation of mouse MSCs [64]. Similar manipulations have also been applied to human MSCs. Human MSCs with exogenous EWS-FLI1 expression transformed, and these transformed cells expressed neuroectodermal markers [65]. Moreover, the knockdown of EWS-FLI1 expression in Ewing sarcoma cell lines restored the in vitro trilineage differentiation ability of the cells [52]. In a transgenic mouse model, expressing the EWS-FLI1 gene specifically in the mesoderm-derived tissues of the limbs with simultaneous Tp53 knockout gave rise to sarcomas with characteristics similar to Ewing sarcoma, while with Tp53 knockout alone the primary sarcoma type was osteosarcoma [66]. In brief, Ewing sarcoma, originally considered a tumour arising from the neuroectodermal lineage and not considered of mesenchymal origin, could be experimentally derived directly from MSCs, but only upon introducing the typical translocation. This strongly supports an MSC origin of Ewing sarcoma [67].
Osteosarcoma
Osteosarcoma is the most common primary malignant bone tumour among children. It is characterized by the production of osteoid and extensive cytogenetic instability [36]. Different studies have supported the MSC origin of osteosarcoma [4,16]. Both spontaneous and induced MSC models for osteosarcoma have been discussed above. Osteosarcomas mainly arise in the metaphyses of long bones, and the peak incidence is in the second decade of human life, correlating with the rapid bone growth during puberty, a process in which MSCs are crucially involved [37]. In both human osteosarcoma cells and transformed MSCs, frequent aberrations in genes encoding components of the P53 pathway have been identified [4,39]. In Tp53 knockout mice many types of sarcomas developed, and osteosarcoma was the main type [17].
Chondrosarcoma
A study compared the gene expression profiles of chondrosarcomas of different degrees of differentiation [53]. Less differentiated chondrosarcomas were shown to have more similarity with MSCs of pre-chondrogenic stages, and more differentiated chondrosarcomas share more similarity with fully differentiated chondrocytes. This suggests that chondrosarcoma progression probably parallels a deregulated chondrocyte differentiation process of MSCs [40,53].
Soft tissue sarcomas
Synovial sarcoma
In synovial sarcoma, exogenous expression of the SYT-SSX2 fusion gene in the skeletal-muscle-specific Myf5-expressing lineage induced the formation of synovial sarcomas in vivo. Remarkably, when this fusion gene was introduced into cells more differentiated than myoblasts, synovial sarcoma did not occur [68]. This fact emphasizes the important role of cell status in the genesis of a specific type of sarcoma. On the other hand, fusion gene silencing in primary synovial sarcoma cells restored both the trilineage differentiation capacity and the MSC marker expression, strongly suggesting cells of the MSC lineage as the origin of synovial sarcoma [69]. This may be explained by the fact that, although considered muscle-specific, Myf5 can also be expressed in some MSCs during development.
Other soft tissue sarcomas
Similar results as described above were seen in a mouse model of liposarcoma, where FUS-CHOP was able to induce liposarcoma genesis in MSCs, whereas no liposarcoma was formed when the FUS-CHOP gene was manipulated to be expressed only in differentiated, aP2-expressing adipocytes. This study again underscores the exact cell status as a crucial factor in sarcomagenesis [42,70]. However, other studies show that there is considerable plasticity in the different lineages, since rhabdomyosarcoma, an aggressive skeletal muscle tumour, can be generated from adipocytes by activation of Sonic Hedgehog signaling [71]. A third soft tissue sarcoma model is that of clear cell sarcoma, characterized by melanoma-like features and an EWSR1/ATF1 translocation. Conditional expression of the human EWSR1/ATF1 fusion gene in mouse gives rise to tumorigenesis with extremely brief latency. The most stem-like MSCs give rise to fully melanoma-like lesions, whereas more differentiated cells result in a less melanoma-like clear cell sarcoma phenotype [72].
Discussion
Until now there have not been many studies addressing how the tissue of origin and the method of preparation of MSCs affect their role as a model for sarcomagenesis. The conspicuous difference between mouse and human MSCs in spontaneous transformation can possibly be explained by many factors. In human cells, the telomeric DNA is often 5-10 kb long, while mouse cells have a telomeric DNA length of 30-40 kb [41,51]. The longer telomeres in mouse MSCs allow cells to proliferate for many generations before reaching the telomere length limit, giving a higher chance for cells to acquire aberrations [41,51]. Since mice have a shorter life span than humans, genome maintenance in mouse cells is also less stringent than in human cells [73]. The niche is one of the most important factors in the determination of stem cell characteristics. The function of the niche in stem cell differentiation and pluripotency maintenance is well known. There has also been research showing that low oxygen tension is important for multipotency maintenance of MSCs, while normal oxygen levels will induce differentiation [74]. Besides, the niche has also been indicated to be involved in tumorigenesis [75]. This suggests an important role of the niche in genomic instability and therefore in tumorigenic potential. One special feature of the bone marrow niche is the partnership of MSCs and haematopoietic stem cells, which deserves further exploration [76,77].
Future considerations
The numerous, well-documented studies on MSCs giving rise to sarcomas in experimental set-ups provide excellent models to study this devastating malignancy in a systematic and controlled way. This offers opportunities for preclinical testing of experimental therapies, thereby providing convincing data that may facilitate application in actual clinical trials despite small patient cohorts.
Conclusions
Although mouse MSCs have exhibited a definite readiness to transform in vitro, human MSCs do not go through transformation during ex vivo expansion and need additional manipulation before progression into sarcomas. Therefore, although there are a few cases of osteosarcoma genesis in patients infused with bone marrow MSCs [45][46][47], it is considered generally safe to use human MSCs in the clinic. | 2016-05-12T22:15:10.714Z | 2013-07-23T00:00:00.000 | {
"year": 2013,
"sha1": "31d73440c78213925cc5a81d645d6cd0d91ef5fb",
"oa_license": "CCBY",
"oa_url": "https://clinicalsarcomaresearch.biomedcentral.com/track/pdf/10.1186/2045-3329-3-10",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31d73440c78213925cc5a81d645d6cd0d91ef5fb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5038185 | pes2o/s2orc | v3-fos-license | The Usefulness of Dual-Layer Spectral Computed Tomography for Myelography: A Case Report and Review of the Literature
We describe a case of lumbar stenosis in which retrospective spectral analysis using dual-layer spectral detector computed tomography (CT) had the ability to expand the evaluable region in the spinal canal. Spinal canal stenosis is a common condition whose symptoms (such as lower back and leg pain with walking) deteriorate the quality of life. Generally, magnetic resonance imaging (MRI) and CT myelography are performed to diagnose canal stenosis. Dual-layer spectral detector CT can yield virtual monochromatic imaging and retrospective on-demand spectral analysis without a prescan setting. Spectral analysis could expand the evaluable region in the spinal canal by increasing the contrast enhancement in the canal.
Introduction
Lumbar canal stenosis leads to compression of the thecal sac and may also involve the nerve roots because of narrowing of the intervertebral foramina. Computed tomography (CT) enhanced by myelographic contrast and magnetic resonance imaging (MRI) are well-known diagnostic tools for lumbar canal stenosis [1,2].
In recent years, MRI has become the "gold standard" in the diagnosis of lumbar spinal stenosis because of its potential in visualizing radiolucent soft tissues and the absence of radiation exposure. However, MRI may not be useful for the evaluation of postoperative lumbar canal stenosis due to susceptibility artifact. In addition, MRI may be contraindicated in patients with pacemakers [3]. CT myelography is useful in these instances.
Recently, dual-energy CT has become clinically available [4,5]. By means of high- and low-energy X-ray spectra, dual-energy CT acquisition facilitates a greater degree of material characterization than conventional single-energy acquisition. A previous report suggested the usefulness of dual-energy CT to improve image quality when assessing bone marrow edema because of its ability to synthesize virtual monochromatic images [6]. Virtual monochromatic images are particularly useful for evaluation of the differences in contrast enhancement between the spine and the spinal canal. In CT myelography, the spinal cord is evaluated using an intrathecal contrast agent. However, the image quality in cervical and upper thoracic CT myelography is suboptimal in some cases [7]. A previous report suggested the usefulness of low-keV virtual monochromatic images for increasing the contrast enhancement of vascular structures and hepatic parenchyma [8]; therefore, dual-energy CT might increase the contrast enhancement in the canal. A previous report suggested the usefulness of dual-energy CT for the reduction of artifact and radiation dose [9] in CT myelography. However, it did not evaluate the utility of dual-energy CT in increasing the contrast enhancement of the spinal canal in CT myelography.
Recently, the first commercially available dual-layer spectral detector CT (IQon Spectral CT; Philips Healthcare, Best, Netherlands) has been introduced for clinical use. The scanner has a single X-ray source and two layers of detectors. The two layers collect the low-energy data and the high-energy data, respectively. This scanner enables us to acquire the low- and high-energy data simultaneously.
Here, we describe a case of lumbar canal stenosis in which retrospective on-demand spectral analysis using dual-layer spectral detector CT allowed a better evaluation of the thoracic and lumbar canal compared with conventional CT.
Case Report
A 73-year-old Asian male complained of bilateral buttock pain radiating into his thighs and calves. He could not walk for more than 10 min or 2-3 blocks due to pain. Initial patient consultation was made by the orthopedic department in our hospital. Laboratory data were unremarkable.
On physical examination, he was bilaterally positive for Lasègue's sign. MRI in our hospital showed multiple compression fractures (T10, T12, and L1) and spinal stenosis (L1-L2). The patient underwent CT myelography for preoperative evaluation. The CT myelogram was performed following lumbar puncture at the L2-L3 level under fluoroscopy in the prone position and injection of 15 mL of Omnipaque® 300 (iohexol) contrast. CT myelography was performed using a dual-layer spectral detector CT with a routine scan protocol. The scanning was started 10 min after contrast material injection. The scan parameters were as follows: detector configuration, 64 × 0.625 mm; gantry rotation time, 0.75 s; helical pitch (beam pitch), 0.578; tube voltage, 120 kVp; tube current-time product, 162 mAs (effective mAs) with automodulation; and volume CT dose index, 13.9 mGy. This CT scan led to the diagnosis of lumbar canal stenosis. The CT myelogram also showed compression fractures of L2 and L3 with associated lumbar canal stenosis (Figure 1). Furthermore, we performed retrospective spectral analysis using the workstation (Spectral Diagnostic Suite; Philips Healthcare, Best, Netherlands). The contrast attenuation in the spinal canal at 40 and 55 keV is better than that of the conventional images (Figure 1).
Additionally, we performed quantitative image analysis on the conventional CT images and the spectral image data. We measured the mean CT attenuation of the spinal canal at the level of T6 using a circular region of interest (ROI_canal). This ROI_canal was chosen not to be so large that it included epidural fat, bone or spine. In addition, we also measured the CT attenuation of the spinal cord using a circular region of interest (ROI_spinal) at the same level. Similarly, the ROI_spinal was chosen not to be so large that it included the spinal canal. The reason why we selected the level of T6 was that the contrast of the spinal canal might be lowest there in the conventional images. In addition, we defined the standard deviation of attenuation at the iliopsoas muscle as the image noise. We measured the image noise at three sequential slices and averaged the results to minimize bias from single measurements. We also measured the contrast and the contrast-to-noise ratio (CNR) between the spinal cord and the spinal canal. We defined the contrast as follows: contrast = ROI_canal − ROI_spinal. The CNR was calculated as follows: CNR = (ROI_canal − ROI_spinal)/image noise. We used a copy-and-paste function at the workstation to keep all measurements constant among the three kinds of images. The results are shown in Table 1. The ROI_canal and CNR of the virtual monochromatic images at 40 and 55 keV were significantly higher than those of the conventional CT images. In addition, the image noise of the virtual monochromatic images at 40 and 55 keV was significantly lower than that of the conventional CT images. The conventional and virtual monochromatic images are shown in Figure 1.
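A minimal sketch of these two definitions, using the 40 keV measurements quoted in the discussion below as a worked example (function names are illustrative):

```python
# Contrast and contrast-to-noise ratio (CNR) as defined in the text.
def contrast(roi_canal_hu: float, roi_spinal_hu: float) -> float:
    return roi_canal_hu - roi_spinal_hu

def cnr(contrast_hu: float, noise_hu: float) -> float:
    return contrast_hu / noise_hu

if __name__ == "__main__":
    # 40 keV virtual monochromatic series: contrast 548.4 HU, image noise 28.8 HU.
    print(f"CNR(40 keV) = {cnr(548.4, 28.8):.1f}")  # ~19.0, as reported
```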
We made a diagnosis of lumbar stenosis at L1-L2, associated with his clinical symptoms, and surgical treatment was proposed. The patient underwent laminectomy of L1-L2. He was discharged with symptomatic improvement after the operation.
Discussion
CT myelography is used to evaluate the spinal canal, spinal cord, spinal nerve roots, vertebrae, and discs when MRI is contraindicated. The amount of contrast agent collecting in the upper thoracic and cervical thecal sacs may be less than that in the lumbar region. Therefore, conventional CT myelography can be suboptimal for the upper thoracic and cervical canals. We suggest that spectral analysis using the dual-layer spectral detector CT is clinically useful for improving image quality in the upper thoracic and cervical canals. A previous report suggested that a lower energy level (approaching the K-edge of iodine) increases the attenuation of iodine because of the predominance of the photoelectric effect [10]. Therefore, the iodine-containing spinal canal becomes hyperattenuated at lower energy levels on virtual monochromatic imaging [10]. Hence, in CT myelography, we might evaluate the upper thoracic and cervical canals more accurately using virtual monochromatic imaging than with conventional CT.
In this case, the contrast between the spinal cord and spinal cavity in the thoracic canal was significantly greater on virtual monochromatic 40 keV and 55 keV images than on conventional 120 kVp images (548.4 and 284.1 HU versus 192.0 HU). In addition, there was no significant difference in the image noise between the virtual monochromatic 40 keV and 55 keV images and the conventional 120 kVp images (28.8 and 23.9 HU versus 32.4 HU).
The CNR of the virtual monochromatic 40 keV and 55 keV images was significantly higher than that of the conventional 120 kVp images (19.0 and 11.9 versus 4.9). Previous reports have suggested the usefulness of dual-layer spectral detector CT in reducing beam-hardening artifact and improving the image quality of coronary artery and abdominal CT scans [11][12][13]. However, to our knowledge, there have been no previous reports on the usefulness of dual-layer spectral detector CT in increasing the attenuation of the spinal canal in CT myelography.
There are two dual-energy CT systems in common clinical use. The first uses two orthogonal X-ray tubes set at different kVp levels with two separate detectors. The second uses rapid kVp switching from a single X-ray source (the fast kVp-switching method); the dual-layer approach instead pairs a single X-ray source with a detector composed of two scintillation layers. Images from conventional dual-energy CT techniques cannot be reconstructed retrospectively at an adjusted energy level, which can limit their clinical use. The introduction of dual-layer spectral detector CT, however, enables prospective and retrospective generation of spectral images from every scan. Therefore, when the contrast agent dose in the spinal canal during CT myelography is too low, retrospective spectral data analysis can increase the attenuation of the contrast agent in the spinal canal and might increase the diagnostic performance of the study. The introduction of dual-layer spectral detector CT for CT myelography yields some clinical utility. First, lower energy levels on virtual monochromatic imaging can increase the CT attenuation of iodine in the spinal canal and might reduce the required contrast agent dose and concentration for CT myelography. The CT attenuation of the canal on the 40 keV image was 2.7 times higher than that on the conventional CT image (630.8 HU versus 230.4 HU). We supposed that the CT attenuation of the spinal canal is directly proportional to the contrast agent dose. The contrast agent dose for dual-layer CT might then be represented by the following formula: contrast agent dose (dual-layer CT) = contrast agent dose (conventional CT) × (230.4/630.8). Therefore, we might be able to reduce the contrast agent dose by about 60% while preserving the CT attenuation of the canal, compared with conventional CT. Moreover, the reduction in contrast agent concentration might enable us to use a thinner (higher-gauge) needle to inject the contrast agent, which might decrease technical complications such as dural tear, nerve root damage, CSF leak, and hemorrhage.
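Under the stated assumption that canal attenuation scales linearly with the injected dose, the dose-reduction estimate above is reproduced by the following sketch (arithmetic only, not a validated dosing rule):

# Assumes canal CT attenuation is directly proportional to contrast agent
# dose, as the text supposes; HU values are those quoted in the paragraph above.
conventional_hu = 230.4  # canal attenuation, conventional 120 kVp image
mono_40kev_hu = 630.8    # canal attenuation, 40 keV virtual monochromatic image

dose_fraction = conventional_hu / mono_40kev_hu
print(f"required dose fraction: {dose_fraction:.3f}")        # ~0.365
print(f"potential dose reduction: {1 - dose_fraction:.0%}")  # ~63%, i.e. about 60%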
Conclusion
In conclusion, this case study suggested that dual-layer spectral detector CT increased the attenuation of the spinal canal in CT myelography and improved the image quality compared with conventional CT.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2018-05-03T00:19:16.288Z | 2018-03-04T00:00:00.000 | {
"year": 2018,
"sha1": "b3bddd5ca2f95c6c0830bcdbfc816074c40712b4",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crior/2018/1468929.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3bddd5ca2f95c6c0830bcdbfc816074c40712b4",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225083211 | pes2o/s2orc | v3-fos-license | Optimization of Viable Glioblastoma Cryopreservation for Establishment of Primary Tumor Cell Cultures.
Background: Technologies related to the establishment of primary tumor cell cultures from solid tumors, including glioblastoma, are increasingly important to oncology research and practice. However, processing of fresh tumor specimens for establishment of primary cultures on the day of surgical collection is logistically difficult. The feasibility of viable cryopreservation of glioblastoma specimens, allowing for primary culture establishment weeks to months after surgical tumor collection and freezing, was demonstrated by Mullins et al. in 2013, with a success rate of 59% that was not significantly lower than that achieved with fresh tumor tissue. However, research targeting optimization of viable glioblastoma cryopreservation protocols for establishment of primary tumor cultures has been limited. Objectives: The objective of this study was to optimize glioblastoma cryopreservation methods for viable cryobanking and to determine whether two-dimensional (2D) or three-dimensional (3D) culture conditions were more supportive of glioblastoma growth after thawing of frozen tumor specimens. Methods: Portions of eight human glioblastoma specimens were cryopreserved by four different protocols differing in the timing of enzymatic digestion (before or after cryopreservation) and in the type of cryopreservation media (CryoStor CS10, or 10% dimethyl sulfoxide and 90% fetal calf serum). After 1 month, frozen tissues were thawed, enzymatically digested, if not digested before, and used for initiation of 2D or 3D primary tumor cultures to determine viability. Results: Among the tested cryopreservation and culturing protocols, the most efficient combination of cryopreservation and culture was the use of CryoStor CS10 cryopreservation medium, enzymatic digestion before freezing, and 2D culturing after thawing, with a successful culture rate of 8 out of 8 cases (100%). Two-dimensional cultures were in general more efficient for the support of tumor cell growth after thawing than 3D cultures. Conclusions: This study supports development of evidence-based viable glioblastoma cryopreservation methods for use in glioblastoma biobanking and research.
Introduction
Glioblastoma is the most common malignant primary brain tumor and is the focus of extensive basic and clinical research efforts targeting improved therapies. 1,2 Technologies related to the establishment of primary low-passage tumor cultures are increasingly important to oncology research and practice. [3][4][5][6][7] However, processing of fresh tumor specimens for the establishment of primary cultures on the day of surgical collection is logistically difficult. Importantly to tissue biorepository practices, viable cryopreservation of glioblastoma specimens allowing for primary culture establishment months to years after surgical tumor collection and freezing has been demonstrated with a success rate not significantly lower than that achieved with fresh tumor tissue. 8 Specifically, the success rate of primary culture establishment after cryopreservation in that study (59%) was similar to the success rate when using fresh tissue (63%). Furthermore, no relevant molecular or phenotypic differences between cell lines established from fresh or viable frozen tissue were observed. 8 Following this study, research targeting optimization of viable glioblastoma cryopreservation protocols for establishment of primary tumor cultures has been limited. The aim of our study was to optimize glioblastoma cryopreservation methods for viable cryobanking relative to two critical factors, digestion conditions and cryopreservation media, in the context of two-dimensional (2D) or three-dimensional (3D) post-thaw cultures. We explored the use of two different cryopreservation media (CryoStor CS10 medium, and 10% dimethyl sulfoxide [DMSO] and 90% fetal calf serum [FCS]), and the timing of tissue digestion, before or after freezing. We assessed the impact of these two critical parameters on the success rate of 2D or 3D post-thaw primary cultures from eight glioblastoma specimens.
Cryopreservation
With written patient consent, portions of fresh human glioblastoma specimens that were not needed for pathology diagnosis from eight patients (Table 1) were either snap-frozen in liquid nitrogen without cryopreservation media or were processed for cryopreservation by four cryopreservation protocols with different specifications in the use of enzymatic digestion before or after cryopreservation and in the type of cryopreservation media (Fig. 1). Protocols used were approved by the Institutional Review Board (IRB) of the University of Illinois at Chicago and were in accordance with the Declaration of Helsinki 1975, as revised in 2008. All tissue specimens were first placed in DMEM/F-12 medium (Sigma-Aldrich) with 2 mM glutamine (Gibco, Gaithersburg, MD) and penicillin-streptomycin (Sigma-Aldrich) and minced with the use of sterile #10 scalpels into ~3 mm pieces within 30 minutes of specimen resection (Fig. 1). For the specimens submitted to enzymatic digestion before freezing, media were then replaced with 2 mL per 0.5 g minced tissue of freshly prepared Enzymatic Tissue Dissociation Media solution (5 mL 0.05% Trypsin/EDTA, 2.5 mL Hank's Balanced Salt Solution [Sigma-Aldrich], calcium and magnesium free, and 2.5 mL Collagenase IV [ThermoFisher Scientific] stock solution [2000 U/mL in HBSS with calcium and magnesium]). The tissue specimens were digested under rotation for 10 minutes at 37°C in a tissue culture incubator. Digestion was stopped by addition of two volumes of stop solution (5 mL trypsin inhibitor solution, 5 mL DMEM/F12, 2 mL of 5000 U/mL DNase I [Sigma-Aldrich; made in HBSS, calcium and magnesium free]) followed by filtering out of undigested material with a 100 μm strainer (BD Biosciences), centrifugation at 800 g for 5 minutes at room temperature, and suspension and washing of pellets in DMEM/F-12 medium.
Tissue specimens with or without prefreeze enzymatic digestion were suspended in either CryoStor CS10 (BioLife Solutions) or 10% DMSO (Sigma-Aldrich) and 90% FCS (Fisher, Ontario, Canada) cryopreservation media, frozen in these media with the use of Nalgene Mr. Frosty Cryo 1°C Freezing Containers (Thermo Scientific) at a controlled cooling rate of -1°C per minute until reaching -80°C, and then stored long-term in liquid nitrogen vapor phase until further studies (Fig. 1).
Two-dimensional and 3D cultures were observed for cell growth under an inverted microscope. Two-dimensional cultures with cell attachment and growth (Fig. 2A) were trypsinized and transferred to new wells on 24-well tissue culture plates and cultured for additional passages. Three-dimensional cultures demonstrating cell growth (Fig. 2B) were treated with 20 U/well of Dispase (Fisher Scientific), and cells were transferred to new wells on 24-well tissue culture plates and cultured for additional passages in 2D and 3D. The acceptance criterion for successful establishment of primary cultures was cell growth in secondary plates covering at least 50% of the plate. The percentage of cell growth coverage of plates was established by viewing cultures daily under an inverted microscope and recording the average of the estimated coverage of the surface of ten 10× microscopic fields by cells. To compare the immunophenotype of the established cultures with the diagnostic pathology report of the original tumor specimen, cells were removed with a sterile cell scraper, fixed in 10% formalin, centrifuged at 1200 rpm, embedded in paraffin, and sectioned. Five-micron sections were processed for immunostaining using a Leica Bond Automated Immunostainer to determine the expression of GFAP, the R132H mutant form of the IDH-1 protein, and p53 in the cells.
Results
Success rate of establishment of primary glioblastoma cultures following the use of different cryopreservation protocols and 2D or 3D culture conditions

Portions of eight human glioblastoma specimens were either snap-frozen in liquid nitrogen without cryopreservation media or were cryopreserved by four different protocols differing in the use of enzymatic digestion before or after cryopreservation and in the composition of cryopreservation media (CryoStor CS10 or 10% DMSO and 90% FCS) (Fig. 1). After 1 month, frozen tissues were thawed, enzymatically digested if not digested before, and used for initiation of 2D or 3D primary tumor cultures to determine viability (Fig. 1). No primary cultures could be isolated from glioblastoma specimens that had been snap-frozen without the use of cryopreservation media. Among the tested cryopreservation and culturing protocols, the most efficient combination of cryopreservation and culture was that associated with the use of CryoStor CS10 cryopreservation medium, enzymatic digestion before freezing, and 2D culturing after thawing, with a successful culture rate of 8 out of 8 cases (100%) (Table 2 and Fig. 2). The combination of cryopreservation with the use of CryoStor CS10 medium, enzymatic digestion after freezing, and 2D post-thaw culturing showed a slightly lower successful culture rate of 7 out of 8 cases (87.5%) (Table 2). Protocols associated with the use of 10% DMSO and 90% FCS cryopreservation medium, enzymatic digestion before or after freezing, and 2D culturing after thawing were associated with a lower successful culture rate of 5 out of 8 cases (62.5%) (Table 2). Two-dimensional cultures were more efficient for the support of tumor cell growth after thawing than 3D cultures (Table 2). Cryopreserved samples expressed GFAP, the R132H mutant form of the IDH-1 protein, or p53 in a pattern similar to that documented in the diagnostic pathology workup of the original tumor specimens (Table 3 and Fig. 3).
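As a compact restatement, the sketch below tabulates the 2D post-thaw success proportions quoted in this section (Table 2 itself is not reproduced here, and the grouping labels are ours):

# 2D post-thaw culture success rates, as quoted in the text above.
results_2d = {
    ("CryoStor CS10", "digested before freezing"): (8, 8),
    ("CryoStor CS10", "digested after freezing"): (7, 8),
    ("10% DMSO / 90% FCS", "digested before or after freezing"): (5, 8),
}

for (medium, timing), (successes, total) in results_2d.items():
    print(f"{medium}, {timing}: {successes}/{total} = {successes / total:.1%}")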
Discussion
This study confirms the feasibility of viable glioblastoma cryopreservation and provides novel information about critical preanalytical determinants of the post-thaw viability of glioblastoma. We show here that no primary cultures could be produced from glioblastoma specimens that had been snap-frozen without the use of cryopreservation media. However, consistent with previous studies, 8 we found that cryopreservation of viable glioblastoma is possible and allows for the establishment of primary glioblastoma cultures following thawing. We tested a series of cryopreservation protocols and found that the most efficient combination of upstream cryopreservation method and downstream culture conditions was the use of CryoStor CS10 cryopreservation medium, enzymatic digestion before freezing, and 2D post-thaw culturing, with a success rate of 8 out of 8 cases (100%). The immunohistochemical profiles of the established primary 2D or 3D cultures following cryopreservation were similar to the original surgical specimens. Our findings show that glioblastoma cryopreservation is fit for purpose, with the use of CryoStor CS10 cryopreservation medium after enzymatic digestion of fresh tissue, for establishment of primary tumor cultures.
Why CryoStor CS10 was better than 10% DMSO and 90% FCS is not evident from our experiments; however, several previous studies reported better cell survival following cryopreservation in a variety of experimental systems with the use of CryoStor solutions over a number of other cryopreservation media. [9][10][11][12] CryoStor CS10 is a serum-free, protein-free, animal component-free intracellular-like defined cryopreservation medium containing 10% DMSO that has been designed to better maintain the ionic balance of cells at hypothermic and freezing temperatures and allow for more rapid recovery by reducing cryopreservation-induced stress and damage. [9][10][11][12] The design of CryoStor CS10 includes pH buffering, free radical scavenging, oncotic/osmotic support, energy substrates, and ionic concentrations for increased balance at ultralow temperatures. [9][10][11][12] We found that cryopreservation of glioblastoma tissue with enzymatic digestion before freezing was more effective for viable cryopreservation than cryopreservation without enzymatic digestion before freezing. Why enzymatic digestion before freezing was more effective is not evident from our experiments; however, previous studies reported both positive and negative effects of prefreeze enzymatic digestion on post-thaw cell viability using tissue types other than glioblastoma. 13,14 Our findings together with previous observations by others suggest that enzymatic digestion before freezing may have a minor, tissue-specific positive or negative effect on tissue viability during cryopreservation.
Our study indicates that cryopreservation of mechanically minced glioblastoma tissue with enzymatic digestion before freezing was more effective for viable cryopreservation than cryopreservation of mechanically minced glioblastoma tissue without enzymatic digestion before freezing. However, it should be noted that there are limitations for enzymatic digestion before cryopreservation. These limitations primarily relate to the work-and time-intensive nature of enzymatic digestion that may not be easily performed routinely before cryopreservation. Cryopreservation of minced glioblastoma tissue in CryoStor CS10 without prior enzymatic digestion was nearly as efficient as the protocol with enzymatic digestion before cryopreservation. While 3D tumor cell cultures can be very useful to study many aspects of tumor cell growth and therapy response, 15,16 in our experiments, traditional 2D cultures provided a higher success rate than 3D cultures during post-thaw culturing of tumor cells. Although our study does not provide an explanation of this observation, possible causes may be related to growth inhibitory effects of the extracellular matrix in 3D cultures. Our experiments suggest that initial post-thaw culturing of glioblastoma cells should be performed in 2D.
Technologies related to the establishment of primary low-passage glioblastoma cultures are increasingly important to neuro-oncology biobanking and research, and a wide variety of protocols have been used and reported in the literature, without a clear consensus on optimal conditions. 4 Importantly, to tissue biorepository practices, cryopreservation of viable glioblastoma specimens, allowing for primary culture months to years after surgical tumor collection and freezing, has been demonstrated. 8 Our study points for the first time to some key determinants of the efficiency of viable glioblastoma cryobanking. We report an evidence-based optimized method, which can be fully validated in the future according to the requirements of ISO21899:2020 Biotechnology-Biobanking-General requirements for the validation and verification of processing methods for biological material in biobanks. 17 | 2020-10-28T13:06:01.953Z | 2020-10-27T00:00:00.000 | {
"year": 2020,
"sha1": "aba04a882171b8f8475e854d17a9bcec8a1b9ee7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1089/bio.2020.0050",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a75d24038284371ff4197904cf8b57c43dbae8e3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
186876790 | pes2o/s2orc | v3-fos-license | Finances of the Nation: Survey of Provincial and Territorial Budgets, 2018-19
For almost 60 years, the Canadian Tax Foundation published an annual monograph, Finances of the Nation, and its predecessor, The National Finances. In a change of format, the 2014 Canadian Tax Journal introduced a new “Finances of the Nation” feature, which presents annual surveys of provincial and territorial budgets and topical articles on taxation and public expenditures in Canada. This article surveys the 2018-19 provincial and territorial budgets. The underlying data for the Finances of the Nation monographs and for the articles in this journal will be published online in the near future.
Introduction
This article has two distinct parts. First, it sets out tables and charts that show aggregate figures related to projected 2018-19 budget revenues and expenditures for the various provinces and territories, as well as tables that show corporate income tax rates, personal income tax brackets and rates, and other matters. Second, the article summarizes the projected budget revenues and expenditures in tabular form and also summarizes the tax changes in narrative form, for each province and territory.

1 Ontario, 2018 Fall Economic Outlook and Fiscal Review, November 15, 2018.

The precipitous drop in the price of oil in 2014 became part of history as Alberta's economy started to recover, although there was a lag in the impact on government revenues. The price of oil has been volatile since that time, but until 2018 was essentially recovering; late in 2018, however, companies were reported to be buying up wells speculatively owing to another dip in oil prices. Alberta had said in its 2017 budget that it was "just now beginning to recover from the steepest and most prolonged slide in oil prices in recent history." 2 During the hot and dry summer of 2018, there were severe forest fires across the country (and in the United States) that will have unknown and long-term effects (following a massive fire in Fort McMurray, Alberta in 2016). Newfoundland and Labrador was hit hard by the 2014 drop in oil commodity prices and still stumbles toward recovery; the province made significant tax changes across the board in 2016 to increase revenues, but continues to face a serious financial situation. New Brunswick experienced an unrelated and extended downturn in its budgetary position and set about stemming deficits in 2015 and 2016. More than half of the provinces and territories projected a budget deficit (with expenditures exceeding revenue) in this cycle, while 5 of the 13 jurisdictions forecasted a surplus (or basically a flat budget) in the 2018-19 fiscal year based on projected increased economic growth and continued moderate spending restraint. Alberta opted not to make major changes in spending and faced major deficits; the province did not expect to return to a balanced budget until 2022-23. Most jurisdictions that issued projections expected to return to a balanced budget or surplus over long periods (four or more years). For example, Manitoba projected a return to surplus before the end of the government's second term in 2024; Newfoundland and Labrador sees a return to surplus in 2022-23.
Because most of the Northwest Territories' budget is funded through federal transfers, in 2017 the territory concluded that it had only a limited capacity to increase taxes or other own-source revenues to ensure operating surpluses. The Yukon government faced similar pressures; in 2016, the surplus projected by the former government had not materialized, and the then new government had been forced to make a special borrowing to meet its financial needs. In recognition of its precarious revenues owing to a dependence on resources, Yukon forecasted its first deficit in 2018-19 if no action was taken, and enlisted advice from its populace concerning expenditure pressures. Saskatchewan rolled back planned reductions for mid-2019 in the general and the manufacturing and processing (M & P) corporate tax rates, but increases in the small business limit as a rule continued. The Office of the Parliamentary Budget Officer issued its 2018 report on the sustainability of current provincial and territorial fiscal policies. 3 The report concluded that these policies

cool the housing market in Southern Ontario. 6 It will take some time to gauge whether the downward effect on real estate prices will be permanent or temporary and whether that effect was caused by government measures or by a more natural stabilizing of the market; in 2018, the market was erratic but seemed to hold steady or rise slightly.
SUMMARY INFORMATION
The provinces and territories had by and large developed their own carbon reduction plans before 2019, as required by the federal government, or had asked the federal government to impose a plan. As of the date of writing, only four provinces had done neither: Manitoba, New Brunswick, Ontario, and Saskatchewan. Ontario and Saskatchewan have each filed separate constitutional challenges to the federal imposition of a carbon tax. The outcome of this dispute is unknown. Table 1 aggregates the projected budget revenue and expenditure items in each province and territory for the 2018-19 fiscal year. The figures reflect the budget summaries presented in the second part of this article. The different jurisdictions' budget projections are not strictly comparable, owing in part to accounting differences across the provinces and territories. 7 However, the placement of the various jurisdictions' figures in a single table illustrates trends and distinctions that are intended to stimulate discussion. The provinces and territories are listed in descending order based on each jurisdiction's original budget projection of its expected tax revenue. Figure 1 presents similar information and includes surpluses and deficits at the right of the figure. Each projected revenue source amount is shown as a percentage of total revenues, and the projected surplus or deficit is shown as a percentage of total expenditures. Figure 2 shows projected tax revenues by source as a percentage of total revenues. Figure 3 shows projected expenditures by spending category as a percentage of total expenditures, and health-care expenditures per capita.
The provinces and territories have the primary responsibility for education, health, and social services expenditures. Across all jurisdictions, health-care expenditures averaged about 40 percent of total expenditures, as shown in table 2. For example, for the 2018-19 fiscal year (updated), Ontario projected health-care and long-term-care expenditures of $61,678 million or 38.13 percent of total expenditures ($161,775 million, as shown in table 1).
Notes to table 1: a Other sources of revenue included resource royalties; premiums, fees, and licences; commercial Crown corporation transfers; and investment income. b Adjustments included consolidation numbers (in some cases) and transfers to and from reserve funds. c Ontario numbers are from the Fall Economic Outlook and Fiscal Review released by the newly elected government on November 15, 2018. d Newfoundland and Labrador's tax revenue included mining tax revenue and royalties of $80 million and offshore royalties of $974 million. Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31. Differences are due to rounding.
FIGURE 1 Projected Provincial and Territorial Revenues by Source, as a Percentage of Total Revenues, and Projected Surplus/Deficit as a Percentage of Projected Expenditures, Fiscal Year 2018-19
Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31.
FIGURE 3 Projected Provincial and Territorial Expenditures by Spending Category as a Percentage of Total Expenditures, Fiscal Year 2018-19
Source: Based on provincial and territorial budget documents cited in the source notes for tables 4, 12, 14, 16-17, 19, 21, 24-27, and 29-31.
c The figure for Saskatchewan reflected a change in accounting: the 2014-15 budget was the province's first budget prepared on a summary basis and included government core operations, other government service organizations, and government business enterprises.
d The figure shown in the Newfoundland and Labrador estimates included an amount for debt servicing. Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31. See those tables for further details.
32.21 percent for Yukon. However, on a per capita basis, the results of the territories vis-à-vis Ontario appeared to reverse. These trends are reflected in table 3, which sets out the health-care expenditures (as projected in the 2013-14 to 2018-19 budgets) as a percentage of total expenditures and per capita in Ontario and the territories. Table 4 sets out the health-care expenditure projections for all the provinces and territories for 2018-19 as a percentage of total expenditures and per capita. Table 5 shows the provincial and territorial surpluses and deficits since the (revised) budget projections for 2014-15 8 and also shows figures set out in 2018-19 budgets for planned or targeted surpluses or deficits for up to the ensuing five fiscal years. Most jurisdictions that projected beyond the 2018-19 fiscal year planned for a surplus within the following two to four years. Ontario forecasted a flat budget in 2017-18, which materialized, but in 2018-19 planned for a large deficit: the incoming government projected an even larger deficit for 2018-19 and, as noted above, issued an updated economic statement in November 2018. 9 Alberta forecasted a

On the basis of budget projections in the tables set out in the second part of this article, projected aggregate income tax revenue in the 2018-19 budgets of all provinces and territories was $99.2 billion from personal income tax and $32.8 billion from corporate income tax, for total revenue of $132.0 billion from income tax. Projected aggregate sales tax revenue was $60.6 billion, for a total of $93.4 billion from sources other than personal income tax (that is, corporate income tax and sales tax). Thus, as has been the case since 2014, in 2018-19 the provinces and territories expected to collect slightly more tax revenue from personal income tax than from corporate income tax and sales tax combined. In comparison, the 2018-19 federal budget projected $161.4 billion of revenue from personal income tax, $47.3 billion from corporate income tax, and $8.3 billion from non-resident income tax, for a total of $217.0 billion from income tax, plus $37.7 billion from sales tax, 10 for a total of $93.3 billion from sources other than personal income tax (corporate income tax, non-resident income tax, and sales tax). Thus, as was the case in 2013-14, the federal government projected that in fiscal 2018-19 it will raise almost twice as much revenue from personal income tax as from corporate income tax, non-resident income tax, and goods and services tax (GST) combined, although the personal income tax is a declining number as a share of other revenue sources. See table 6 for a tabular and detailed presentation. Table 6A shows the projected tax revenues for each province and territory as detailed in its 2018-19 budget, including total and per capita amounts. Table 7 shows the corporate income tax rates in the provinces and territories for 2018.

Notes to table 5 (fragment): $8,577 in 2018-19; $47,311 in 2019-20; and $105,114 in 2020-21. The operating $54 million deficit in the budget includes revolving funds and considers accounting adjustments relating to capital. The fiscal deficit is $28 million in 2018-19. Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31.
From a personal income tax perspective, in recent years three provinces added higher tax brackets for high income earners (British Columbia, for 2014, 2015, and 2018; Ontario, from 2012; and Quebec, from 2013), and Nova Scotia (from 2010) continued with its high rate for taxpayers in the top bracket. Alberta, New Brunswick, Newfoundland and Labrador, and Yukon ushered in new personal income tax rates for high income earners in their 2015-16 budgets. Newfoundland and Labrador increased the rates on its tax brackets for 2016 and, for 2017, in most cases increased those rates further. Newfoundland and Labrador also imposed a temporary deficit reduction levy that increased with higher tax brackets until the end of calendar 2019; in 2018, that levy applied only to taxable income over $50,000. In the 2017-18 budget, New Brunswick lowered its top marginal personal income tax rate from 21.0 percent to 20.3 percent for taxable income exceeding $150,000, retroactive to the beginning of 2016; beginning in 2017, the province's tax brackets were indexed for inflation. The newly elected government in Ontario vetoed a March 2018 budget proposal to increase tax rates, add a new tax bracket, and eliminate the province's two surtaxes.
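All of the rate structures surveyed here are marginal: each slice of taxable income is taxed at its bracket rate, and surtaxes, where they exist, are layered on top. The sketch below shows the basic calculation using Saskatchewan's frozen 2018 rates quoted in the next paragraph; the bracket thresholds of $45,225 and $129,214 are our assumption for illustration, since this survey does not restate them:

# Saskatchewan 2018 rates (10.5/12.5/14.5 percent, quoted below); the
# thresholds are assumed for illustration and are not stated in this survey.
BRACKETS = [(45_225, 0.105), (129_214, 0.125), (float("inf"), 0.145)]

def provincial_tax(taxable_income):
    # Tax before credits: each slice of income is taxed at its bracket rate.
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

print(round(provincial_tax(100_000), 2))  # 11595.5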
Only a minority of jurisdictions do not specifically impose a higher tax rate on high income earners. In 2017, Alberta increased its credit amounts and bracket thresholds, and those amounts were indexed in 2018. Saskatchewan proposed to reduce its personal income tax rates by 0.5 of a percentage point on each of July 1, 2017 and July 1, 2019, but those rates were frozen at the 2018 level: 10.5, 12.5, and 14.5 percent. In 2017, Manitoba introduced indexation of its personal tax brackets and basic personal amount; in 2018, the bracket thresholds were $31,843 and $68,821, and the basic personal amount was $9,382. In Quebec, the government's November 21, 2017 Economic Plan reduced, retroactively for all of 2017, the lowest tax rate from 16 percent to 15 percent; 11 Quebec also increased its basic personal amount. Nova Scotia increased certain personal income tax credits for a taxpayer earning less than $75,000. Prince Edward Island increased certain personal tax credits; however, it continued to impose a surtax on high income earners. Nunavut and the Northwest Territories had higher brackets, perhaps reflecting the higher cost of living in those territories. Saskatchewan already had a tax bracket that could be said to impose a higher rate on high income earners, as does British Columbia, and several provinces imposed a high rate at a low level of taxable income. (The BC Budget 2017 Update imposed an even higher rate on taxable income over $150,000 starting in 2018.) 12 A higher rate planned for high income earners did not materialize in Manitoba when the former government was not re-elected in 2016. Surtaxes are sometimes applied in addition to regular provincial or territorial personal income tax. All federal, provincial, and territorial marginal personal income tax rates on ordinary income and interest, as well as surtaxes, are shown in graphic form in figure 4 as a function of taxable income. Table 8 sets out the provincial and territorial personal income tax brackets and rates for 2018. Table 9 shows the sales tax rates in each jurisdiction for 2018. British Columbia, Saskatchewan, and Manitoba imposed a separate provincial sales tax (PST). Ontario and the Atlantic provinces (Newfoundland and Labrador, Nova Scotia, New Brunswick, and Prince Edward Island) are harmonized sales tax (HST) participating provinces that harmonize sales taxes with the federal GST. Quebec has its own Quebec sales tax (QST), which applies in a manner similar to the GST. Alberta and the three territories do not impose sales taxes. In 2016, each of New Brunswick, Prince Edward Island, and Newfoundland and Labrador increased the provincial portion of its HST so that the combined HST rate in each province was 15 percent.

Notes to table 7: a The threshold is reduced straight-line if the Canadian-controlled private corporation (CCPC) and associated corporations had taxable capital between $10 million and $15 million in the preceding year. Ontario adopted the clawback effective May 1, 2014. b British Columbia's general rate increased from 11 percent to 12 percent in 2018. c Saskatchewan restored its general rate to 12 percent and its M & P rate to 10 percent as of July 1, 2017, and the small business threshold was raised to $600,000 after 2017; the combined federal and Saskatchewan rate applicable to income between $500,000 and $600,000 is 17 percent for the taxation years ending December 31, 2018 and 2019.
d In Ontario, the M & P rate applies to income from manufacturing, processing, farming, mining, logging, and fishing operations carried on in Canada and allocated to the province. e Effective January 1, 2018, as announced in the 2017 budget; enacted December 14, 2017. f New Brunswick's small business rate applies to a small business whose taxable capital does not exceed $15 million. Effective April 1, 2017, the small business rate was lowered from 3.5 percent to 3.0 percent. The government committed to lowering that rate to 2.5 percent over the course of its mandate; the rate dropped to 2.5 percent effective April 1, 2018. g In Yukon, the 1.5 percent rate applies to M & P income of a CCPC up to the small business limit.
Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31.

Notes to table 8: a Surtax calculations assume that the only credit claimed reflects applicable basic personal amounts. b For Quebec, federal income tax has been reduced by the 16.5% provincial abatement. Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31.

Table 10 summarizes the various dates for the 2018-19 budgets in each province and territory, the name and title of the person who announced the budget, and the announced estimated surplus or deficit. Table 11 sets out the research and development (R & D) tax credits in each province and territory, as updated for the 2018-19 budgets. The table details rates and whether the credit is refundable and otherwise eligible for a carryforward period. In some cases, the credit is also available to an individual. An article in this feature in 2017 focused on the policy behind these subsidies from an economics viewpoint. 13 The second part of this article shows, for each province and territory, selected fiscal figures, highlights of tax changes, and a narrative summary of tax changes with accompanying tables.
PROVINCIAL AND TERRITORIAL BUDGETS BY JURISDICTION
Notes to table 9: GST = goods and services tax; HST = harmonized sales tax; PST = provincial sales tax; QST = Quebec sales tax. a The rates shown do not yield comparable tax burdens for all jurisdictions. For example, GST and HST allow input tax credits for underlying taxes, eliminate sales tax on exports, and also cover a wider range of goods and services than PST. b Saskatchewan increased its PST rate from 5 percent to 6 percent and also eliminated some exemptions in 2017. c Newfoundland and Labrador reinstated its point-of-sale rebate of PST for printed books for 2018 and subsequent years.
Source: Based on provincial and territorial budget documents cited in the source notes for tables 12, 14, 16-17, 19, 21, 24-27, and 29-31.

The figures for any particular jurisdiction are difficult to compare across jurisdictions. Where relevant, and where the information is accessible, notes that refer to differences in accounting and/or presentation are appended to the tables; it is beyond the scope of this article to analyze differing accounting practices of each jurisdiction and the differences in those practices between jurisdictions. Notes to each table also refer to the jurisdiction's significant resource revenue, if any. The "tax highlights" section at the beginning of each section contains some of the more important tax changes and, where possible, lists them in order of precedence. The narrative summaries of tax changes are categorized under the following eight headings:
1. Corporate income tax: rates, credits, deductions, inclusions, reporting, business income matters, and other items.
2. Personal income tax: rates, credits, deductions, inclusions, and other items.
This category may include the taxation of unincorporated businesses.

These categories have been selected for organizational purposes and for ease of reference only. Some categories may overlap (for example, categories 1, 2, and 5).

Notes to table 11: a Provincial and territorial tax credits are government assistance for federal tax purposes and thus reduce expenditures eligible for the federal R & D deduction and federal tax credit. b Alberta's R & D credit is 10 percent of the lesser of (1) eligible Alberta R & D expenditures and (2) the maximum expenditure level of $4 million (to a maximum annual credit of $400,000). c When R & D is carried on by a partnership, the R & D credit can generally be claimed by corporate partners except in Newfoundland and Labrador, Quebec, and Yukon, where an individual partner can also claim the credit. However, the credit cannot ever be claimed from a partnership that carries on its R & D in other provinces, such as Alberta and Ontario (except for certain programs). d British Columbia's refundable R & D tax credit is 10 percent of the lesser of (1) eligible BC R & D expenditures and (2) the federal R & D expenditure limit (to a maximum annual credit of $300,000). e Manitoba's credit is (1) fully refundable for eligible R & D expenditures incurred in Manitoba by a corporation that has a Manitoba permanent establishment and a contract with a qualifying research institute, and (2) 50 percent refundable for in-house R & D expenditures. f The Ontario innovation tax credit is available on up to $3 million of expenditures for a corporation that has taxable income under $500,000 and taxable capital under $25 million (to a maximum annual credit of $240,000). A corporation is eligible for a partial credit if its taxable income is over $500,000 but less than $800,000 or its taxable capital is between $25 million and $50 million. All current expenditures are eligible. Taxable income and taxable capital thresholds are set in the previous year on a worldwide associated basis. g The Ontario business research institute tax credit applies to 20 percent of qualifying payments (up to $20 million annually on an associated basis) to an Ontario eligible research institute (to a maximum annual credit of $4 million). h For all Quebec R & D tax credits, the following rates and conditions apply: 1. Quebec Canadian-controlled corporations that have fewer than $50 million in assets can claim the 30 percent rate on up to $3 million of R & D wages and/or eligible R & D expenditures for each credit; if assets held are between $50 million and $75 million, the rate is gradually reduced to 14 percent, which is the rate for all other taxpayers. The rates are higher in certain cases. Asset thresholds are set in the previous year on a worldwide associated basis (consolidated).
2. The tax credit rate is 14 percent for Quebec corporations controlled by non-residents. Asset thresholds do not apply.
3. An exclusion threshold is allocated among the Quebec R & D tax credits proportionally
to the amount of eligible expenditures of each R & D tax credit. For each R & D tax credit, the eligible R & D expenditures are reduced by the allocated exclusion, which varies depending on the company's assets: the exclusion is a. $50,000 for a corporation whose assets are $50 million or less, b. an amount that increases linearly between $50,000 and $225,000 for a corporation whose assets are between $50 million and $75 million, and c. $225,000 for a corporation whose assets are $75 million or more. Asset thresholds are set in the previous year and are not on an associated basis. i A payment may be eligible for the Quebec R & D wage tax credit if the payment was made to (1) an arm's-length subcontractor (up to 50 percent of the payment) or (2) a non-arm's-length subcontractor (100 percent for wages paid and 50 percent of a payment to an arm's-length subcontractor if the payment was made under the non-arm's-length contract). j Quebec's university R & D tax credit may be available on 80 percent of a payment to an eligible entity such as a university, a public research centre, or a research consortium. k For the Quebec private partnership pre-competitive tax credit, a qualified expenditure may include (1) wages paid relating to R & D, (2) 80 percent of a payment to an arm's-length subcontractor (generally excluding a university, a public research centre, and a research consortium contract), (3) payment for some materials, or (4) payment for an overhead (or proxy) amount. l Saskatchewan's total refundable and non-refundable tax credits are capped at $1 million per taxation year. m Saskatchewan's refundable R & D tax credit is 10 percent of the lesser of (1)
Corporate Income Tax
The general corporate tax rate increased by 1 point, from 11 percent to 12 percent, in 2018 as announced in the 2017 budget.
The small business tax rate decreased from 2.5 percent to 2 percent after March 2017 as announced in the 2017 budget. The small business tax rate for 2018 was 2 percent as announced in the 2018 budget.
The farmers' food donation tax credit was extended for one year, to the end of 2019. The credit and extension apply to individuals too.
The interactive digital media tax credit was extended for corporations for five years, to August 31, 2023.
For expenditures incurred on or after February 21, 2018, the BC film incentive tax credit was expanded to include scriptwriting expenditures on BC labour incurred by a corporation before the final script stage of production was complete. Previously, only scriptwriting expenditures incurred after the final script stage were eligible for a tax credit.
The 2018 budget extended the book publishing tax credit for three more years, to March 31, 2021.
Personal Income Tax
The 2018 budget did not increase the personal income tax rates or brackets. The Budget 2017 Update raised the top marginal rate starting in 2018. 14 Starting in 2018, "the caregiver tax credit and the infirm dependant tax credit were replaced with a new [non-refundable] BC caregiver credit" that "paralleled the Canada caregiver credit announced in the 2017 federal Budget." 15 To be eligible, British Columbians must care for an eligible adult relative dependent on the caregiver because of his or her mental or physical infirmity, whether or not he or she lives with the caregiver. The maximum credit amount was $4,556 (a benefit of $230.33) and was indexed for 2019 and subsequent years. A spousal tax credit or an eligible dependant tax credit could be taken instead if available and greater; a single individual caring for an infirm adult relative could claim the greater of this credit and the eligible dependant tax credit.
The elimination of the BC education tax credit for 2019 and following tax years paralleled the elimination of the federal education tax credit. Carryforwards could be used in 2019 and following years for education amounts from pre-2019 tax years.
The mining flowthrough share tax credit was extended for one year to the end of 2018.
Medical services plan premiums will be eliminated in 2020, following a 50 percent reduction effective in 2018, according to an announcement on December 27, 2017. Individuals were to see annual savings of up to $900; families were to see annual savings of up to $1,800. (The 2017 budget did not increase premiums by 4 percent, as announced in September 2016.) Although at one point a household was required to register for the reduced premium, it seemed that for elimination, registration was not required because the benefit was not determined by income. In November 2017, a task force was established to examine the best replacement policy; a final report with recommendations was released in March 2018. Medical services premiums were to be replaced by an employer health tax (EHT) effective after 2018. The EHT applies to all employers other than small businesses with a payroll of less than $500,000. The tax is phased in gradually, with rates ranging between 0.98 percent and 1.95 percent depending on payroll, as set out in table 13. Future details were promised on instalments and the sharing of exemption limits among associated corporations.
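Because table 13 is not reproduced here, the following sketch is illustrative only: the $1,500,000 upper threshold and the 2.925 percent notch rate on payroll above the exemption are assumptions (the commonly published design of the BC EHT), chosen because they reproduce the quoted 0.98 to 1.95 percent range of effective rates:

def eht(payroll):
    # BC employer health tax (sketch; thresholds above the stated $500,000
    # exemption are assumptions, as noted in the text above).
    if payroll <= 500_000:       # small-business exemption (stated)
        return 0.0
    if payroll <= 1_500_000:     # assumed phase-in range
        return 0.02925 * (payroll - 500_000)
    return 0.0195 * payroll      # stated top rate of 1.95 percent

for p in (750_000, 1_000_000, 1_500_000, 2_000_000):
    print(f"payroll ${p:,}: EHT ${eht(p):,.0f} ({eht(p) / p:.2%} effective)")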
Sales Tax
Effective April 1, 2018, the exemption for avalanche airbag backpacks includes all avalanche backpacks, not just those triggered by compressed air.
On a date to be specified in regulations, the legislation and regulations are amended to enable online accommodation platforms such as Airbnb to register as PST collectors for the collection and remission of PST and the municipal and regional district tax on accommodation. Thus, owners and lessors-hosts of accommodation units-need not register. The platforms enable or facilitate transactions between buyers and providers of short-term accommodation in the province.
From a date specified in regulations, revenue from the municipal and regional district tax collected by municipalities, regional districts, and eligible entities (such as tourism-focused non-profits) can be used to fund affordable-housing initiatives. Currently, those funds can be used only for tourism marketing, programs, and projects.
Effective retroactive to April 1, 2013, the British Columbia Provincial Sales Tax Act clarified that PST applies to software provided in optional as-needed maintenance agreements.
Effective April 1, 2018, the luxury surtax on passenger vehicles with a purchase price of $125,000 to $149,999 increased from 5 percent to 10 percent (from 12 percent to 15 percent for private sales); for vehicles with a cost of $150,000 or more, the surtax increased from 10 percent to 20 percent (from 12 percent to 20 percent for private sales). The new rates applied to new and previously owned vehicles.
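The band structure lends itself to a simple rate lookup; the sketch below encodes only the rates stated above:

def luxury_surtax_rate(price, private_sale=False):
    # Surtax rates effective April 1, 2018, as stated above; they apply to
    # both new and previously owned passenger vehicles.
    if price >= 150_000:
        return 0.20
    if price >= 125_000:
        return 0.15 if private_sale else 0.10
    return 0.0  # no luxury surtax below $125,000

print(luxury_surtax_rate(130_000))        # 0.1 (dealer sale)
print(luxury_surtax_rate(130_000, True))  # 0.15 (private sale)
print(luxury_surtax_rate(160_000))        # 0.2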
Effective on royal assent, services (not just goods and software) may be included in a tax payment agreement between the province and an interjurisdictional railway.
Effective retroactive to April 1, 2013, a retailer on a cruise ship in British Columbia waters was not required to collect PST on sales made during the course of scheduled sailings.
Sin Taxes
Tobacco tax on a carton of 200 cigarettes increased from $49.40 to $55.00 (from 24.7 cents to 27.5 cents per cigarette) effective April 1, 2018, and from that date also increased for loose tobacco from 24.7 cents per gram to 37.5 cents per gram.
Resource-Related Matters
The Motor Fuel Tax Act refund rates for an international fuel tax agreement licensee will increase to reflect carbon tax increases on April 1 of each year from 2018 to 2021, ensuring that the licensee pays carbon tax only on fuel used within the province.
Effective April 1, 2018, marine diesel fuel used in interjurisdictional cruise ships and ships prohibited from coasting trade under the Coasting Trade Act is exempt from motor fuel tax, paralleling a carbon tax exemption for those ships.
Effective April 1, 2018, motor fuel tax rates on clear gasoline and clear diesel in the Capital Regional District increased from 3.5 cents per litre to 5.5 cents per litre. The tax increase, which is expected to raise $7 million annually, was intended to help finance the Victoria Regional Transit Commission and its share of funding for the Victoria transit system.
Effective retroactive to February 18, 2014, an exemption was provided for security for refiner collectors that acquired fuel for retail sale from other refiner collectors. A refund was available for security paid.
The Budget 2017 Update increased, effective April 1, 2018, the carbon tax rates by $5 per tonne of carbon dioxide equivalent emissions annually until the rates equal $50 per tonne on April 1, 2021. 16
Real Estate Taxes
Effective February 21, 2018, a further 2 percent tax applied to the residential portion (a residential taxable transaction) with a fair market value (FMV) exceeding $3 million. Previously, the tax was 3 percent of the FMV of a residential taxable transaction that exceeded $2 million. Effective February 21, 2018, the additional property transfer tax rate was increased from 15 percent to 20 percent. Newly added areas were the Capital Regional District, the Regional District of Central Okanagan, the Fraser Valley Regional District, and the Regional District of Nanaimo; transitional rules may exempt regional transactions entered into before the effective date, but there are no such rules for Metro Vancouver transactions.
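On these bands the tax is computed slice by slice. In the sketch below, the 3 percent band over $2 million and the further 2 percent over $3 million are from the text; the 1 percent and 2 percent base bands are assumptions taken from the general BC schedule, which this survey does not restate:

# Bands as (upper bound, marginal rate); the 5 percent top band is the
# 3 percent band plus the further 2 percent on residential value over $3M.
BANDS = [
    (200_000, 0.01),       # assumed base band
    (2_000_000, 0.02),     # assumed base band
    (3_000_000, 0.03),     # stated: 3 percent of FMV exceeding $2 million
    (float("inf"), 0.05),  # stated: further 2 percent above $3 million
]

def property_transfer_tax(fmv):
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if fmv <= lower:
            break
        tax += (min(fmv, upper) - lower) * rate
        lower = upper
    return tax

print(f"${property_transfer_tax(4_000_000):,.0f}")  # $118,000 on a $4 million residence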
Effective for transactions after February 20, 2018, transfers of a bankrupt's principal residence from a trustee in bankruptcy or a (former) spouse are exempt from the property transfer tax regardless of whether any consideration was exchanged.
Announced on January 2, 2018, the property value threshold for the full homeowner grant was increased from $1.6 million in 2017 to $1.65 million in the 2018 tax year. The grant was reduced by $5 for every $1,000 of assessed value that exceeded the threshold.
Effective for 2019 and subsequent years, the school tax rate will increase for high-value properties in the residential class including detached homes, stratified condominium or townhouse units, and most vacant land. The tax increase, which applies to residential assessed value exceeding $3 million, is 0.2 percent for property valued at over $3 million and up to $4 million, and 0.4 percent on the value over $4 million. The tax will be administered through the existing school tax system, with municipalities and the provincial surveyor of taxes being responsible for collection.
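The two stated bands make the additional school tax a short marginal calculation, sketched below:

def additional_school_tax(assessed_value):
    # 0.2 percent on assessed value between $3 million and $4 million,
    # plus 0.4 percent on the value above $4 million, as stated above.
    tax = 0.0
    if assessed_value > 3_000_000:
        tax += 0.002 * (min(assessed_value, 4_000_000) - 3_000_000)
    if assessed_value > 4_000_000:
        tax += 0.004 * (assessed_value - 4_000_000)
    return tax

print(additional_school_tax(5_000_000))  # 6000.0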
According to longstanding policy, non-residential school property tax rates were increased by inflation plus new construction. Rates were set when revised assessment roll data became available in the spring. However, both the major and light industry classes of school property tax rates were set at the same rate as the business class tax rate, consistent with the policy in the 2008 budget. The Hydro and Power Authority Act was clarified to limit the authority's school tax liability to land that it owned in fee simple and improvements, without affecting Nisga'a lands or taxing Treaty First Nations lands.
The average residential class school property tax increased, in accordance with longstanding policy, by the province's inflation rate in the previous year. Rates were to be set when revised assessment roll data became available in the spring.
Effective for 2019 and subsequent tax years, municipal revitalization property tax exemptions applied to eligible new purpose-built non-stratified rental housing (or substantially renovated with a minimum net gain of five units) if the municipality issued a relevant certificate after February 20, 2018. Terms of the municipal exemption reflect the provincial exemption.
A speculation tax on residential provincial property applied first in 2018 as an annual property tax (in Metro Vancouver and the Fraser Valley, Capital, and Nanaimo regional districts, and in the municipalities of Kelowna and West Kelowna) to target foreign and domestic homeowners who did not pay income tax in the province, including owners of vacant property. Most homeowners were exempt up front, including owners of long-term rental properties and certain special cases. A non-refundable income tax credit was available for those who paid income tax; the finances of the nation n 113 credit could be carried forward. The tax rate was $5 per $1,000 of assessed value in 2018 and $20 per $1,000 of assessed value in 2019. The tax was administered by the province, outside the normal property tax system and its cycle. The reporting form collects information such as the taxpayer's social insurance number (SIN), household information, worldwide income information, information relating to upfront exemptions, and other information useful for audits and enforcement. Relevant information is made available to the Canada Revenue Agency (CRA).
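The stated rates translate directly into a sketch (the non-refundable income tax credit that may offset the tax is omitted):

RATE_PER_1000 = {2018: 5, 2019: 20}  # dollars of tax per $1,000 of assessed value

def speculation_tax(assessed_value, year):
    return RATE_PER_1000[year] * assessed_value / 1_000

print(speculation_tax(1_000_000, 2018))  # 5000.0
print(speculation_tax(1_000_000, 2019))  # 20000.0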
The single rural area residential property tax rate increased for 2018 by the previous year's inflation rate, in accordance with longstanding policy. Similarly, rural area non-residential property tax rates increased by inflation plus growth from new construction. Rates were set when revised assessment data became available in the spring.
Online accommodation platforms were enabled to collect and remit PST and the municipal and regional district tax on short-term accommodation, effective on a date proclaimed by regulation. Municipalities, regional districts, and eligible entities (such as tourism-focused non-profits) that receive revenue from the municipal and regional district tax will be allowed to use that revenue to fund affordable-housing initiatives.
The province is examining the property tax treatment of residential property in the Agricultural Land Reserve as part of its broader review to ensure that such land is being used in farming.
Pensions
No changes were announced.
Other
The province introduced several changes to enhance administration and information sharing. The Property Transfer Tax Act was amended to increase the limitation period, enable additional information to be collected, introduce administrative penalties for non-compliance, extend GAAR to the entire Act, and enable tax administrators to access additional information on property transactions, including from the Multiple Listing Service (MLS) database. A new fee, effective on a date to be specified by regulation, was to be charged to recover the costs of out-of-province audits under the Carbon Tax Act, the Motor Fuel Tax Act, and the Provincial Sales Tax Act. The provincial Income Tax Act was amended, effective for a transaction entered into on or after February 20, 2018, or a series of transactions completed by that date, to introduce a reportable transaction rule paralleling the federal rules and requiring proactive disclosure by taxpayers and their advisers of certain avoidance transactions; that act was also amended to parallel federal rules relating to GAAR, effective February 21, 2018. Effective on royal assent, the Income Tax Act and the Land Tax Deferment Act were amended to allow information sharing between the two acts, and the Income Tax Act and the Logging Tax Act were also amended, effective on royal assent, to no longer require the lieutenant governor in council to pre-approve information-sharing agreements entered into under those acts. The introduction of a new data collection system to improve the collection and accuracy of oil and natural gas royalty information requires amendments to the Petroleum and Natural Gas Act to ensure the privacy of collected information and to allow proper sharing of information; the amendments are effective on royal assent and include changes regarding non-compliance and reporting errors by industry participants, giving tax authorities the power to penalize the non-payment of royalties. Work has begun to allow the collection of SINs (expected to begin in 2019) for the homeowner grant application process.
The province intends to require developers to collect and report comprehensive information relating to the assignment of pre-sale condominium purchases.
The province is taking steps to track beneficial ownership information by requiring additional information as part of the property transfer tax form and to establish a publicly accessible registry of the beneficial owners of all property in British Columbia.
Alberta (Table 14)
Tax Highlights
- No new tax increases
- Tax credits intended to diversify the economy
Corporate Income Tax
A refundable interactive digital media tax credit was introduced. The credit was 25 percent of eligible labour costs incurred after March 2018, with an additional 5 percent for a company that employed workers from underrepresented groups. Details of the diversity and inclusion enhancement were promised to be provided when regulations were introduced. The annual budget was set to reach $20 million by 2020-21.
The capital investment tax credit (CITC), announced in the 2016 budget as a part of the Alberta Jobs Plan, benefited a corporation that invested in eligible capital assets beginning in 2017 by providing a 10 percent non-refundable credit for up to two years. The credit benefited spending on property or other capital in eligible industries such as value-added agriculture, M & P, tourism infrastructure, and culture. The 2018 budget extended the credit to 2021-22. Support was $30 million annually.
In addition to the CITC and as part of the Alberta Jobs Plan, the government implemented the Alberta investor tax credit (AITC) to support jobs and economic diversification. The 2018 budget also extended the AITC until 2021-22. The AITC was a 30 percent credit for an equity investment in an eligible Alberta business that undertook research, development, or commercialization of new technology, new products, or new processes. The AITC also applied to a business engaged in interactive digital media development, video post-production, digital animation, or tourism.
An additional 5 percent credit was available for investments in eligible business corporations that met diversity and inclusion criteria; details were promised with the introduction of regulations. The AITC program had an annual budget of $30 million. The tax credit was available, via certificate, to an eligible individual or corporation approved after application. An individual must file the certificate with his or her personal income tax return and can claim a refundable AITC of up to $60,000 per annum, or up to $300,000 over five years. A corporation can claim a non-refundable AITC on its tax return without any maximum limit on the amount of the credit. Funding was provided on a first-come, first-served basis.
Personal Income Tax
See the description of the AITC above.
As promised in the 2016 budget, income tax brackets began to be indexed as of 2017. Credit amounts and bracket thresholds increased by 1.2 percent in 2018.

Notes to table 14: The figures showed only net operational revenues and expenditures, including net income of government business enterprises. Debt-servicing costs related to general debt only. "Other revenues" included non-renewable resource revenue of $3,829 million, but were still significantly lower than in 2014-15. The budget was presented on a fully consolidated basis, which includes school boards, universities and colleges, health entities, and the Alberta Innovates corporations. The risk adjustment in the fiscal plan was included to recognize the potential impact of world oil markets on the province's resource revenue.
Sales Tax
No changes were announced. Alberta does not impose a sales tax.
Sin Taxes
Alberta would collect the revenue from the sale of cannabis upon legalization in 2018. In December 2017, the provincial and federal governments agreed to principles to govern tax collection and sharing in the first two years in order to keep prices low and curtail the illegal market. The governments agreed to share tax revenues equal to the greater of $1 per gram and 10 percent of the product price; the provinces would receive 75 percent of the tax room and could also collect additional tax of up to 10 percent of the retail price. The federal government would use the federal excise tax to collect the tax on Alberta's behalf and distribute the revenue to the province. The Alberta Gaming and Liquor Commission (AGLC) collected a markup on wholesale distribution and retail sales of cannabis through the public online system. Markups were limited to costs of the AGLC's cannabis operations and a reasonable profit margin.
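A sketch of the agreed sharing formula; the 10 percent ad valorem base is taken to be the product price as stated, and the optional additional provincial tax of up to 10 percent of retail price is omitted here.

```python
def cannabis_duty_split(grams, product_price):
    """Agreed framework: duty is the greater of $1 per gram and 10% of the
    product price, with 75% of the room going to the province."""
    duty = max(1.00 * grams, 0.10 * product_price)
    provincial, federal = 0.75 * duty, 0.25 * duty
    return duty, provincial, federal

print(cannabis_duty_split(10, 80.0))  # (10.0, 7.5, 2.5): flat rate binds here
```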
Resource-Related Matters
A carbon fee (carbon price) was imposed for large emitters effective after 2016. Table 15 shows the rates on major fuels for 2017 and 2018; a full list was contained in the 2016 budget. Some exemptions apply.
Real Estate Taxes
The total education property tax requisition was frozen for 2018-19. However, the farmland rate increased from $2.48 to $2.56 per $1,000 of equalized assessment and the non-residential rate increased from $3.64 to $3.76 owing to lower assessed values in 2016. The provincial government's share of total provincial-municipal property tax revenue decreased from 51 percent in 1994 (when the province assumed responsibility for this tax) to 25 percent in 2016.
Pensions
No changes were announced.
Other
The Alberta government sought public input into the preparation of its 2018 budget; input was received until early February 2018.
Saskatchewan
Corporate Income Tax
The 2017 budget had announced that the general corporate income tax rate would be reduced by 0.5 of a percentage point on July 1, 2017 and again by that amount on July 1, 2019, for a total reduction from 12 percent to 11 percent, prorated for straddle corporate taxation years; the M & P income tax rate was correspondingly to fall from 10 percent to 9 percent. The 2018 budget restored the general corporate rate to 12 percent after 2017 and restored the province's M & P rate to 10 percent after 2017. There was no change to the small business threshold, which rose from $500,000 to $600,000 after 2017 in accordance with the 2017 budget. The 2 percent rate for Canadian-controlled private corporations (CCPCs) continued to apply to active business income within that threshold.
To encourage business investment, the 2018 budget introduced a new Saskatchewan value-added agriculture incentive (SVAI) to provide a non-refundable 15 percent corporate income tax credit for qualifying new capital expenditures. Eligible activities were defined as the physical transformation or upgrading of any raw/primary agricultural product or any agricultural by-product or waste into a new or upgraded product. Examples of such activities included pea protein processing, canola seed crushing, oat milling, malt production, and cannabis oil processing; cleaning, bagging, handling, and/or storing of primary products did not qualify. Qualifying projects included new or existing value-added agriculture facilities making capital expenditures of at least $10 million related to new or expanded productive capacity. Applicants were encouraged to contact the Saskatchewan Ministry of Trade and Export Development for further information or for conditional approval.

Notes to the budget table: Saskatchewan's summary budget presentation includes government core operations, government service organizations (such as ministries, boards of education, and health regions), and government business enterprises (such as Crown corporations). "Other revenues" included non-renewable resource revenue of $1,482 million for fiscal year 2018-19. Debt servicing is for general debt. The debt servicing from government business enterprises has been netted against the net income from government business enterprises, which is included in the revenue figure above.
The budget also introduced the Saskatchewan technology startup incentive (STSI), a tax credit for investments in eligible small businesses, that is, early-stage technology startups that (1) develop new technologies in a new way, to create proprietary new products, services, or processes that are repeatable and scalable; (2) have fewer than 50 employees; (3) are incorporated and headquartered in Saskatchewan; and (4) have a majority of staff and operations located in Saskatchewan. Investors may be accredited (including local investment fund managers and financial institutions) or non-accredited (investing within the limits of provincial securities legislation). Venture capital corporations may also raise capital and invest under the STSI program's terms. On making an investment, an eligibility certificate from Innovation Saskatchewan may be used to claim a credit of up to $140,000 per annum or $225,000 in total. Credits may be carried forward for four years after making an investment. During the minimum hold period of two years, the investee cannot be acquired, go public, or leave the province. At the end of two and a half years, the program will be evaluated. For further information, interested parties are encouraged to contact Innovation Saskatchewan at 306-933-7389.
Personal Income Tax
The 2017 budget reduced the province's three personal income tax rates, one corresponding to each marginal tax bracket: each rate (11, 13, and 15 percent) was to be reduced by 0.5 of a percentage point on July 1, 2017 and again on July 1, 2019, for rates on the latter date of 10, 12, and 14 percent, respectively. The 2018 budget froze the rates at 2018 levels (10.5, 12.5, and 14.5 percent) for 2019 and 2020. The dividend tax credit rate for non-eligible dividends was adjusted: from 3.367 percent in 2017 to 3.333 percent in 2018 and 3.362 percent after 2018. The adjustment accounts for an automatic increase in provincial income taxes resulting from a federal change beginning in 2018.
Saskatchewan did not mirror a federal change to consolidate its caregiver-related income tax credits into a single caregiver tax credit. Saskatchewan maintained the existing provincial infirm dependant tax credit and caregiver tax credits for a total maximum credit amount of $9,464, as compared with the federal maximum credit of $6,883.
The STSI, discussed above, was also available to an individual.
Sales Tax
On February 26, 2018, Saskatchewan announced that the following insurance premiums were immediately exempt from the 6 percent PST, retroactive to August 1, 2017 (when insurance premiums became taxable in the province): individual and group life insurance; individual and group health, disability, accident, and sickness insurance; and agriculture insurance, including crop and livestock insurance, hail insurance, and margin/income insurance. Following substantial consultations with Saskatchewan's vehicle dealers, the PST exemption for used light trucks was eliminated effective April 11, 2018. PST continued to apply to all other used vehicles. The 2018 budget announced, effective April 1, 2018, the restoration of the trade-in allowance for PST: the value of a trade-in was now PST-exempt on the purchase of a vehicle.
The budget also announced that in lieu of a $3,000 deduction, a purchaser of a used vehicle acquired through a private sale and registered for personal or farm use (non-commercial) was eligible for a $5,000 deduction for PST purposes.
The budget also announced that PST did not apply to used vehicles that were gifted by a qualifying family member, such as a spouse, parent, legal guardian, child, grandparent, grandchild, or sibling. Rules in place prevented unfair avoidance of tax.
An exemption in place since 2003 for ENERGY STAR ® certified appliances was intended to encourage consumers to purchase such items. The PST exemption was felt to be no longer needed (those appliances were now standard and represented a majority of sales), and the budget eliminated it effective April 11, 2018.
PST applied to all sales of cannabis in the province.
In 2018, the government changed its PST legislation so that it could collect the tax from an out-of-province vendor (including streaming services such as Netflix) if the supply was used or consumed in the province.
Sin Taxes
Saskatchewan announced that it intended to enter into a two-year agreement with the federal government under which 75 percent of the federal excise duty collected on cannabis sales would be paid to the province. Saskatchewan would also receive its share of any revenue above the $100 million cap on the federal portion. Uncertainty concerning the date of legalization, the size of the market, and the anticipated retail price meant that no revenue from such sales was included in the 2018 budget.
Resource-Related Matters
No changes were announced. The budget did not introduce a carbon tax.
Real Estate Taxes
No changes were announced.
Pensions
No changes were announced.
Other
There was no discussion in the budget of a Saskatchewan carbon-pricing regime; Saskatchewan said that it would launch a judicial challenge to the federal government's proposed nationwide tax.
Manitoba

Corporate Income Tax
The small business deduction threshold was increased from $450,000 to $500,000 after 2018. The small business provincial rate was 0 percent for active business income up to the threshold.
The book publishing tax credit-for an individual or a corporation-was extended by the 2017 budget for one year, to December 31, 2018. The refundable credit, which was intended to support the development of the province's book publishing industry, was equal to 40 percent of eligible Manitoba labour costs. The 2018 budget again extended the credit for one additional year, to December 31, 2019.
The cultural industries printing tax credit-available to an individual or a corporation-was also extended for one year, to December 31, 2019. The credit was intended to assist in the development of the province's printing industry.
A refundable corporate tax credit for child-care centre development was introduced to stimulate the creation of licensed child-care centres in workplaces. The new credit was available after budget day and before 2021 to a taxable private corporation that created a new child-care centre, for a total credit benefit of $10,000 per infant or preschool space created, up to a maximum of 200 spaces. The corporation must not be primarily engaged in child-care services. The credit could be claimed over five years.
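A sketch of the credit computation; the even five-year split shown is an assumption (the text says only that the credit could be claimed over five years).

```python
def child_care_centre_credit(new_spaces):
    """Manitoba child-care centre development credit: $10,000 per new
    infant or preschool space, capped at 200 spaces; an even five-year
    claim pattern is assumed here."""
    total = 10_000 * min(new_spaces, 200)
    return total, total / 5

print(child_care_centre_credit(12))  # (120000, 24000.0)
```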
For the small business venture capital tax credit-available to an individual or a corporation-the $15 million revenue cap on an eligible corporation's size was eliminated, and the investment minimum was lowered from $20,000 to $10,000; both changes were effective March 12, 2018. These changes made the credit accessible to larger corporations and also allowed smaller investments by shareholders.
The rental housing construction tax credit would be eliminated after 2018, but this would not affect projects under provincial review or those that were already provincially approved provided that the project was available for use before 2021.
Personal Income Tax
Pursuant to the 2016 budget, the personal income tax brackets and basic personal amount were indexed to inflation starting in 2017; the indexing factor of 1.5 percent was set in November 2016. In 2018, the personal income tax brackets were $0 to $31,843, over $31,843 to $68,821, and over $68,821; indexation was expected to continue in subsequent years. The basic personal amount increased with inflation from $9,271 in 2017 to $9,382 in 2018, and the 2018 budget announced a large increase for the 2019 and 2020 taxation years, to $10,392 and $11,402, respectively.
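A quick arithmetic check: applying a 1.2 percent factor to the 2017 basic personal amount reproduces the stated 2018 figure (the 1.2 percent factor for 2018 is inferred from these two numbers, not stated directly in the text).

```python
bpa_2017 = 9_271
print(round(bpa_2017 * 1.012))  # 9382, matching the stated 2018 amount
```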
The 2018 budget announced that the labour-sponsored fund tax credit-virtually unused by Manitobans since its introduction in 1991-92-would be eliminated for shares acquired after 2018. This budget measure was not implemented, and approved shares acquired after 2018 continue to be eligible for the credit.
A claim for the primary caregiver tax credit was simplified. Preapproval by the regional health authorities or by Manitoba Families was eliminated. The credit eligibility process was also changed: in lieu of an application, a caregiver must submit a copy of a completed registration form to Manitoba Finance and claim the credit on his or her income tax return. A caregiver who applied to Manitoba Health or Manitoba Families between January 1 and March 12, 2018 had his or her form forwarded to Manitoba Finance for registration. The process was also simplified by introducing a flat annual $1,400 credit for any caregiver, eliminating the calculation of the credit based on the number of days that care was provided; eligibility remained subject to an existing 90-day threshold of care before a credit could be claimed.

The education property tax credit was amended effective after 2018; from 2019, the credit would be based on school taxes and the $250 deductible would be eliminated. The seniors' education property tax credit would also be calculated on the school tax portion. With this change, all property tax credits would be based on school taxes effective beginning in 2019.
See the discussion of corporate income tax changes above, some of which apply to an individual.
Sales Taxes
Effective after April 2018, two new sales tax exemptions were introduced, for drill bits designed specifically for oil or gas exploration or development, and for fertilizer bins used in a farming operation.
The 2018 budget confirmed that the PST rate would drop to 7 percent by 2020.
Sin Taxes
Effective at midnight on March 12, 2018, the tobacco tax rate for fine-cut tobacco increased from 28.5 cents per gram to 45 cents per gram. The rate on cigarettes, cigars, and raw-leaf and other tobacco products remained unchanged.
Resource-Related Matters
Effective September 1, 2018, Manitoba's carbon tax imposed a charge of $25 per tonne of greenhouse gas emissions on gas, liquid, and solid fuel products intended for combustion in Manitoba. Existing international fuel tax agreement rules for commercial transportation and trucking, which prorate fuel-use charges to a jurisdiction, also applied to the carbon tax in Manitoba. Certain fuel uses were not subject to the carbon tax; exemptions were provided to protect sectors and industries that are trade-exposed to jurisdictions without a comparable carbon price, to protect the agricultural sector, and to ensure that the tax applied only to emissions in the province. The main exemptions were for agricultural process emissions, marked fuels, and output-based pricing system entities that emitted at least 50,000 tonnes of CO2-equivalent per year. (A smaller entity was exempt if government-approved.) The carbon tax was collected and remitted as follows: on transportation fuels, through the existing fuel tax system; on natural gas, by Manitoba Hydro; and on other products, by the purchaser. The carbon tax was to be returned to Manitobans in the form of tax cuts over the next four years. The carbon tax rates applied to major fuels are shown in table 18.
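For a rough sense of scale, the per-tonne price can be converted into per-litre rates. The emission factors below are ballpark assumptions (kg CO2e per litre), not figures from the text; table 18 contains the actual legislated rates.

```python
# Rough conversion of the $25/tonne carbon price into per-litre rates.
PRICE_PER_TONNE = 25.0
assumed_factors = {"gasoline": 2.3, "diesel": 2.7}  # assumed kg CO2e per litre

for fuel, kg_per_litre in assumed_factors.items():
    cents_per_litre = PRICE_PER_TONNE / 1_000 * kg_per_litre * 100
    print(f"{fuel}: ~{cents_per_litre:.2f} cents per litre")
# gasoline: ~5.75, diesel: ~6.75 cents per litre under these assumptions
```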
Real Estate Taxes
The government promised that the education property tax would be calculated differently, and the administration of the tax would be streamlined to benefit low-income renters and municipalities.
Pensions
No changes were announced.
Other
Effective after 2018, the profits tax of 1 percent applicable to credit unions and caisses populaires with taxable income of over $400,000 was eliminated. The special tax deduction for credit unions and caisses populaires, which allowed for lower tax on part of their income, was phased out over five years starting after 2018, in line with similar measures adopted by some provinces and the federal government. These entities have continued access to the small business deduction.
Administrative and technical updates were made. Tobacco tax enforcement measures and the administration of the insurance corporations tax were streamlined.
The following changes were made to the provincial Income Tax Act: streamlining of the application for the education property tax credit on the property tax statement in order to ensure self-assessment (depending on the timing of notification by the relevant municipality); removal of ambiguity regarding access to the community enterprise development tax credit via regulations; updating of the R & D tax credit to ensure consistency with federal income tax changes; retroactive amendment of the small business deduction for credit unions to ensure that the administration of the deduction reflects provincial policy and legislation; amendment of right-of-recovery provisions to reflect the federal administration of the deduction of Manitoba tax credits from a taxpayer who owes tax in another province; and amendment of green energy equipment tax credit regulations to allow related retroactive regulations by the minister of finance.
The province promised amendments that would allow chiropractors to provide professional services through professional corporations.
An insurance business must now file and pay its 2018 insurance corporations tax electronically, using the province's online tax system, TAXcess; a 2018 return is due on March 20, 2019. There was no rate change, but the previous requirement to pay quarterly instalments was eliminated.

Ontario

Corporate Income Tax

The Ontario innovation tax credit (OITC) was increased by the previous government from 8 percent to 12 percent for expenditures incurred after March 27, 2018 (prorated for straddle years) to encourage small and medium-sized businesses to engage in R & D. If a qualifying corporation had a ratio of R & D expenditures to gross revenues that was (1) at most 10 percent, the corporation was eligible for the regular 8 percent OITC; (2) between 10 percent and 20 percent, the enhanced rate (12 percent) applied on a straight-line basis; and (3) at least 20 percent, the enhanced 12 percent rate applied. Both gross revenues and R & D expenditures were those attributable to Ontario operations and were aggregated for associated corporations. The newly elected government said that it would not proceed with this initiative, but would "ensure that support provided for research and development is effective and efficient." 17

The 2018 budget extended eligibility (via regulatory amendment) for the Ontario interactive digital media tax credit to film and television websites purchased or licensed by a broadcaster and embedded in its website, applicable to websites that hosted content related to film, television, or Internet productions that did not receive either a certificate of eligibility or a letter of ineligibility before November 1, 2017. (The newly elected government did not specifically mention this initiative.) The previous government became aware that business models in the film and television industries often required that a website purchased and licensed by a broadcaster be integrated within the broadcaster's website for a seamless user experience.
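As a minimal sketch of the OITC rate schedule described above, assuming straight-line interpolation between the stated endpoints:

```python
def oitc_rate(rd_expenditures, gross_revenues):
    """OITC rate after March 27, 2018: 8% when the R&D-to-gross-revenue
    ratio is at most 10%, 12% at 20% or more, straight-line in between."""
    ratio = rd_expenditures / gross_revenues
    if ratio <= 0.10:
        return 0.08
    if ratio >= 0.20:
        return 0.12
    return 0.08 + 0.04 * (ratio - 0.10) / 0.10

print(round(oitc_rate(150_000, 1_000_000), 4))  # 0.1, i.e., 10% at a 15% ratio
```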
In the 2017 fall economic outlook, the previous Ontario government proposed to lower the Ontario corporate income tax rate on small businesses from 4.5 percent to 3.5 percent. This item was not specifically mentioned in the 2018 fall economic outlook.
The 2017 fall economic outlook also proposed that the M & P tax credit would reduce the corporate income tax rate to 10 percent.
The previous government proposed to parallel the federal 2018 budget limit on the small business deduction for passive investment income between $50,000 and $150,000 earned in the taxation year: effective for taxation years beginning after 2018, the federal small business limit is phased out on a straight-line basis for a CCPC and associated corporations that earn passive investment income within the specified range. (This limit was in addition to the province's phaseout of its small business deduction if the CCPC or associated corporations had between $10 million and $15 million in taxable capital employed in Canada; the effective limit was the lesser of the limit based on taxable capital and the business limit based on passive investment income.) The phasing out of the small business limit was another item that the newly elected government said it would not proceed with; proposed legislation ensured that Ontario did not parallel the federal change.
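A sketch of the paralleled federal phaseout; the $500,000 small business limit itself is an assumption here (it is not stated in this passage).

```python
def small_business_limit(passive_income, base_limit=500_000):
    """Straight-line phaseout of the small business limit: reduced by $5
    for every $1 of passive investment income over $50,000, reaching
    zero at $150,000. The $500,000 base limit is an assumption."""
    reduction = 5 * max(0, passive_income - 50_000)
    return max(0, base_limit - reduction)

print(small_business_limit(100_000))  # 250,000
```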
The 2018 budget indicated that the province was reviewing different countries' initiatives-patent boxes, tax refunds, deductions, and exemptions-in order to keep ownership in the province of R & D performed in Ontario. The previous government promised to develop an incentive that works best for Ontario. The newly elected government made no specific mention of these initiatives in its 2018 fall economic outlook.
The 2018 Ontario budget promised to parallel 2018 federal budget measures that address sophisticated financial instruments and structured share repurchase transactions of some Canadian financial institutions that realized artificial tax losses. 18 Ontario's newly elected government did not specifically mention these initiatives in the 2018 fall economic outlook.
The EHT (employer health tax) exemption will increase in 2019 from $450,000 to $490,000 on the basis of Ontario's consumer price index. The exemption increase will reduce the EHT, on average, by about $690 in 2019 for some 58,000 employers. The 2018 budget proposals to target the EHT exemption and to incorporate the federal anti-avoidance rules (which would have slightly increased the EHT for 20,000 Ontarians) will not be proceeded with, according to the 2018 fall economic outlook.
The newly elected government promised in the 2018 fall economic outlook to follow any federal initiative to expense new depreciable assets, in response to the current US tax reform. 19
Personal Income Tax
The 2018 budget proposed, for the 2018 taxation year, to eliminate the personal surtaxes (20 percent and 36 percent) and to amend the personal income tax brackets and rates. The newly elected government said that it would not be proceeding with this proposal. The former government said that the proposed changes were intended to simplify the personal income tax calculation, and the elimination of the surtaxes would ensure that non-refundable tax credits provided the same maximum relief to all taxpayers. The newly elected government agreed that these proposals, now cancelled, would have meant a personal income tax increase of about $200 on average for approximately 1.8 million people. The 2018 budget proposed to enhance support for charitable giving by increasing the tax credit rate on donations exceeding $200 to 17.5 percent. The newly elected government did not include a specific reference to this proposal in the 2018 fall economic outlook.
The 2018 Ontario budget promised to parallel a 2018 federal budget measure that limited income sprinkling through the use of private corporations, effective for 2018 and subsequent years. (Income sprinkling, also referred to as "income splitting," involved diverting income from a high-income individual to a minor or other family member so that the income would be taxed at a lower combined federal and provincial rate.) Thus, Ontario personal income tax at the top marginal rate would apply to the split income of an adult family member who was not active in the business. The only specific mention of this initiative in the 2018 fall economic outlook was a reference to a federal change that allows payers of the tax on split income to apply the disability credit against that tax; Ontario will parallel that change.
The 2018 fall economic outlook said that, starting in 2019, the government would introduce a new non-refundable tax credit for low-income individuals and families (the "LIFT credit") to eliminate or reduce the provincial income tax for low-income taxpayers with employment income (other than those imprisoned for more than six months in the year). The maximum credit is the lesser of $850 and 5.05 percent of employment income, reduced by 10 percent of the greater of adjusted individual net income over $30,000 (fully phased out at $38,500) and adjusted family net income over $60,000 (fully phased out at $68,500), family income including a spouse's or common-law partner's income at year-end. The credit is limited to the Ontario personal income tax payable, excluding the Ontario health premium. The taxpayer must be a Canadian resident at the beginning of the year and an Ontario resident at year-end. Critics said that more assistance would be offered to low-income workers by reinstating the previous government's planned increase of the minimum wage to $15 an hour in 2019.
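A sketch of the LIFT credit formula as described, under the assumption that the two phase-out tests interact as a simple "greater of" reduction:

```python
def lift_credit(employment_income, indiv_net_income, family_net_income,
                ontario_tax_payable):
    """LIFT credit sketch: the lesser of $850 and 5.05% of employment
    income, reduced by 10% of the greater of adjusted individual net
    income over $30,000 and adjusted family net income over $60,000,
    capped at Ontario tax payable (excluding the health premium)."""
    base = min(850.0, 0.0505 * employment_income)
    clawback = 0.10 * max(indiv_net_income - 30_000,
                          family_net_income - 60_000, 0)
    return min(max(base - clawback, 0.0), ontario_tax_payable)

print(lift_credit(25_000, 32_000, 32_000, 700.0))  # 850 - 200 = 650.0
```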
The newly elected government's 2018 fall economic outlook promised to adjust the non-eligible dividend tax credit calculation to maintain the applicable rate at 3.2863 percent.
The newly elected government's 2018 fall economic outlook promised to parallel a federal change that modified the pension income tax credit to take into account additional federal veteran's benefits.
Sales Tax
Ontario HST would apply to First Nations members who purchased cannabis off-reserve, consistent with a status Indian's current off-reserve purchases of alcohol and tobacco. However, it was proposed that a status Indian who was registered to obtain medical cannabis from a licensed producer should be eligible for the point-of-sale rebate of the 8 percent provincial portion of the HST for purchases delivered off-reserve.
The newly elected government's 2018 fall economic outlook promised to remove a reference to the Canadian Red Book and the Canadian Older Car/Truck Red Book in regulation 1012 under the Retail Sales Tax Act.
The 2018 fall economic outlook also promised to remove a spent provision that provided one-time support to a business during the transition to HST in 2010. In addition, certain amendments to regulations were promised that would replace outdated references to the GST and change references to PST to refer instead to HST.
Sin Taxes
The previous government promised to amend the small beer manufacturer's tax credit and the definition of a microbrewer in the Alcohol and Gaming Regulation and Public Protection Act, 1996, to encourage growth of small beer manufacturers and microbrewers. Both amendments were to be effective from March 1, 2018. This initiative was not specifically mentioned in the 2018 fall economic outlook.
On October 18, 2018, the newly elected government proposed not to move forward with the basic tax rate increase on beer that was scheduled by the previous government for November 1, 2018. Subject to legislative approval, from November 1, 2018, the basic beer tax rate would remain at rates that were set to apply from March 1, 2018 to February 28, 2019. Earlier in the year, the newly elected government reduced the minimum beer price to $1 per bottle plus deposit. The government also promised a review that might result in beer and wine being available for sale in corner, grocery, and big-box stores. Increased hours (9 a.m. to 11 p.m., seven days a week) for the Beer Store, Liquor Control Board of Ontario stores, and authorized grocery stores were said to improve choice for consumers.
To provide regulated and restricted access to cannabis, the federal government's 2018 budget proposed a new federal excise duty at a flat rate (imposed at the time of packaging) of $1 per gram or $1 per seedling, with a lower flat rate for trim than for flower. At the time of delivery to a purchaser, a 10 percent ad valorem rate applied: the licensee was liable to pay the duty at the higher of the flat rate and the ad valorem rate. The federal government agreed with most provinces and territories (and Ontario intended to enter into such an agreement) to pay a participating jurisdiction 75 percent of the duty raised, plus any excess over the $100 million otherwise earned in duty by the federal government, for the first two years after legalization; the sharing with Ontario applied to sales intended for Ontario. The newly elected government reduced the estimate of the provincial share of federal excise duty revenue by $18 million, partially offset by a reduction in net costs (primarily from the construction of retail storefronts by the Ontario Cannabis Store) of $15 million, for a net reduction of $3 million in revenues from cannabis. The gaming commission will enforce the provincial rules, including the minimum age of 19. Proximity to children's playgrounds, hospitals, and child-care facilities is also restricted. Permitting the sale of cannabis through private retail stores means that the government will avoid the capital expenditure of bricks-and-mortar province-run stores.
As indicated in the 2017 budget and confirmed in the 2018 budget, effective at 12:01 a.m. on March 29, 2018, the tobacco tax rate increased from 16.475 cents to 18.475 cents per cigarette (equal to an increase of $4 per carton) and per gram for tobacco products other than cigars. Tobacco tax on a pack of 20 cigarettes equalled $3.70; on a pack of 25 cigarettes, $4.62; and on a carton of 200 cigarettes, $36.95. The rate of tobacco tax on the taxable price of cigars remained at 56.6 percent. A further increase of $4 per carton of cigarettes was planned for 2019, but the new government said in its 2018 fall economic outlook that it would not move forward with this initiative.
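A quick arithmetic check of the stated per-pack amounts (the published figures are rounded to the nearest cent):

```python
# 18.475 cents per cigarette, held as thousandths of a cent to stay exact.
rate_per_cigarette = 18_475
for count in (20, 25, 200):
    dollars = rate_per_cigarette * count / 100_000
    print(count, dollars)
# 20 -> 3.695 (stated $3.70), 25 -> 4.61875 ($4.62), 200 -> 36.95
```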
The previous government amended the Tobacco Tax Act in May 2017 to require that, effective for 2018, restrictions existed on the import, possession, sale, and delivery of cigarette filter components to registered tobacco manufacturers, subject to certain exemptions (for example, for transporters of such components). Penalties and offence provisions applied, and authorities were permitted to seize and cause forfeiture. This initiative was not specifically mentioned in the 2018 fall economic outlook, but Ontario issued a release thereon, which is available on the government's website. 20
Resource-Related Matters
The previous government proposed to no longer require First Nation individuals and band councils to apply for and use a certificate of exemption (an Ontario gas card) issued by the Ministry of Finance as proof of entitlement when purchasing gasoline on-reserve, effective in 2019. The regulatory proposal substitutes for the Ontario gas card a certificate of Indian status or a secure certificate of Indian status card from individuals; band councils would use an identifier issued by the government. This initiative was not specifically mentioned in the 2018 fall economic outlook.
Owing to late tax-rate changes, the previous Ontario government said that Florida may require an additional tax payment under the international fuel tax agreement for activity during the first quarter of 2018. This matter was not specifically raised in the 2018 fall economic outlook.
The 2017 Ontario fall economic outlook proposed changes to the Mining Tax Act that expedited a functional-currency election, effective on the day (December 14, 2017) that the omnibus act received royal assent. The 2018 fall economic outlook did not specifically mention this initiative.
Relief from taxes under the Electricity Act, due to expire at the end of 2018, is extended by the 2018 fall economic outlook until the end of 2022. Relief applies to the transfer tax (reduced from 33 percent to 22 percent under the proposal's time extension, and to 0 percent for transfers by municipal electrical utilities with fewer than 30,000 customers) and certain payments in lieu of taxes (PILs) payable on the transfer of electricity assets to the private sector. Capital gains arising from deemed dispositions under the PILs rules are also exempt.
Real Estate Taxes
The 2018 budget proposed a regulation to allow land transfer tax on certain unregistered dispositions of beneficial interests in land, through certain types of partnerships and trusts, to be payable within 30 days after the end of the calendar quarter of the disposition, rather than within 30 days of the disposition. The change was intended to reduce the administrative burden of reporting and payment. The 2018 fall economic outlook did not specifically mention this initiative.
The previous government planned to post on its website guidance regarding minimum information and documentation that a partnership or the authorized representative of a trust should provide when submitting a consolidated quarterly filing for the land transfer tax. The 2018 fall economic outlook did not specifically refer to this initiative.
The previous government announced that the province continued to review issues raised in earlier consultations regarding the land transfer tax. The 2018 fall economic outlook did not specifically refer to this initiative.
As part of a commitment announced in the 2017 budget, the 2018 budget made further rate adjustments to modernize the property taxation of railway rights-of-way. Ontario reduced further rate inequities by increasing the lowest property tax rates on mainline railway rights-of-way to a minimum of $110 per acre in 2018. (The lowest mainline rate in 2016 was about $35 per acre.) The newly elected government did not specifically refer to this initiative in the 2018 fall economic outlook.
In recognition of challenges faced by the shortline sector of the railway industry, the 2018 budget promised to continue to freeze shortline railway property tax rates at 2016 levels. The 2018 fall economic outlook did not specifically refer to this initiative.
In response to municipal concerns regarding the property tax revenue received in respect of high-tonnage rail lines, the 2018 budget announced that, beginning in 2018, municipalities would have the option to increase rates per acre of high-tonnage rail lines in accordance with a new adjusted tax rate schedule. Details were promised for the spring of 2018. The 2018 fall economic outlook did not specifically refer to this initiative.
The 2018 budget amended the Assessment Act to exempt non-profit child-care facilities that leased space in otherwise tax-exempt properties. This is consistent with the Municipal Property Assessment Corporation's (MPAC's) historical treatment of these facilities. The 2018 fall economic outlook did not specifically refer to this initiative.
The 2018 budget committed to granting the city of Toronto the right to provide a property tax reduction of up to 50 percent to qualifying facilities that offer affordable spaces for the arts and culture sector. The 2018 fall economic outlook did not specifically refer to this initiative.
Businesses on land owned by Victoria University in Toronto benefited from a property tax exemption, unlike other businesses and provincial universities. The 2018 budget committed to legislative amendments to ensure that only lands owned and occupied by the university were exempt. Any property tax increases to tenants would be phased in over several years. The 2018 fall economic outlook did not specifically refer to this initiative.
The 2018 budget promised to review-including consultation with affected municipalities and airport authorities-payments in lieu of property taxes based on airport annual passenger rates (previously calculated in 2001 when the payments in lieu of property taxes were introduced). The 2018 fall economic outlook did not specifically refer to this initiative.
Following a review promised in the 2017 budget of the vacancy rebate and reduction programs under the education property tax (paralleling the municipal portion), the 2018 budget promised to align the education property tax with recent changes to municipal vacancy programs, related to the 2016 budget, to ensure greater provincial consistency. The changes were to take effect in 2019, to allow adequate business planning. The 2018 fall economic outlook did not specifically refer to this initiative.
According to the 2018 budget, an adjustment to the education property tax rate calculation from the 2016 budget would be maintained in 2018. Ontario would also continue to monitor this tax, including the ability to verify accurate remittance. The 2018 fall economic outlook did not specifically refer to this initiative.
For the 2016 assessment update, Ontario had introduced an advance disclosure process that allowed businesses to contribute to determining the finalization of assessed values. To strengthen this process, the 2018 budget proposed that for the 2021 taxation year, the valuation date would be January 1, 2019, to encourage a more open exchange of information. The 2018 fall economic outlook did not specifically refer to this initiative.
Ontario also started to review requests for information by property owners. In addition, to ensure that compliant parties were not disadvantaged during valuation or on appeal, the previous government promised to review the format of MPAC's requests and to make amendments in the fall of 2018 to address non-compliance. The 2018 fall economic outlook did not specifically refer to this initiative.
Effective December 16, 2017, Teraview, Ontario's land registration system, was updated to include new land transfer tax statements on the application of the non-resident speculation tax (NRST). These new statements replaced the three statement options available when the NRST was introduced. NRST could not be paid at the time of electronic registration until December 30, 2017; transfers subject to NRST and registered before that date required prepayment to the Ministry of Finance of both NRST and land transfer tax. If documents were required to be registered at the Land Registry Office on or before December 29, 2017, the NRST payable had to be prepaid (along with land transfer tax) to the Ministry of Finance. The land transfer tax affidavit was amended to incorporate the required NRST statements. The 2018 fall economic outlook did not specifically refer to this initiative.
Provincial land tax is property tax paid in unincorporated areas of northern Ontario outside municipal boundaries. A review of the tax was announced in 2013, and the final phase of reform was announced in 2017, including confirmation that the tax rate would be $250 per $100,000 of assessed value. This initiative was not specifically referred to in the 2018 fall economic outlook. Annual rate changes, starting in 2018, are shown in table 20.
The 2018 fall economic outlook proposed to amend the Assessment Act to create an Ontario property tax exemption for properties occupied by branches of the Royal Canadian Legion.
Pensions
See the discussion under "Other" below.
Other
Ontario promised to follow federal anti-avoidance rules relating to the small business deduction for the EHT exemption. Starting in 2019 (subject to public comment on these changes), the exemption will be available only to an individual, a charity, a not-for-profit organization, a private trust, a partnership, or a CCPC; the province will incorporate federal anti-avoidance rules precluding the multiplication of the small business deduction and will set a rate for associated employers consistent with the exemption. The newly elected Ontario government seems to have reversed the province's position on exploring measures that target the EHT exemption.
Ontario announced that it continued to work closely with the federal, provincial, and territorial governments and the CRA to combat aggressive tax-planning schemes eroding the common tax base. The 2017 budget created a group of expert tax advisers for this purpose.
In furtherance of its efforts to combat the underground economy, Ontario promised several measures. Regarding electronic sales suppression, Ontario promised to address the practice and to mandate that prescribed businesses update their electronic cash register systems to meet legal requirements that stop businesses from manipulating sales transaction information. Continued consultation was promised for the coming months in order to ensure, inter alia, a reasonable transition period; the province also promised to consider financial and other support. Both initiatives were part of a provincial commitment to make the transition as easy as possible. The newly elected government's website contains information on the issue, but the legislation enacted by the previous government (the Revenue Integrity Act) will not come into force until a date is proclaimed.
To address unregulated tobacco, the 2018 budget committed to a balanced approach of enforcement and partnerships. In addition to work done imposing penalties and seizing tobacco products, Ontario would penalize and thus prevent the diversion of raw-leaf tobacco; implement "track and trace" technology to monitor the movement and location of raw-leaf tobacco; support the Ontario Provincial Police (OPP) in expanding the contraband tobacco enforcement team; expand police services to fund and thus support tobacco investigations; and make legislative amendments that would allow a court to authorize tracking devices in an investigation, in order to improve the tracking and monitoring of unregulated tobacco. The 2018 fall economic outlook did not specifically refer to these initiatives.
To improve administrative effectiveness or enforcement, to maintain collections, and to enhance legislative clarity and flexibility to preserve policy intent, the 2018 budget proposed to amend various acts, including the Municipal Tax Assistance Act and various statutes administered by Ontario Finance. Additional proposed legislative initiatives included amendments to the Pooled Registered Pension Plans Act, 2015, to incorporate the federal process for entering into or amending an existing agreement under the federal act, and amendments to the Climate Change Mitigation and Low-carbon Economy Act, 2016, for the reimbursement of expenditures by the Crown for funding initiatives that are reasonably likely to reduce or support a reduction of greenhouse gas emissions. These items were not specifically mentioned in the 2018 fall economic outlook by the newly elected government.
The newly elected government promised to end the former government's cap-and-trade carbon tax effective July 3, 2018. The 2018 fall economic outlook says that the next stage of the provincial strategy involves achieving transparency about the actual cost of a federal carbon tax in the absence of a provincial one. A constitutional challenge was later filed by the province.
The newly elected government also took action on so-called hallway health care by reducing pressure on hospitals-for example, by investing more in hospital beds and spaces in hospitals and communities. Increased investment in long-term-care beds will be made over the next five years, along with more investment in the treatment of mental health and addiction issues.
Starting in March 2019, individuals under the age of 25 who are not covered by private plans will have eligible prescriptions covered.
March 27, 2019 will be designated as Special Hockey Day to celebrate the contributions of those involved in this important initiative.
In the 2018 fall economic outlook, the newly elected government also froze driver's licence fees.
The new government announced a provision built into the fiscal plan for tax measures to strengthen Ontario's economy, such as paralleling a potential federal response to the accelerated capital cost allowance for new assets to address US tax reform.

The environmental commissioner, the child and youth advocate, and the French-language services commissioner will become part of the auditor general's office or of the provincial ombudsman's office, according to the 2018 fall economic outlook.
The 2018 fall economic outlook said that the newly elected government plans to reduce electricity bills by 12 percent and will end green energy contracts, close the Thunder Bay generating station, and extend operation of the Pickering nuclear generating station until 2024. The Ontario Energy Board will have its governance modernized to deliver accountability and predictability. The government is planning to publicly review current electricity pricing for industrial users. To encourage and allow more time for consolidation in the electricity distribution sector, the 2018 fall economic outlook extended two time-limited transfer tax incentives and a capital gains exemption under the deemed disposition rules (scheduled to expire at the end of 2018) until the end of calendar 2022. Ontario plans consultations to consider different ways to promote the electricity distribution sector.
Ontario will support its partners who wish to expand oil distribution and also protect their competitiveness from the federal carbon tax.
The Access to Natural Gas Act, 2018, tabled by the newly elected government, proposes a program to provide natural gas to outlying communities that will thereby become more attractive for job creation and new business growth.
The new government promised to expand broadband and cellular projects in rural and northern communities and some urban areas. The 2018 fall economic outlook said that a strategy for achieving these objectives would be released in early 2019.
The minister of agriculture, food, and rural affairs will launch an agricultural advisory group to inform government policies.
The newly elected government plans to dissolve the Ontario College of Trades and create a more modern outcomes-focused system. The new government plans to review support for apprentices and businesses that employ and train them. In addition, the government will review the workers' compensation system to ensure that it remains sustainable. The new government will create efficiencies in the pension sector by supporting mergers and conversions that will reduce costs and increase efficiencies, including some under way in the hospital, municipal, and university sectors. Changes to the Pension Benefits Act also have been proposed that would allow plan administrators to allow electronic beneficiary designations to reduce red tape.
Ontario wishes to support a streamlined capital markets regulatory system and will respect any related Supreme Court of Canada decision. The new Financial Services Regulatory Authority of Ontario (FSRAO) will deter fraud, foster competition and innovation, and streamline the regulatory processes. The new government proposes to amalgamate the FSRAO with the Ontario Deposit Insurance Corporation to simplify the province's regulation.
The new Ontario government is committed to working with willing partners to ensure sustainable northern development and will review the Far North Act, 2010, to ensure that land-use planning aligns with local, First Nations, and provincial priorities. The government will also continue to explore ways to encourage development of northern natural resources and will establish a special mining working group to focus on speedier regulatory approvals and the attraction of major new investments. Algoma will be supported in its business restructuring; Ontario will continue to dedicate the resources necessary to fight forest fires across the province; and highway 11/17 will be enhanced by two additional lanes in certain stretches. The province will review other initiatives to meet northerners' transportation needs, including rail and bus services.
Ontario will develop a plan to assume responsibility for the Toronto Transit Commission in order to rationalize transportation in the Greater Toronto Area (GTA) and Greater Hamilton Area; a special adviser was appointed in August 2018. A review of high-speed rail's future in Southwestern Ontario is under way. The province will resume the environmental assessment for the GTA west highway corridor, suspended in 2015, to relieve congestion in the GTA. A review of the Metrolinx agency will proceed with the aim of developing an efficient regional transit system.
The newly elected government ended the drive clean program for passenger and light-duty vehicles effective April 1, 2019, owing to enhanced automobile industry standards.
A plan to be launched in the spring of 2019 will aim at increasing housing supply through consultation and over the longer term through actions rolled out over the next 18 months. Reintroduction of a rent control exemption for new rental units first occupied after November 15, 2018 is intended to encourage an increase in housing supply.
Increased fairness in automobile insurance rates, reduction of the regulatory burden in automobile insurance, and increased computerization of the industry will be initiated by the new government. Regulatory oversight will also be ensured for financial planners and advisers to give comfort to consumers.
Public consultation began in September 2018 to reform education, including changes to the severity of responses to cases involving sexually abusive teachers. Math skills are important to success in the labour market; the new government thought that supporting a focus on fundamentals was more important to this end than the current discovery-based learning environment. Free speech on university and college campuses will be supported by the development by schools of a policy backed by an annual report; the government set a deadline of January 1, 2019. Noncompliant schools may be subject to a reduction in operating grant funding.
The newly elected government's 2018 fall economic outlook promised to reform social assistance with a view to improving employment outcomes.
Variable benefit accounts planned by the new government will allow retirees with defined benefit plans to receive income directly from their plans.
Additional funding for digital, investigative, and analytical resources is available for fighting criminals, and a new team led by Crown attorneys will ensure that the best evidence is available to detain individuals charged with serious firearm offences. Nine new OPP detachments will replace aging facilities, and the aging Public Safety Radio Network-a critical resource for frontline and emergency responders-will be replaced. Adjudicative tribunals accountable to the attorney general will be reviewed for efficiency. A public awareness campaign will provide information on the dangers and identification of illegal tobacco.
Completion of a monument to Canadians who served in Afghanistan, promised by the newly elected government, is expected in the fall of 2019.
Green bonds capitalize on low interest rates and enable Ontario to raise funds, for example, for transit initiatives, extreme-weather-resistant infrastructure, and energy conservation and efficiency projects such as health- and education-related projects. The 2018 fall economic outlook said that Ontario planned to issue its next green bond by the end of the 2018-19 fiscal year.
The 2018 fall economic outlook promised to hold province-wide consultations for the 2019 budget. Individuals and organizations can also e-mail or mail submissions directly to the Ontario minister of finance.
Quebec (Table 21)
Tax Highlights
- Small business rates standardized
- Health-care contributions reduced
- Sales tax on digital economy
Corporate Income Tax
The threshold entitlement to reduced health-care contributions increased from $5 million in 2018 to $5.5 million in 2019, and will continue to increase in equal annual amounts until the threshold reaches $7 million in 2022. The threshold will be indexed automatically in 2023 and subsequent years.
A small or medium-sized business (SMB) that was an eligible specified employer whose payroll did not exceed $1 million and that was in the primary or manufacturing sector or in the service or construction sector, had its rate of health services fund contributions decreased from 1.5 percent to 1.25 percent and from 2.3 percent to 1.65 percent, respectively, on a straight-line basis over five years starting on budget day. The contribution rate also decreased if the eligible specified employer in any of those sectors had an annual payroll between $1 million and $5 million; the rate was gradually reduced and varied from 1.65 to 4.26 percent, and the payroll limit gradually increased from $5 million to $7 million.
The small business income tax rate was gradually reduced for an SMB not in the primary or manufacturing sector and reaches 4 percent in 2021. This change was effective for a taxation year that ended after budget day; the first instalment thereafter can be adjusted. The maximum rate for an SMB in 2021 and subsequent years will be 7.5 percent; the gradually reduced additional deduction for an SMB in the primary or manufacturing sector is then eliminated. A new refundable tax credit for an employee of an SMB was intended to encourage training. After budget day and before 2023, an SMB (a qualified corporation that has a Quebec establishment and carries on business in Quebec) could claim up to 30 percent of eligible training expenditures, to a maximum of $5,460 per annum, if the SMB's payroll for the taxation year or fiscal period did not exceed $5 million. For other SMBs, the 30 percent credit rate decreased linearly until payroll reached $7 million. After budget day, on-the-job training credits were enhanced for aboriginal workers, and the maximum weekly qualified expenditure limit and hourly rate increased for all eligible trainees.
An additional capital cost allowance (CCA) of 60 percent replaced the 35 percent additional CCA introduced in the March 2017 Quebec Economic Plan for two years. The new CCA rate is available for two years for new manufacturing or processing equipment and for new general-purpose electronic data-processing equipment, both acquired after March 27, 2018 and before April 2020.
A tax holiday for an investment project carried on after budget day was broadened to extend to the development of an eligible digital platform. An eligible digital platform meant a computer environment enabling content management or use that served as an intermediary in accessing information, services, or property supplied or edited by the corporation or partnership or by a third party, and that was not a tax-exempt platform. The refundable tax credit for the production of multimedia events or environments presented outside Quebec was amended to remove the $350,000 per production limit. An application must be submitted for an advance ruling or a certificate (if no prior advance ruling application had been made) to the Société de développement des entreprises culturelles (SODEC) after budget day.
A temporary refundable tax credit was introduced for expenditures related to the digital transformation of print media activities incurred after budget day and before 2023. A qualification certificate from Investissement Québec was required to the effect that the company produced and disseminated a print or digital information medium containing original written content. The credit was 35 percent of the lesser of eligible digital conversion costs and the annual limit of $20 million. Tax assistance of up to $7 million was provided annually.
For the refundable tax credit for film dubbing, the limit of 45 percent of consideration paid for the performance of a dubbing contract was eliminated effective after budget day. Amendments ensure that a virtual reality documentary may provide fewer than 30 minutes of programming (per episode in the case of a series) for the purposes of the refundable film production services credit. This amendment will apply to qualified productions for which a certificate application was filed with SODEC after budget day.
Personal Income Tax
Pursuant to the November 21, 2017 Economic Plan Update, the tax rate for the lowest tax bracket was reduced retroactive for all of 2017 from 16 percent to 15 percent.
A new first-time home buyer's non-refundable tax credit for a qualifying housing unit acquired after 2017 was available in 2018 to a non-trust individual Quebec resident in an amount not exceeding 15 percent (the current first taxable income bracket) of a $5,000 acquisition cost. The unused portion of this maximum credit of $750 is not transferable to a spouse. The individual (or spouse), or a specified disabled person in need of a more accessible home, must have intended to occupy the home no more than one year after the purchase, and the individual or spouse cannot have owned a housing unit that was occupied by the individual in the fourth preceding calendar year before the acquisition.
The RénoVert refundable tax credit was extended for another year to the end of March 2019 (for qualified expenses paid before 2020) for households that have not reached the $52,500 maximum.
Tax-shield benefits were enhanced; the maximum increase in eligible work income was raised from $3,000 to $4,000 (in the previous year) for each household member as of 2018.
The threshold for the tax credit to encourage experienced workers to stay in the labour market was lowered from 62 to 61 years of age. The new category of 61-yearold workers could claim a tax credit on a maximum of $3,000 of eligible work income in 2018; maximum eligible work income for older workers was increased by $1,000.
The tax credit for a person living alone was broadened to include an individual who ordinarily resided in a self-contained domestic establishment maintained by the individual for himself or herself and for another person who was under 18 or was an eligible student for whom the individual was the parent, grandparent, or great-grandparent where the individual ordinarily lived in the establishment throughout the year or until death. The individual who maintained such an establishment could claim the tax credit for persons living alone, for 2018 and subsequent years.
The refundable tax credit limit applicable to child-care expenses for a child with a severe and prolonged impairment in mental or physical function and for other children under the age of seven at year-end was $13,000 and $9,500, respectively, for 2018. The limit for impaired children allowed for expenses of up to $50 per day for full-time child care and otherwise up to $36.50 for children under the age of seven. The two limits mentioned above and the other annual limit of $5,000 (for all other cases) are automatically indexed as of 2019.
The tax credit of up to $6,250 for the first major cultural gift (after July 3, 2013) was extended for five years starting in 2018 and ending after 2022.
After March 2018, the Youth Alternative Program was replaced by the Aim for Employment Program, the benefits received from which were taxable.
Informal caregivers of an eligible relative (not lived with or housed, but regularly and continuously helped) were eligible for a supplementary tax credit that consisted of a basic amount of $652 for 2018 plus, now, a supplement of $533 (indexed after 2018); that supplementary credit was reduced depending upon the relative's income for that year. The supplement was reduced for 2018 at a rate of 16 percent for each dollar of income in excess of a threshold of $23,700. The minimum period in the year of assistance must consist of at least 185 out of 365 consecutive days. An eligible relative, inter alia, must not live in a dwelling in a private seniors' residence or in a public network facility and must have a severe or prolonged impairment. In a conjugal co-residence, the tax credit was a lump sum of $1,015 for 2018. Effective March 27, 2018, a nurse practitioner could certify the impairment that is required for this credit, or certify that the relative could not live alone or needed assistance in carrying out a basic activity of daily living.
The refundable tax credit for volunteer respite for informal caregivers was increased, depending on the number of hours of service. The current system of a $500 credit for 400 hours or more was replaced by a sliding scale: a $250 credit for 200 to under 300 hours; a $500 credit for 300 to under 400 hours; and a $750 credit for 400 hours and more. The annual "envelope" for each care recipient of an informal caregiver was raised from $1,000 to $1,500.
The refundable tax credit for the acquisition or rental of property intended to help seniors to live longer independently was increased for 2018 and subsequent years through a reduction from $500 to $250 of the threshold for claiming the credit and an extension of the qualified property list; additions will include hearing aids and walkers.
Sales Tax
Starting after 2018, the budget implemented mandatory QST registration for a supplier of certain property or services in Quebec who is located in Canada (but outside Quebec), has no physical or significant presence in Quebec (a non-resident supplier), and whose taxable supplies in Quebec exceed $30,000. The supplier must register with Revenu Québec and collect and remit QST on certain supplies in the province to specified Quebec consumers. A specified Quebec consumer for the purposes of these rules is a person who is not a QST registrant and whose usual place of residence is in Quebec. A non-resident supplier located in Canada must collect and remit QST on taxable corporeal movable property supplied in Quebec. The Quebec government will take into account models from other jurisdictions that have similar systems.
These registration rules govern Quebec presence in regard to the digital economy. Mandatory registration also applies to digital property and services distribution platforms that control key elements of transactions, with respect to certain taxable supplies made to certain Quebec consumers. The non-resident was not considered to be a registrant generally and could not claim input tax refunds; registered recipients could not recover tax paid either. However, a qualifying non-resident could register under the general system instead, and must provide security of a value and in a form acceptable to the minister. A non-resident of Quebec and of Canada must collect and remit QST on certain supplies in Quebec to Quebec consumers, if such platforms control the key elements of transactions such as billing, transaction terms and conditions, and delivery terms. A digital platform means a platform that provides a service to a non-resident supplier by means of e-communication (such as an application store or a website) that enables the non-resident to make certain taxable supplies in Quebec to specified Quebec consumers. These rules apply to non-resident suppliers located in Canada (and to enabling digital platforms) from 2019; a non-resident of Canada must register after August 2019. The non-resident supplier must exceed $30,000 for all taxable supplies to persons reasonably considered to be consumers. A platform is not considered to control key elements of a transaction if it only supplies a transport service (as do digital platforms operated by Internet service providers and telecommunications companies), a service providing access to a payment system, or an advertising service that informs customers of various types of movable property or services offered by the non-resident supplier and links customers to the supplier's website.
An existing agreement requires the Canada Border Services Agency to be responsible for the collection (on behalf of the Quebec government) of QST applicable to property imported by a Quebec individual. In the spring of 2018, Quebec started a plan of cooperation with the federal government to improve tax collection at the borders.
Sin Taxes
Quebec promised to enter into an agreement with the federal government to receive revenue equal to an additional excise duty on cannabis intended for sale in Quebec.
Resource-Related Matters
An allowance for environmental studies was introduced in the Mining Tax Act: deduction of an amount up to an operator's cumulative environmental studies expenses account at year-end for expenses incurred after budget day. Consequential adjustments were made for the fiscal year ending after budget day to the refundable duties credit for losses.
The refundable tax credit for the production of ethanol, cellulosic ethanol, and biodiesel fuel in Quebec was extended for five years until the end of March 2023 to promote their production and consumption in Quebec. After March 2018 and in order to simplify the application of the tax credit and to improve the predictability of the assistance that might be obtained by a qualified corporation, a fixed rate of 3 cents, 16 cents, and 14 cents, respectively, per litre of ethanol, cellulosic ethanol, and biodiesel fuel was used to calculate the tax credit; the monthly ceiling on the production of each was also then raised to 821,917 litres times the number of days in the particular month.
To modernize and transform the forestry sector and bioenergy, a refundable tax credit was introduced for pyrolysis oil production in Quebec. After March 2018, the refundable tax credit was set for five years and was calculated at the rate of 8 cents per litre up to 100 million litres per year. The credit was granted to a qualified corporation that after March 2018 produced eligible pyrolysis oil in Quebec from residual forest biomass sold in and intended for Quebec.
Real Estate Taxes
No changes were announced.
Pensions
No changes were announced.
Other
Amendments were promised to the legislation constituting the Capital régional et coopératif Desjardins (CRCD) and to related tax legislation to create a new class of shares for the claiming by an individual of a temporary non-refundable tax credit of 10 percent of the value of the shares or fractional shares converted, up to a value of $15,000 for a maximum credit of $1,500. Only current shareholders who have held CRCD shares for at least seven years could acquire this new class through exchange or conversion after February 2018; the shares were redeemable after a new, mandatory retention period. The tax credit for all shares in the existing class (acquired after February 2018) was reduced from 40 percent to 35 percent.
The tax credit rate was maintained at 20 percent for an eligible share in Fondaction (le Fonds de développement de la Confédération des syndicats nationaux pour la coopération et l'emploi, a labour-sponsored fund) acquired in the three fiscal years before June 2021. A limit will be imposed on capital raised by Fondaction to control the expenditure attributable to this new government support.
The holder of a taxi driver's permit was granted a temporary increase of up to $500, in 2017 and 2018, in the available refundable tax credit, from a maximum of $569 and $574 to $1,069 and $1,074, respectively. Some taxpayers are only eligible for one-half of the maximum credit under current rules. A new notice of assessment was sent before June 2018 to all taxpayers for whom Revenu Québec had already determined the 2017 credit.
The refundable tax credit attributing a work premium was enhanced by an increase of 2.6 percentage points over five years (from 2018 to 2022) of the rate for calculating the work premium. Starting in 2018, there was a relaxation of eligibility for the supplement to the work premium.
Annual Quebec Pension Plan (QPP) contributions increased for employers and employees to reflect QPP enhancements phased in from 2019 to 2024. The dividend tax credit rate for the gross-up amount of eligible dividends received was decreased to reflect provincial and federal changes, from 11.9 percent of the gross-up amount to 11.86 percent if received after the budget day and before 2019. The rate was reduced to 11.78 percent in 2019 and to 11.70 percent after 2019. The rate of dividend tax credit for non-eligible dividends received was similarly reduced from 7.05 percent of the dividend gross-up amount to 6.28 percent for dividends received after budget day and before 2019, to 5.55 percent in 2019, 4.77 percent in 2020, and 4.01 percent in 2021 and subsequent years.
The dual-basis compensation tax for financial institutions was extended for an additional five years in 2017. Rates for the new periods are set out in table 23; the compensation tax rates applicable to amounts paid as wages were adjusted after March 2018. The financial institution (throughout the year) must pay a compensation tax on wages paid after March 2018 at the applicable rate times the lesser of wages paid and the maximum taxable for the year, as shown in table 23.
In April 2017, the National Assembly's Committee on Public Finance tabled a report on recommendations to the government for combatting the erosion of its tax base; the Tax Fairness Action Plan was published in November 2017 as Quebec's response to those recommendations. The plan identified 14 measures to prevent that erosion, divided into five strategic areas including the sales tax measures covered above. Quebec also recommended measures to recover personal and corporate income tax owed to it, strengthen tax and corporate transparency, and block access to government contracts for abusive tax avoiders, including those who use tax havens in abusive tax avoidance. The Registraire des entreprises du Québec will receive a boost in information technology development in order to improve the quality of information on the more than 900,000 companies registered and to enable more efficient use of the register. Quebec will also test the reliability of information in registries of which the province supported harmonization during negotiations of the Canadian free trade agreement.
In 2017, Quebec announced its review of the voluntary disclosures program (VDP) to take into account, among other things, December 2017 federal changes to tighten eligibility for the CRA's VDP. Consultations with Revenu Québec regarding changes to the program were promised to be carried out in 2018-19.
A reward would be offered by Quebec for a tax informant to cover significant personal, social, or professional costs if at least $100,000 of tax was recoverable. Certain information must be provided by the informant.
Quebec announced that it would amend its legislation to harmonize with the federal proposals on the tax on split income, and legislation proposed by the federal government on lookthrough rules for partnerships and trusts, as well as proposals to improve anti-avoidance rules to prevent the use of financial instruments to gain a tax advantage by creating artificial losses.
Quebec promised funding to combat the abuse of employment agencies and to inform employees of their rights and responsibilities.
Quebec also promised to subject food trucks and trailers to mandatory billing procedures through a sales recording module (SRM) to be implemented by the summer of 2019. The measure will be similar to the establishment of SRMs in restaurants and bars.
Quebec's Balanced Budget Act is intended to require the province to maintain a balanced budget and requires the establishment of a stabilization reserve to facilitate multi-year budget planning and also to allow sums to be deposited in the Generations Fund. Reporting obligations are required that depend on the size and cause of a deficit. The stabilization reserve is used to balance a projected budget deficit without requiring additional action such as spending reductions or revenue increases. To keep the budget balanced, the government planned to use $3 billion from the stabilization reserve for fiscal years 2018-19 to 2020-21.
New Brunswick (Table 24)
Tax Highlights
- Corporate income tax rate did not increase
- Personal income tax rates did not increase
- Small business rate reduced to 3 percent
Corporate Income Tax
The small business income tax rate decreased from 3.5 percent to 3 percent, effective April 1, 2017. The New Brunswick government committed to a reduction of that rate to 2.5 percent during its mandate, by the end of 2018.
Pensions
No changes were announced.
Other
The strategic program review process was completed, as announced in 2016. The review process identified measures to reduce the accumulating debt and put the province on track to balance the budget in 2020-21. The initiatives identified continue to be implemented; consequently, the government did not introduce new revenue measures or expenditure restraint in the 2018 budget. New Brunswick was consulting to develop its own carbon-pricing policy to address federal government requirements.
Pre-budget consultations were held with provincial residents.
Nova Scotia (Table 25)
Tax Highlights
- No corporate tax rate changes
- Some personal income tax credits increased for a taxpayer earning less than $75,000
Tax Changes
Corporate Income Tax
No changes were announced.
Personal Income Tax
Effective for 2018 and subsequent years, some personal income tax credits-a basic personal amount, a spousal amount, an amount for an eligible dependant, and an age amount-increased for an individual whose taxable income was under $25,000; the increase phases out on a straight-line basis until taxable income reaches $75,000, when it is eliminated. The first three amounts increased from $8,481 to $11,481 and the age amount for low-income seniors increased from $4,141 to $5,606. The 2018 budget removed the upper limit on eligible medical expenses for the tax credit for a financially dependent relative; the cap used to be set at $10,000.
To provide greater access to the caregiver benefit program, the government expanded eligibility criteria, such as moderate to significant memory loss, a high risk of falls, and a high level of physical impairment. Another eligibility expansion was promised for the spring of 2019.
Effective after 2018, a new innovation equity tax credit was introduced. The budget did not contain details but promised a more narrowly focused credit that had a threshold consistent with other provinces' programs. The existing credit was to be phased out over time.
Enhancements were made to income assistance: exclusion of child support payments in calculating eligibility and an increase in the tax-free poverty reduction
Notes: Expenditures were reported by department, except for debt expenses. Health expenditure covers "the health-care sector." Debt servicing included "debt charges and financial expenses." Expenditures were a combination of current and capital account expenditures by department in the government reporting entity. Offshore royalties were shown as $975 million and were included in "other" tax revenue, along with $80 million from mining tax and royalties. The total revenue figure includes net income of government business enterprises. | 2018-12-27T04:35:24.878Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "04d6dd37bf84bc75b9a7872a435fb0869be2bb66",
"oa_license": null,
"oa_url": "https://www.ctf.ca/CTFWEB/Documents/CTJ%202019/Issue%201/2019CTJ1_Full_Issue_Public.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2bd6a21c20bcf2c3e49417f439a560b074741b5f",
"s2fieldsofstudy": [
"Economics",
"Business",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
252857584 | pes2o/s2orc | v3-fos-license | Citizen science reveals meteorological determinants of frog calling at a continental scale
Here we investigate the strength of the relationships between meteorological factors and calling behaviour of 100 Australian frog species using continent‐wide citizen science data. First, we use this dataset to quantify the meteorological factors that best predict frog calling. Second, we investigate the strength of interactions among predictor variables. Third, we assess whether frog species cluster into distinct groups based on shared drivers of calling.
| INTRODUCTION
The predictability of phenological events can be vital to our understanding of species distributions and population trends. For example, basic information, such as the timing of breeding seasons, is often used to guide biodiversity monitoring, with many species only detected reliably during the breeding season (e.g., Roth et al., 2014;Wilson & Bart, 1985). Accurate information on drivers of phenology is therefore critical to help inform conservation measures and priorities, such as well-timed, cost-effective population monitoring (Canavero et al., 2019;Visser et al., 2010).
Linking the timing of phenological events to environmental conditions is necessary to understand the impact of climate change on a species' phenology (e.g., Kellermann & van Riper, 2015). However, the cues driving circannual rhythms are difficult to assess as separate factors, as they often covary with each other over time and space. Seasonal trends in temperature are thought to be the main determinants of phenology for many species (Ficetola & Maiorano, 2016; Mazaris et al., 2013; Simonneau et al., 2016; Visser et al., 2006), though day of year (season-specific day length) may be an overlooked, noise-free factor that best explains temporal patterns. Day of year is intrinsically linked with other factors known to influence a species' phenology (e.g., temperature or rainfall) (Adole et al., 2019; Canavero & Arim, 2009). Understanding how meteorological conditions influence breeding is a critical issue, considering how essential breeding is to the survival of a species, the rapid changes in environmental conditions and stability throughout most of the world (Hällfors et al., 2020; Wadgymar et al., 2018), and, crucially, that historically coupled environmental cues, such as photoperiod and rainfall, may become uncoupled under future climate scenarios (Peñuelas et al., 2004; Thackeray et al., 2016).
One of the main challenges in understanding drivers of breeding phenology has been the lack of empirical data across the range of a species (Duursma et al., 2017; Elmore et al., 2016; Hurlbert & Liang, 2012). Robust population monitoring is often dependent on detecting breeding behaviour, and understanding optimal survey windows is a critical first step. This is particularly true for frogs, as many species are cryptic when not breeding. Their breeding phenology (Gomez-Mestre et al., 2012; Oseen & Wassersug, 2002; Ospina et al., 2013; Yoo & Jang, 2012) and consequently detection probability (MacKenzie, 2006; MacKenzie et al., 2003; Mackenzie & Royle, 2005) are strongly tied to meteorological cues. These limitations affect our ability to synthesize the results of multiple studies and in turn our ability to monitor meaningful trends in phenology and across populations. With so many logistical challenges to long-term and broad-scale monitoring, there are few monitoring programs of large magnitude.
Specific associations between frog calling and a range of abiotic variables including rainfall, temperature, humidity, vapour pressure, and moonlight have been investigated (see Table 1 for a global summary of previous research). These studies show that meteorological influences, either alone or in combination, frequently stimulate calling behaviour. However, the strength of the associations between calling and these factors varies among species and across studies (Heard et al., 2015;Oseen & Wassersug, 2002;Saenz et al., 2006). Broadly, rainfall has been found to be the strongest predictor of calling in the tropics, where photoperiod appears largely uninformative (Bradshaw & Holzapfel, 2007). In temperate zones, a combination of rainfall and temperature has been found to be most influential (Duellman & Trueb, 1994). In contrast, photoperiod was the most significant factor associated with frog calling in a long-term study in temperate Uruguay (Canavero & Arim, 2009).
The relationship of frog calling to meteorological factors varies among species and is not well documented across taxa, despite frog species often being categorized in the literature based on cues assumed to predict their breeding. The term "explosive breeder", for example, is used for species reliant on ephemeral ponds and impacted by rainfall, but potentially less influenced by abiotic factors once calling has commenced. "Seasonal breeders" are thought to be reliant on permanent ponds and influenced by temperature, while "generalists/prolonged breeders" call throughout the year or the rainy season (Heard et al., 2015; Lemckert & Grigg, 2010; Oseen & Wassersug, 2002; Saenz et al., 2006).
Increasingly, data at the broad spatial and temporal scales that could address these limitations are being collected by citizen science (Bird et al., 2014;Hochachka et al., 2012), though remarkably few citizen science programs (5%) have focused on frogs (Lloyd et al., 2020).
Frogs are one of the most, if not the most, imperilled vertebrate groups due to compounding threats including disease, habitat loss, competition from invasive species, and climate change (Gillespie et al., 2020).
Critically, evidence shows that frogs are already responding to the direct and indirect impacts of climate change (Blaustein et al., 2010;Cohen et al., 2018;Parmesan, 2006). Here, we use continental-scale citizen science data that document the calling of 100 Australian frogs (42% of species known in Australia) to investigate the strength of the relationships between meteorological factors and calling behaviour, an imperfect but common proxy for breeding (Crouch & Paton, 2002;Dorcas et al., 2009;Pellet & Schmidt, 2005). While many interacting external and internal cues inform phenology (i.e., hormones stimulated by perceived photoperiod), as a large, macroscale analysis, we consider only external meteorological cues. First, we use this dataset to quantify the meteorological factors that best predict frog calling. Second, we investigate the strength of interactions among predictor variables. Third, we assess whether frog species cluster into distinct groups based on shared drivers of calling (i.e., explosive, seasonal, or generalist breeders).
| Overview
We used frog occurrence data across 3 years to quantify the relationship between frog calling - a proxy for breeding activity - and meteorological variables. Our objective was not to predict the "timing" of peak breeding activity, but rather to assess the influence of meteorological variables on frog calling behaviour. Because we performed a cross-species analysis, our objective was focused on the strength of the relationship for given predictors, as opposed to the direction of the relationship (i.e., positive or negative). To accomplish this goal, we performed the following three overall steps (Figure 1): (1) aggregated frog occurrence data from a popular citizen science dataset in Australia and spatially filtered these data to quantify whether a frog was calling in a particular grid cell on a given day; (2) integrated these data with meteorological variables (see Table 1); and (3) used boosted regression tree models to quantify which meteorological variables best predicted the likelihood of a species calling. We treat each of these steps in detail in the following sections.
Note (Table 1): For each study listed, the temporal and spatial scale of sampling is noted, the number of frog species studied, and the predictor variables explored in relation to calling behaviour. A checkmark indicates the study considered the variable indicated by that column. Lagged rainfall indicates the sum of rainfall over the preceding days (ranging from 1 to 7 days). If a latent temporal variable was included, details are provided.
| Frog occurrence data
We used frog occurrence data from FrogID, a national citizen science project led by the Australian Museum (Rowley et al., 2019). Since its inception in November 2017, FrogID has collected over 600,000 validated observation records from 211 species - 84% of frog species known in Australia. These data cover all of Australia (see Figure S1), with a bias towards the more highly populated east coast, particularly the state of New South Wales (10.4% land area of Australia yet 47% of all FrogID records). In addition to some spatial bias, there are temporal biases in submissions, with the highest number of frog records in spring/summer, but this peak corresponds with peak breeding times for the majority of frog species (see Liu et al., 2021). A large part of its success is because auditory (call) surveys are one of the most common survey methods used to detect breeding frogs (Crouch & Paton, 2002; Da Silva, 2010; Lepage et al., 1997; Pellet & Schmidt, 2005). Participants submit 20-60-second audio recordings of calling frogs using a smartphone app, and the app adds associated metadata (time, date, latitude, longitude, and an estimate of precision of geographic location) to each submission. After a recording is submitted, a team of experts at the Australian Museum independently identifies any frog species heard calling in the recordings. Recordings with identifiable frog calls
FIGURE 1 (a) Methods illustrated using a single species as an example. Frog call data from FrogID was fitted to 10 km² grid cells across the known range of each species and (ii) paired with meteorological variables from the same day and grid cell. This allowed us to (iv) pair call data with predictor variables within each species' range, as well as infer pseudo-absences from other species detected in each spatiotemporal subsample. (v) We used boosted regression trees to assess the relationship between calling and our predictor variables. (vi) The scaled predictor importance from each analysis output allowed us to (b) compare both the predictor variables' importance to each other on an individual species level, as well as the strength of multiple species' relationships to the same set of environmental drivers.
We used FrogID data from 11 November 2017 through 30 November 2020 (~36 months) and included observations of all species with a minimum of 100 validated observations, to reasonably represent each species' calling behaviour and the environmental conditions it experiences. Observations were aggregated to 10 km² grid cells across Australia (Chase et al., 2019; Field et al., 2009).
First, for every species, we extracted all records within that species' geographic range (Rowley et al., 2019). And within each grid cell in a species' range, for each day, a species was either recorded as present or "absent." In this way we account for both the spatial and temporal biases commonly present in citizen science datasets (Bird et al., 2014). We inferred "absence," or pseudo-absence, if a species was not recorded in a grid cell on a particular day, but other species were detected (Rowley et al., 2019) (see Appendix S1 for raw data jitter plots displaying in-range presence and absence patterns by variable). We also calculated the total number of submissions for each grid cell to be used as a proxy for sampling effort (see modelling below). After applying these inclusion criteria, we used data from, on average, 22% of the grid cells within a species' known range (SD 0.15) (see Table S1 for the total number of grid cells, observations, and cv AUC for each species model).
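As a concrete illustration of this construction, a minimal R sketch follows, assuming a data frame obs of validated records with hypothetical columns species, date, easting, and northing (metres in a projected coordinate system); the column names, the 10 km square cells, and the example focal species are illustrative assumptions rather than part of the FrogID data format.

## Minimal sketch of gridding and pseudo-absence inference (assumed columns)
obs$cell <- paste(floor(obs$easting / 10000),
                  floor(obs$northing / 10000), sep = "_")

## Every sampled cell/day combination counts as a "visit"
visits <- unique(obs[, c("cell", "date")])

## Presence/pseudo-absence for one hypothetical focal species: a visit where
## other species were recorded but the focal one was not becomes a 0
focal <- "Crinia signifera"
pres  <- unique(obs[obs$species == focal, c("cell", "date")])
pres$present <- 1
dat <- merge(visits, pres, by = c("cell", "date"), all.x = TRUE)
dat$present[is.na(dat$present)] <- 0          # inferred "absence"

## Sampling-effort covariate: total records per cell and day
effort <- aggregate(list(n_obs = obs$species),
                    by = list(cell = obs$cell, date = obs$date),
                    FUN = length)
dat <- merge(dat, effort, by = c("cell", "date"))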
| Meteorological variables
We collated data for the following environmental and meteorological variables: maximum temperature, mean of 10-day maximum temperature, minimum temperature, mean of 10-day minimum temperature, humidity, rainfall, cumulative rainfall from the previous 3 days, cumulative rainfall from the previous 10 days, and moon phase. We chose these variables based on a literature review of previously investigated variables, and their significance (see Table 1).
We downloaded Bureau of Meteorology (BOM) data for every day from 11 November 2017 to 30 November 2020 (BOM, 2020) to align with each cell of our aggregated frog occurrence data. Weather variables were aggregated and averaged from BOM 5 km² to 10 km² grid cells to create a database that paired call observations and weather data to uniform grid cells across the known range of a species for each grid cell and day of the 3-year dataset (see Figure 1 for a visual aid). Moon phase was calculated using the R package Suncalc (Thieurmel & Elmarhraoui, 2019). Aggregate rainfall over the previous 3 and 10 days was calculated as the cumulative total rainfall from BOM daily rainfall data. Although wind has also been investigated as a factor in call probability (Oseen & Wassersug, 2002; Penman et al., 2006), it was excluded because the available data are not accurate enough to associate with daily call activity at a grid cell scale (Jakob, 2010). We assigned a numerical day of year to each day as an explanatory factor representing day length, following harmonic regression approaches (Chatfield & Xing, 2019; Weir et al., 2005); unlike photoperiod, day of year distinguishes spring and autumn days from one another (see Figure S2 for a plot displaying the correlation between day of year and photoperiod).
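The derived predictors described above could be computed along the following lines. The suncalc package is the one named in the text; the data frame wx, its columns (cell, date, rain), the use of the zoo package for the rolling sums, and the choice to exclude the current day from the lagged totals are illustrative assumptions.

library(suncalc)  # package named in the Methods
library(zoo)      # assumption: used here only for rolling sums

wx <- wx[order(wx$cell, wx$date), ]

## Rainfall summed over the k days preceding each day (one reading of
## "cumulative rainfall from the previous k days")
lag_sum <- function(x, k) {
  run <- zoo::rollsumr(x, k = k, fill = NA)  # sum of current + k-1 prior days
  c(NA, head(run, -1))                       # shift so the current day is excluded
}
wx$rain3  <- ave(wx$rain, wx$cell, FUN = function(x) lag_sum(x, 3))
wx$rain10 <- ave(wx$rain, wx$cell, FUN = function(x) lag_sum(x, 10))

## Moon phase (0 = new moon, 0.5 = full moon) and day of year
wx$moon <- getMoonIllumination(date = wx$date, keep = "phase")$phase
wx$doy  <- as.integer(format(wx$date, "%j"))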
| Data analysis
Our objective was to quantify whether meteorological variables (described above) predict frog calling at a continental scale across Australia. We used boosted regression trees to assess the relationship between our binomial response variable of calling (i.e., presence/absence) and the meteorological traits used as predictor variables. Boosted regression trees are an additive regression model used for explanation and prediction (Elith et al., 2008). They are a combination of decision tree algorithms and boosting methods. Like Random Forest models, they fit many decision trees to improve the accuracy of the model. It is a modelling method growing in popularity for ecological and citizen science data (Fink et al., 2020; Hochachka et al., 2012). The model structure suits our data because boosted regression trees allow modelling of nonlinear relationships that vary based on the nature of the relationship among different groups (i.e., observed species) and variables (i.e., meteorological variables), and test whether interactions have been detected and report the relative strength of these among predictor variables (Elith et al., 2008).
Predictor variables included in our full annual cycle models (i.e., data included for the entire year, all 3 years) for each species were (1) daily maximum temperature, (2) daily minimum temperature, (3) daily humidity, (4) daily rainfall, (5) cumulative rainfall from the previous 3 days, (6) cumulative rainfall from the previous 10 days, (7) mean maximum temperature over the previous 10 days, (8) mean minimum temperature over the previous 10 days, (9) moon phase, and (10) day of year. We also included latitude and longitude so that interactions of latitude and longitude with the predictor variables of interest are included as a part of the modelling process. Moreover, we were not making spatial predictions, only accounting for potential differences (i.e., interactions among variables) in space. We also included the number of observations across all species per grid cell and day in the model as covariates to account for the biases that may influence species presence and detection rate (Brodie et al., 2020; Johnston et al., 2021). Boosted regression trees were fit using the R package dismo (Hijmans et al., 2021) gbm.step function, which uses cross-validation to estimate the optimal number of trees for each model.
We used a tree complexity of 5, a learning rate of 0.005, and a bag fraction of 0.5 based on exploratory analysis of our data and suggestions by Elith et al. (2008). Tree complexity determines the degree to which predictors may interact with each other in relation to the response variable. The learning rate determines the contribution of each tree to the model. The bag fraction is the portion of the data drawn at random and without replacement from the full training set with each iteration. We fit the model with a "Bernoulli" distribution, as our response variable is a binary presence/absence (see methods illustrated for an example species in Figure 1).
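Putting these settings together, the fit for a single species might look like the following sketch; dat is the presence/pseudo-absence table from the earlier sketch, and the predictor column names are assumptions for illustration.

library(dismo)  # provides gbm.step (and loads gbm)

predictors <- c("tmax", "tmax10", "tmin", "tmin10", "humidity",
                "rain", "rain3", "rain10", "moon", "doy",
                "lat", "lon", "n_obs")  # covariates included as well

brt <- gbm.step(data            = dat,
                gbm.x           = match(predictors, names(dat)),
                gbm.y           = which(names(dat) == "present"),
                family          = "bernoulli",
                tree.complexity = 5,
                learning.rate   = 0.005,
                bag.fraction    = 0.5)

brt$cv.statistics$discrimination.mean  # cross-validated AUC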
For each species with over 100 frog observations within a target species' range (N = 100 species, observations = 152,534; Table S1), we extracted the relative influence of each predictor variable. We scaled these values from 0 (the least important variable) to 1 (the most important variable), excluding our covariates (latitude, longitude, and number of observations per grid and day) included in the models. The scaled independence of this variable allowed us to compare both the predictor variables' importance to each other on an individual species level, as well as the strength of multiple species' relationships to the same set of meteorological drivers, while remaining population- and scale-independent across species. We also extracted predictor interactions from each model and scaled them from 0 to 1 to identify and compare relevant interactions among predictor variables. To test the robustness of our modelling process outlined above to spatial and temporal biases, for 10 of the most widely distributed species we re-ran our analysis (Figure 1), first stratified by Australia's largely contiguous climate zones (i.e., temperate, subtropical, tropical, and desert) (BOM, 2021), and then stratified by year (i.e., 2018, 2019, and 2020).
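A sketch of the scaling and interaction extraction just described, where brt is the fitted object from the previous sketch and covars lists the three covariates excluded from scaling:

covars <- c("lat", "lon", "n_obs")

ri <- summary(brt, plotit = FALSE)          # variable, relative influence
ri <- ri[!ri$var %in% covars, ]             # drop the bias covariates
ri$scaled <- (ri$rel.inf - min(ri$rel.inf)) /
  (max(ri$rel.inf) - min(ri$rel.inf))       # 0 = least, 1 = most important

int <- gbm.interactions(brt)                # pairwise interaction strengths
int_scaled <- int$interactions / max(int$interactions)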
To explore environmental predictors within a breeding season only, we repeated the above analysis for all species where day of year was the most important predictor (predictor importance = 1) (N = 67 species, observations = 80,774; Table S2). For each species, we identified the breeding season by creating a histogram of the observations grouped by day and clipping the annual data to consecutive days hosting 90% of the observations. For these models, day was not included as a variable in order to test the influence of other variables, aside from day.
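One plausible reading of this clipping rule is to retain the central 90% of each species' day-of-year distribution, sketched below; pres_days is a hypothetical integer vector giving the day of year of each observation of the focal species, and the quantile-based window is an approximation of the consecutive-days rule described above.

hist(pres_days, breaks = 366)  # observations grouped by day of year
win <- quantile(pres_days, probs = c(0.05, 0.95), type = 1)
dat_season <- dat[dat$doy >= win[1] & dat$doy <= win[2], ]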
A K-means cluster analysis (Aristeidou et al., 2017;Jain, 2010) was used to group frogs by their call patterns. K-means is an unsupervised learning algorithm used to identify patterns in data, and form groups based on those patterns. The iterative algorithm tests the Euclidian distance of each species to every group centroid. After a new species is classified, a new centroid is calculated as the mean of all species clustered in each group. The classification converges and the iterations stop when fewer species change their cluster assignment than in the previous iterations. We used the 10 predictor importance variables from the boosted regression tree analysis as inputs to a K-means cluster algorithm using the R packages cluster (Maechler et al., 2019) and factoextra (Kassambara & Mundt, 2020).
The output compiled categories of frogs based on the degree to which each variable explained calling behaviour in each species, using the scaled predictor importance from the full annual cycle boosted regression tree models. We plotted the within-cluster sum of squares for 1 to 20 groups and chose an optimal cluster size of seven by identifying where the curve begins to asymptote and performing cluster validation using silhouette width.
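The clustering step might be implemented as in the sketch below, using the two packages named above; imp is assumed to be a 100-species by 10-variable matrix of scaled predictor importances, and the random seed and nstart value are illustrative assumptions not reported in the text.

library(cluster)
library(factoextra)

set.seed(1)                                     # assumption: seed not reported
fviz_nbclust(imp, kmeans, method = "wss", k.max = 20)         # elbow plot
fviz_nbclust(imp, kmeans, method = "silhouette", k.max = 20)  # validation

km <- kmeans(imp, centers = 7, nstart = 25)     # seven groups, as chosen above
fviz_cluster(km, data = imp)                    # PCA view, as in Figure 4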
| RESULTS
We used a total of 152,534 frog occurrence records for 100 species, with a mean ± s.d. of 1510 ± 3664 records per species. The predictive power of the models was relatively high (mean cv AUC = 0.888 ± 0.01) (see Table S1 for species-specific scores). And unsurprisingly, the covariates included in the models were often of the highest relative influence (e.g., latitude, longitude, number of observations per grid and day).
Day of year was the strongest predictor of frog calling overall: in 67 out of 100 species, day of year had a predictor importance of 1, and the overall mean predictor importance among all species was 0.878 (Figure 2a). Mean of the maximum temperature over a 10-day period was the second most important variable across all species with a mean influence of 0.532, followed by mean of the minimum temperature over a 10-day period, with a mean influence of 0.473. The relationship of calling to the four temperature variables was highly heterogeneous across species.
For some species (e.g., Crinia subsignifera, Cyclorana verrucosa, and Mixophyes fasciolatus), maximum and minimum temperature over a 10-day period were the most important variables, while mean minimum temperature on the day of calling had a mean influence of 0.299, and mean maximum temperature on the day of calling had a mean influence of 0.298. Humidity had an overall importance of 0.322 and had a predictor importance of 1 in only one species (Litoria serrata). All rainfall variables showed a low relationship to calling across the majority of species, generally trending down from rainfall accumulated in the last 10 days (mean = 0.334), rainfall accumulated in the last 3 days (mean = 0.123), and rainfall the day of calling (mean = 0.008). However, some species (e.g., Austrochaperina pluvialis, Pseudophryne raveni, and Uperoleia tyleri) showed strong responses to previous rainfall. Moon phase was more significant than two of the rainfall variables (mean importance = 0.253), but not of high predictor importance for any species (see Figure S3 for species-specific results).
We found the strongest model interactions among the most significant variables: day and mean of maximum temperature over a 10-day period (mean = 1), followed by interactions between day and mean of minimum temperature over a 10-day period (mean = 0.61), and interactions between mean of maximum and minimum temperature over a 10-day period (mean = 0.61) (Figure 3).
When we repeated our analysis for those species with a strong dependence on day of year ("seasonal breeders"), restricting data to within the breeding season, we found notable differences in overall predictor importance compared to the annual species model (Figure 2b). For the 67 seasonal breeders assessed, the most important variables in predicting calling within their breeding season were minimum temperature over a 10-day period (mean importance = 0.772) and maximum temperature over a 10-day period (mean importance = 0.716). The next most significant variable was rainfall accumulated over the past 10 days with a mean predictor importance of 0.651. Fourth, humidity had an overall variable importance of 0.557, followed by mean maximum and mean minimum temperature on the day of calling (mean importance = 0.505 and 0.487, respectively). Again, moon phase was of moderate significance (mean = 0.429), and showed a stronger relationship to calling than rainfall over the previous 3 days and rainfall on the day of calling (mean importance = 0.238 and 0.018, respectively) (see Figure 4 for species-specific results).
We found no distinct grouping of frog species based on their calling with respect to the predictor variables. Frogs did not cluster into groups of species based on the strength of their relationship to predictor variables (i.e., combined effects of recent rainfall and day of year or combined effects of temperature and daily rainfall).
Rather, the frogs fell along a spectrum of shared predictor importance patterns (Figure 4 [for an interactive figure with species names see Appendix S1]). The variance explained by the clustering of frogs based on predictor variables indicated the relationship between calling and the other explanatory variables produced groups with as much in common within clusters as between clusters.
| DISCUSSION
Using more than 150,000 citizen science observations from 100 species, we demonstrate the importance of day of year and temperature thresholds as predictors of calling behaviour in Australian frogs. Day of year was by far the most important variable predicting calling behaviour at the scale examined, with a maximum predictor importance (PI = 1) in 67% of species examined. Conducted at a continental scale over multiple years, our analysis revealed strongly day-driven seasonal trends in calling across Australian frogs, but also unique species-specific responses to meteorological variables.
One of the reasons for the overriding importance of day of year, a proxy for photoperiod, may be that animals have evolved to respond to photoperiod as a harbinger of other important conditions (i.e., seasonal shifts in temperature and rainfall) (Bradshaw & Holzapfel, 2007). Indeed, for ectotherms, photoperiod tends to be more important to phenology than temperature because it is more consistent (Gotthard, 2001). Our results suggest that this may also be the case for Australian frogs. Although temperature and photoperiod have both been considered significant to calling (Duellman & Trueb, 1994), the importance of photoperiod may be higher when investigating calling on broad temporal scales such as in this study (36 months). In an 18-month study period in Uruguay, photoperiod was similarly recorded as the main predictor of frog calling (Canavero & Arim, 2009). Within a season, some environmental variables may be important (see Oseen & Wassersug, 2002; Saenz et al., 2006; Weir et al., 2005), but across multiple seasons, photoperiod may be the predominant driver of calling behaviour (Both et al., 2008; Canavero & Arim, 2009; Schalk & Saenz, 2016). And the relative strength of photoperiod may depend on the interaction of photoperiod with other variables, as suggested by our analysis where we found the importance of day of year was strongly correlated with temperature (Figure 3).
FIGURE 2 Scaled importance of predictor variables' influence on calling (a) for all 100 species, and (b) for the 67 species with a strong dependence on day of year ("seasonal breeders"), restricting data to within each species' breeding season. Panel titles: Annual all-species model; Seasonal model (day species only).
While we did not explicitly model the "timing" of frog calling, but rather the presence/absence of calling on a given day, our results suggest that the timing of frog calling (i.e., a proxy for the breeding season) is strongly seasonal in most frog species.
Therefore, these results can be used to quantify approximate breeding seasons for Australian frogs, which are often necessary for biodiversity monitoring (e.g., Roth et al., 2014; Wilson & Bart, 1985). Additionally, while we present the strength of
Surprisingly, we found very little influence of rainfall on the probability of calling. The low significance of all three rain variables underscores the findings in some regional Australian studies, where the probability of calling was uncoupled with rainfall (Lemckert & Grigg, 2010), and more related to lagged rainfall (over the previous week) than rainfall on the day of calling (Heard et al., 2015).
Likewise, we found some evidence that cumulative rainfall within a 10-day period was more important than cumulative 3-day rainfall, which, in turn, was more important than mean rainfall on a given day. A negative association between rainfall and calling has even been demonstrated in some temperate environments (Heard et al., 2015; Oseen & Wassersug, 2002). Possible reasons for minimal correlation with rainfall in frog species include increased risk of eggs washing away for lotic breeding frogs (Heard et al., 2015), noise interference from the precipitation (Dorcas & Foltz, 1991; Henzi et al., 1995; Saenz et al., 2006), and competition avoidance with other frog species (Duellman & Pyles, 1983; Heard et al., 2015). Similarly, a global meta-analysis of amphibian phenology also found phenological shifts in calling across taxa were associated more strongly with temperature than precipitation (Ficetola & Maiorano, 2016).
FIGURE 3 Heatmap of the scaled strength of interactions among predictor variables for the annual all-species model. Interactions were strongest among the variables with the strongest relationship to frog calling (Figure 2).
Overall, there is evidence both for and against a positive correlation of rainfall and lagged rainfall with calling patterns, varying by species, time scale, and ecoregion (Canavero & Arim, 2009; Heard et al., 2015; Lemckert & Grigg, 2010; Oseen & Wassersug, 2002; Schalk & Saenz, 2016). Indeed, although we found overall weak support for rainfall, this varied among species, and rainfall over the past 10 days was the most important predictor for 15% of species during the breeding season (e.g., Limnodynastes dumerilii, Litoria tornieri, and Neobatrachus sudellae).
The coarse spatial scale of our study is also likely to have played a role in the strength of the effect of photoperiod over meteorological conditions in our results. BOM data are already interpolated to 5 km² grid cells (BOM, 2020) before we smoothed them further in our analysis. Weather is highly variable over space and time and short-duration extreme precipitation (lasting an hour or less and covering only a few km²) cannot be well documented by rain gauge networks (Lengfeld et al., 2020). Weather variables can be difficult to reliably interpolate from weather station observations, especially where weather stations can be thousands of km apart, such as throughout much of arid and semi-arid Australia (Peña-Arancibia et al., 2013). Conversely, at a finer scale, pairing on-site data loggers with observations may result in stronger associations between meteorological variables and calling (Dorcas et al., 2009; Weir et al., 2005). Together, these results suggest the importance of considering temporal and spatial scale in predicting phenological patterns among species. Improvements in meteorological interpolation may allow more accurate, large-scale analyses in the future. The strength of the macroscale, aggregate analysis presented here does not lie in revealing the influences of extremely localized factors that surely do impact frog survival (Scheffers et al., 2014) and successful reproduction (Blaustein et al., 1999; Harkey & Semlitsch, 1988; Kiesecker & Blaustein, 1998; Watkins & Vraspir, 2006). Our results trend towards common species and broad-scale patterns.
FIGURE 4 PCA plot showing K-means clusters of all species grouped by the similarity of their response to predictor variables. Clusters were not recovered as distinct groups, illustrating the continuum of variable importance across frog species.
Sampling biases are common in citizen science data. In this study, urban and temperate areas were disproportionately well sampled compared with dry and remote regions. Participants most often record in the evening and near areas of high population density (Callaghan et al., 2020; Liu et al., 2021). In both model outputs, burrowing and terrestrial frogs (such as Austrochaperina pluvialis, Austrochaperina robusta, and Heleioporus eyrei) comprised the bulk of the outliers (high and low predictor importance) across all variables. As a result of small ranges or infrequent detection (i.e., large periods of aestivation and often remote locations), these frog species also had fewer grid cells and observations included in analysis and, in some cases, were disproportionately impacted by our spatiotemporal subsampling (Tables S1-S2). For example, while citizen science participants sample frogs all year round in many parts of New South Wales, this is less true in more remote parts of Australia, potentially influencing our results. When we stratified a subset of species to climate zone, predictor importance varied across climate zones. For species with records in both zones, rainfall variables were often more important in subtropical than temperate zones, while humidity was more important in temperate than subtropical zones (Figure S5), adding to some of the variation. Additionally, when we stratified the same subset of species by observation year, predictor importance varied somewhat from year to year (Figure S6), suggesting the importance of using robust multi-year datasets to investigate phenology. While our work was largely focused on temporal differences, future work should test how breeding cues differ in space. Our exploratory analysis suggests that macro-evolutionary constraints (i.e., different evolutionary responses to different climate zones) may influence breeding cues in frogs.
Understanding how meteorological conditions influence the onset of phenological events, such as breeding, is particularly important considering the rapid changes in environmental conditions and stability throughout most of the world, and how important breeding is to species survival, population dynamics and resilience. While we focus on species-specific responses, community-level data (e.g., species richness) could also better inform risk assessment models. For example, by understanding how the underlying species diversity in space leads to co-occurrence and competition, we can uncover frog communities facing shared risks. Indeed, future work should test how the meteorological determinants investigated here influence spatiotemporal co-occurrence. To paraphrase Gwinner and Helm (2003): circannual rhythms are intimately involved in the seasonal organization of breeding behaviour, providing the substrate onto which seasonal environmental factors act. The large volume of data across broad spatial and temporal scales necessary for elucidating phenological patterns is rapidly becoming available through citizen science (Bird et al., 2014; Hochachka et al., 2012; Sullivan et al., 2014). Indeed, citizen science data are increasingly being used to document phenological changes, including spatiotemporal changes in butterfly (Soroye et al., 2018) and bird migration (Hurlbert & Liang, 2012), flowering time (Gonsamo et al., 2013), and patterns in bird call phenology (Dickinson et al., 2010; Sullivan et al., 2014), and now frog calling behaviour. To the best of our knowledge, our research represents substantially more data, from a greater number of species, and over a greater timespan than any frog call phenology study to date. While the availability of freshwater breeding sites is vital for frogs, at the scale examined, calling may not be as tightly linked to rainfall for all frog species as is often assumed. The correlation we recovered between calling and temperature, particularly within breeding seasons, suggests that breeding may shift with climate change (Gibbs & Breisch, 2001), and this has the potential to further affect many frog species' breeding success and survival. Our results illustrate the importance of day of the year as a strong, but not isolated, predictor of frog calling behaviour at a macroecological scale.
ACKNOWLEDGEMENTS
We would like to thank the Citizen Science Grants of the
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available here: https://doi.org/10.5281/zenodo.7042152.
"year": 2022,
"sha1": "48b7cbc8743516657b25577391488bf0d8fbd537",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ddi.13634",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "979bae8d048ad4733a4e764dacd1603eda805a95",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
Physical Properties of Wall Clads Produced From Mixture of Saw Dust and Pure Water Sachet
Abstract: (16.5 x 36.5 x 145.5) mm clads were produced from the mixture of sawdust (SD) of three indigenous wood species (Milicia exelsa, Ceiba petandra and Cola gigantia) and nylon sachet (NS) of "pure water". From each of the wood species, clads were produced at three different SD/NS ratios of 40:60, 30:70 and 20:80. The effects of wood species and mix ratio on the physical properties (water absorption, thickness swelling and linear expansion) of the clads were investigated by immersing them in water at a temperature of 20 °C for 24 hours. NS was washed, dried, weighed as appropriate and allowed to melt at 190 °C in the melting chamber of an existing locally produced Wood Plastic Composite Extruder (WPCE) of 0.8 kg/h capacity before adding SD, which had earlier been dried to a moisture content of 10% and sieved to size 10 mm. The mixture was then fed into the WPCE kneading chamber for thorough kneading into slurry form before extruding into a (20 x 40 x 150) mm mould, which was hot pressed at 120 °C and 1.12 N/mm² to a thickness of 16.5 mm, breadth of 36.5 mm and length of 145.5 mm. Samples were thereafter cut to specific dimensions in accordance with British Standard D373. Results show that clads produced from Milicia exelsa at an SD/NS mix ratio of 20:80 were relatively low in water absorption, thickness swelling and linear expansion, making them suitable for protecting buildings in waterlogged areas.
I. INTRODUCTION
Waste generation is an important aspect of living which cannot be banished but can only be managed. These wastes degrade the urban environment and reduce its aesthetic value. As observed by [1], wastes produce offensive odours during the rain and pollute the air with smoke when burnt uncontrollably.
They also constitute health hazards in themselves if they are not disposed of promptly or properly, and they become breeding places for worms and insects, as also observed by [2]. Wood and polymeric wastes, whose pictures are shown in Plates 1 and 2, are two major wastes identified as common and posing a great threat to healthy living in Nigeria according to [3] and [4].

[Plate 2: Heap of sawdust (SD).]

Nigeria is one of the countries with a large number of polymeric wastes made from the polyethylene water sachet popularly called "Pure Water" [5]. The pure water sachet (PWS) is non-biodegradable, and as such belongs to the single-use plastic group. It can, however, be recycled by crafting it into other useful products, but unfortunately it mostly ends up in landfills or littering the ground after use [6]. [7] discovered that sachet water was introduced to the Nigerian market around 1990 and started attracting nationwide attention from the year 2000, when the country's National Agency for Food and Drugs Administration Control (NAFDAC) registered 134 different packaged water producers. This led to the emergence and proliferation of private water enterprises that operated side by side with the government-owned public water utilities, resulting in an increase in their waste. As reported by [5] in the work of [8], about 70% of Nigerian adults drink at least a sachet of water per day. Unfortunately, the nylon used to package the sachet water is poorly disposed of, leading to environmental pollution and outbreaks of diseases [9]. As also observed by [10], sachet water is all over Nigeria these days; after drinking the water, the containers (plastic materials) are simply dumped anywhere. The same goes for wood waste, popularly called sawdust (SD) in Nigeria. Wood dust or waste is the product of wood shavings from machining wood; it refers to the tiny-sized and powdery waste produced by the sawing of wood [11]. Wood processing and improper disposal of its wastes often result in the emission of toxic (when burnt) and non-toxic particulates, pollution of inland waters, and may also contribute to health hazards [12] and [13]. The sawmill industry in Nigeria keeps increasing, and as such the waste from processed wood is rising without adequate measures for its disposal [14]. The deleterious effects of wood wastes can be curtailed by incorporating these items in the production of value-added composite products [15] and [16]. Many researchers have used plastic, especially polyethylene (PET), extensively as binder in the production of wood plastic composites [4], [16], [17], [18], [19]. Lesser attention has been paid to nylon obtained from sachet pure water, which also is a major source of municipal waste according to the research carried out by [5]. SD can be mixed with nylon sachet under specific heat to produce clads. Clads provide a degree of thermal insulation and fire resistance to buildings, and also improve the aesthetic appearance of farm structures according to [20]. According to [21], cladding is a type of "skin" or extra layer on the outside of a building. It can be attached to a building's framework or an intermediate layer of battens or spacers. It is mainly used to stop wind and rain from entering the building. It is also used to make a building's exterior look more attractive. Clads are made from wood, metal, block, vinyl, and composite materials that can include aluminium, wood, blends of cement and recycled polystyrene, and wheat/rice straw fibres. This prompted the research on how clads can be produced from waste materials using simple technology.
Milicia exelsa, Ceiba petandra and Cola gigantia are common indigenous trees in the south-western part of Nigeria, hence their selection for this research. The physical properties of clads produced from the sawdust of these trees, using water sachet nylon as the binder, will enable us to know the type of tree and the SD/NS mix ratio most suitable for protecting farm structures against adverse weather conditions.
II. MATERIALS AND METHODS
Sawdust (SD) from three wood species (Milicia exelsa, Ceiba petandra and Cola gigantia) was sourced from Olukayode Sawmill in Akure, Nigeria, while the nylon sachet (NS) was obtained from the male hostel refuse bin of the Federal University of Technology, Akure. The study was carried out at the Farm Power and Machinery Workshop of the Agricultural and Bio-Environmental Engineering Department of the Federal University of Technology, Akure. The moisture content of the SD was reduced to 10% by exposing it to the sun, using a hygrometer to monitor the reduction in moisture content. This was done in order to reduce the moisture content of the wood cell lumen and give room for diffusion of the liquid NS. The SD was then sieved to a particle size of 2.00 mm for thorough and homogeneous mixing of SD and melted NS. Three SD/NS mix ratios were chosen as 40:60, 30:70 and 20:80 after initial trial tests for the three wood species under investigation. For the first group, Milicia exelsa, NS was weighed to 210 g using an electronic digital weighing machine, while the SD was weighed to 84 g, 63 g and 42 g, representing 40%, 30% and 20% of the 210 g of NS. This procedure was repeated for Ceiba petandra and Cola gigantia respectively. For the first group, Milicia exelsa, 210 g of NS was melted inside the melting and mixing chamber of an existing WPC extruder at a constant temperature of 190 °C supplied by a 3.5 kW heat band. Thereafter, 84 g of SD was added, stirred and allowed to fall under gravity into the extruding chamber of the machine, still maintaining the temperature at 190 °C. The extruder kneaded the mixture thoroughly at a machine speed of 277 rpm for 5 minutes, after which it was collected into a mould of size 16.5 mm x 36.5 mm x 145.5 mm. This was hot pressed at 120 °C and 1.12 N/mm² to a thickness of 16.5 mm and left for 15 minutes to allow solidification before removal. An infrared thermometer and a locally fabricated calibrated hydraulic press were used to measure the temperature and the thickness of the clad inside the mould while pressing it. This procedure was repeated for the other species of wood, Ceiba petandra and Cola gigantia respectively. Three sets of cladding were produced (Figure 2.0) for each group, giving a total of nine products, which were subjected to physical properties investigation.

A. Physical Properties Test

The physical properties tested include water absorption, thickness swelling and linear expansion. The instruments used to carry out these tests were an electronic digital weighing balance, ruler, vernier calliper, micrometer screw gauge, stopwatch, hacksaw, plastic bowls and calculator. Samples to be tested were trimmed to conform to the dimensional size of 16.5 mm x 36.5 mm x 145.5 mm from each of the flat platen claddings in accordance with [22]. The initial weight, thickness and length of the clads were taken as W0, T0 and L0 respectively before immersing them in water at a temperature of 20 °C. These parameters were measured at the end of the first 2 hours of water immersion, then at the 12th hour and finally at the end of the 24 hours as W1, T1 and L1, representing the final weight, final thickness and final length of the clads respectively. According to [4], the water absorption (W.A.), thickness swelling (T.S.) and linear expansion (L.E.) were computed as percentage changes of the respective quantities, i.e. W.A. (%) = 100 (W1 - W0)/W0, T.S. (%) = 100 (T1 - T0)/T0 and L.E. (%) = 100 (L1 - L0)/L0. Data collected were statistically analyzed using a 3 x 3 factorial experimental design in a Complete Randomized Design (CRD) using SPSS software version 13.0. Analysis of variance (ANOVA) was used to establish the significance of the effects of the independent variables (wood type and mix ratio) on the dependent variables (W.A., T.S. and L.E.).
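As a quick illustration of the three percentage formulas above, a minimal Python sketch (not the authors' SPSS workflow; the specimen values used here are hypothetical):

```python
# Minimal sketch of the W.A., T.S. and L.E. percentage calculations; the
# specimen values below are hypothetical, for illustration only.
def percent_change(initial, final):
    """Percentage change of a measured quantity after water immersion."""
    return 100.0 * (final - initial) / initial

w0, w1 = 52.0, 52.8      # weight (g) before and after soaking
t0, t1 = 16.5, 16.6      # thickness (mm)
l0, l1 = 145.5, 145.7    # length (mm)

print(f"W.A. = {percent_change(w0, w1):.2f} %")  # 1.54 %
print(f"T.S. = {percent_change(t0, t1):.2f} %")  # 0.61 %
print(f"L.E. = {percent_change(l0, l1):.2f} %")  # 0.14 %
```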
III. RESULTS AND DISCUSSION

A. Physical properties
The physical properties investigated were the water absorption, the thickness swelling and the linear expansion.
Water absorption
As shown in Figure 3.0, at the end of the first 2 hours of the water immersion test, the water absorbed by clads produced from the three tree species ranged from 0.85% to 4.49%. Samples obtained from Milicia exelsa absorbed the least water: 1.47%, 0.85% and 0.88% for SD/NS mix ratios of 40:60, 30:70 and 20:80 respectively. This was followed by Ceiba petandra with 2.80%, 2.29% and 1.60%, while Cola gigantia had the greatest percentages of water absorbed: 4.49%, 2.88% and 1.69% for the same mix ratios respectively. The same trend was observed at the end of the 10 hours of the immersion test, as shown in the bar chart of
Linear expansion
The linear expansion (L.E.) of clads produced from the sawdust of Milicia exelsa, Ceiba petandra and Cola gigantia at the end of the 2 h, 10 h and 24 h water immersion tests ranged from 0.03% to 1.40%, as expressed in the bar charts of Figures 3.6, 3.7 and 3.8 respectively. At the end of the 2 hours of the water immersion test, clads produced from Milicia exelsa had the least linear expansion for SD/NS mix ratios of 40:60, 30:70 and 20:80, at 0.14%, 0.10% and 0.03% respectively. This was followed by the ones produced from Ceiba petandra, with values of 0.28%, 0.21% and 0.10% at the same SD/NS mix ratios. Clads produced from Cola gigantia had the highest L.E. It was generally observed for all three wood species that the higher the NS content, the lower the water absorption, thickness swelling and linear expansion, while the higher the SD content, the higher the water absorption, thickness swelling and linear expansion. This agrees with the related work of [19] and [4] and is attributable to the hydrophobic nature of thermoplastics generally, of which nylon is a family, as observed in the work of [16] and [6] on the properties of wood plastic composites.
As shown in Tables 1, 2 and 3, the statistical analysis of variance conducted on the independent factors (wood type and mix ratio) and dependent factors (W.A., T.S. and L.E.) at the 5% level of probability revealed that the wood species and the time a specimen spent in water have significant effects on its water absorption, thickness swelling and linear expansion throughout the 2 hours, 10 hours and up to 24 hours. The mix ratio, in contrast, had a significant effect on the specimens' water absorption only within the first 2 hours spent in water, and no longer at the 10th and 24th hour. This was different for T.S.: though small, the effect of the mix ratio on the specimens' T.S. was significant at the end of the 2 hours, 10 hours and 24 hours of the water immersion test. For L.E., the SD/NS mix ratio had a significant effect only after the first 2 h of the water immersion test and was no longer significant at the 10th and 24th hour. This implies that the type of wood and the SD/NS mix ratio have significant effects on the physical properties of clads when exposed to rainfall.
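For readers without SPSS, the same kind of two-factor analysis can be sketched in Python with statsmodels; the data frame, column names and response values below are placeholders, not the study's data:

```python
# Hedged sketch of a 3 (wood) x 3 (mix) two-way ANOVA with statsmodels;
# the response values are synthetic placeholders, not the measured data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
woods = ["Milicia", "Ceiba", "Cola"]
mixes = ["40:60", "30:70", "20:80"]
rows = [(w, m) for w in woods for m in mixes for _ in range(3)]  # 3 replicates
df = pd.DataFrame(rows, columns=["wood", "mix"])
df["wa"] = rng.normal(2.0, 0.5, len(df))  # placeholder water absorption (%)

model = ols("wa ~ C(wood) + C(mix) + C(wood):C(mix)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))    # p < 0.05 -> significant effect
```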
IV. CONCLUSION
The investigation of the effect of three indigenous wood species at three SD/NS mix ratios on clad production was successfully carried out. Clads produced from Milicia exelsa and Ceiba petandra, in that order, were both outstanding after exposure to water for 24 hours. They were more dimensionally stable than clads produced from Cola gigantia. However, the SD/NS mix ratio of 20:80 of Milicia exelsa was more resistant to water uptake, thickness swelling and linear expansion than the other mix ratios of 30:70 and 40:60. From this study, it is important to consider the type of wood and its mix ratio with the binder before embarking on the production of clads. It is also seen from this research that clads produced from the sawdust of
"year": 2022,
"sha1": "45c3bf380d8088b0af57c0eb2edfeac72c922860",
"oa_license": null,
"oa_url": "https://doi.org/10.31871/wjir.12.3.4",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cdfb216f39650a02bf06dd81aabd6f40b59b3c71",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
Upgrade of the SPECIES beamline at the MAX IV Laboratory
The transfer and upgrade of the SPECIES beamline and its endstation to the 1.5 GeV storage ring at the MAX IV Laboratory is reported.
Introduction
The SPECIES beamline is a soft X-ray undulator beamline on the 1.5 GeV storage ring at the MAX IV Laboratory in Lund, Sweden. The beamline covers the photon energy range from 30 to 1500 eV with variable polarization. The X-rays are generated using an elliptically polarizing undulator, EPU61 (Wallén et al., 2014), and monochromated with a plane-grating monochromator illuminated with collimated light [cPGM (Follath et al., 1998)]. The beamline was originally built on the MAX II storage ring, which was decommissioned at the end of 2015. The entire beamline and the endstations were then transferred to the new MAX IV facility, where user operation began in 2019. The exact details of the beamline are mostly unchanged from the previous configuration in 2015 (Urpelainen et al., 2017), and only the upgrades and changes are discussed in this paper.
The beamline offers two branches: branch A is dedicated to ambient pressure X-ray photoelectron spectroscopy (APXPS) and branch B to resonant inelastic X-ray scattering (RIXS). The main technique on the APXPS branch is X-ray photoelectron spectroscopy (XPS), but it also has capabilities for X-ray absorption spectroscopy (XAS) and near-edge X-ray absorption fine-structure (NEXAFS) experiments in total or partial electron yield mode. The RIXS branch can also perform XAS and NEXAFS measurements by recording the emitted electrons or photons. While the APXPS endstation is also capable of measurements in the UV range, and thus qualifies as an ambient-pressure UV photoelectron spectroscopy (APUPS) instrument, we will only refer to it as the APXPS endstation throughout this paper. A schematic layout of the beamline is presented in Fig. 1, including distances between optical elements.
In the surface science community, ultrahigh vacuum (UHV) XPS is a well known and trusted method for obtaining detailed information on the electronic structure of surfaces as well as their elemental and chemical composition (Hüfner, 2013). While exposing surfaces to gases is also possible in UHV systems, higher pressures are normally inaccessible. In contrast, APXPS makes it possible to study materials and their properties under conditions that more closely mimic those occurring in real-world processes and phenomena (Bluhm et al., 2007; Ogletree et al., 2009; Schnadt et al., 2020). Experiments at ambient pressure (in the context of this paper, ambient pressure refers to pressures of ~1 mbar) pose a serious challenge, however, as the electron mean free path in gas is of the order of millimetres (Ogletree et al., 2002). The method employed at SPECIES is the 'Lund cell' approach (Knudsen et al., 2016), which uses the cell-in-cell concept (Schnadt et al., 2012; Tao, 2012; Starr et al., 2013). Here, the sample environment is created inside an ambient pressure (AP) cell, which itself is placed inside a UHV vacuum chamber. The concept enables changing the sample environment swiftly from UHV conditions to ambient pressure, since the AP cell can be removed in vacuum. Another advantage is the possibility of fast exchange of gases due to the small volume of the cell. Typically in APXPS setups, an aperture of the electron analyser is placed very close to the sample surface in order to minimize the distance the electrons have to travel in the high-pressure region, thereby increasing the transmission of electrons.
The SPECIES beamline offers the possibility of conducting APXPS measurements using low photon energies, which opens avenues of research that might previously have been neglected. In particular, the ability to measure valence band spectra in elevated pressure regimes is interesting for correlating different phenomena on the surface, such as the adsorption of various molecules, which might not give a very strong signal in the core levels. However, the use of low photon energies (< 100 eV) will ultimately result in low kinetic energy photoelectrons as well. As was recently reported by Held et al., the transmission of low kinetic energy electrons through a layer of high-pressure gas can be very low (Held et al., 2020). Thus, one may have to take into account the consequence that high pressure has on the electron transmission and make compromises. At SPECIES, the very high flux at low photon energies is an advantage that helps even in situations where the transmission might otherwise be low.
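To make the pressure penalty concrete, a minimal sketch of Beer-Lambert attenuation for electrons travelling through the gas layer; the scattering cross section below is only an assumed order of magnitude, not a measured value:

```python
# Minimal sketch of electron attenuation through the high-pressure region,
# assuming simple Beer-Lambert scattering; the cross section is only an
# illustrative order of magnitude, not measured data.
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gas_transmission(pressure_mbar, path_mm, sigma_m2=1e-19, temp_K=300.0):
    """Fraction of electrons surviving a gas path without scattering."""
    n = (pressure_mbar * 100.0) / (K_B * temp_K)  # number density, m^-3
    return np.exp(-n * sigma_m2 * path_mm * 1e-3)

# E.g. 1 mbar over 0.6 mm of gas in front of the analyser aperture:
print(gas_transmission(1.0, 0.6))  # -> roughly 0.2 with these assumptions
```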
The APXPS setup has been described previously (Schnadt et al., 2012; Knudsen et al., 2016; Urpelainen et al., 2017), and here we only give the details of the upgrades that have been carried out on the system in connection with the transfer to the new facility. The paper is organized as follows: details of the upgrade of the APXPS endstation, details of the RIXS endstation, the performance of the beamline on the new 1.5 GeV storage ring, and examples of research using the APXPS endstation.
Upgrade of the APXPS endstation
The APXPS endstation is a surface science instrument equipped with a SPECS Phoibos 150 NAP electron energy analyser. The layout of the endstation is shown in Fig. 2. The endstation has two manipulators for sample movement. The UHV manipulator is intended for measurements without the AP cell, under UHV conditions. The UHV manipulator can be used for in-vacuum sample preparation and characterization inside the preparation chamber, which is situated above the analysis chamber. The preparation chamber has permanent instruments for Ar+ sputtering, sample heating, low-energy electron diffraction (LEED) characterization of sample surfaces, and for dosing gases up to pressures of ~10⁻⁵ mbar. In addition, the preparation chamber has ports available for infrequently used equipment, such as evaporation sources, or user equipment. The additional ports are behind gate valves, allowing equipment installation without the need for venting the whole preparation chamber.
The AP cells are installed on another manipulator, placed horizontally and facing the analyser. This placement makes it possible to keep the AP cell and its manipulator isolated from the vacuum of the analysis chamber, thus making maintenance, repairs and bake-outs faster and more accessible, since only a smaller chamber is vented. For measurements, the AP cell has to be docked onto the analyser using its own manipulator. Once docked to the analyser, it is locked in place with a bayonet mechanism, which keeps the cell in place but allows sample movement. Proper sample movement is important in order to characterize several areas on samples and for mitigating X-ray-induced beam damage. Samples are transferred from the analysis chamber into the AP cell with a transfer wobblestick, which is also used to operate the door that seals the volume within the cell and keeps high-vacuum conditions outside it. The sample holders have the typical SPECS/Omicron flag-type shape with a modified thermocouple design.

[Figure 1: Layout of the SPECIES beamline showing the most important optical components. The first mirror (M1) collimates the beam vertically, with focusing done using the third mirrors (M3) (only vertically for APXPS, and vertically and horizontally for RIXS). Refocusing mirrors (M4) are used to focus on the optimum spot for the samples.]
All AP cells have two gas inlet lines. Both lines can be connected to a dedicated gas system, where several gases can be installed simultaneously. The flows from each gas line are independently controlled using mass flow controllers. This allows mixing and accurate control of the gas composition, which is fed into the cell. Vapours from liquid sources can be fed into the cell using, for example, leak valves. Special sources can also be installed (for example in the case of the atomic layer deposition cell, see Section 2.3).
During the installation and commissioning phase of the beamline at the new MAX IV 1.5 GeV storage ring, several improvements were made to the APXPS endstation. The electron spectrometer is a commercial system purchased from SPECS Surface Nano Analysis GmbH, Berlin, Germany. The NAP 150 spectrometer houses a differentially pumped electrostatic lens system, allowing ambient pressure measurements while still keeping the detector and hemisphere at high vacuum. The spectrometer was originally equipped with a CCD camera-based electron detection system. As part of the beamline transfer, this detector system was replaced with a faster acquisition scheme involving microchannel plates (MCPs) and a delay-line detector (DLD). The detector is a 3D-DLD4040-150 from Surface Concept GmbH and consists of MCPs in a chevron stack and two layers of delay lines (X and Y) in a meander structure. The electron signals from the delay line are analysed by the readout electronics, including a constant fraction discriminator (CFD) for pulse shaping and a fast time-to-digital converter (TDC) for time stamping. All detector electronics are housed in a single rack-mounted electronics box. The active area of the detector is approximately 40 mm in diameter, which is converted by the electronics into an image with a size of about 800 × 1000 pixels. The binning of the detector image can be changed in the software to reduce the size of the saved image if necessary. The detector is capable of reaching count rates in the MHz range before saturation is reached.
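For illustration, a sketch of how a delay-line readout converts the two pulse arrival times on one meander into a hit coordinate; the propagation speed and time-sum values below are assumed, not the Surface Concept specifications:

```python
# Sketch of delay-line position decoding; v_eff and the time sum are
# assumed illustrative values, not the real detector calibration.
def dld_coordinate(t_a_ns, t_b_ns, v_eff_mm_per_ns=1.0):
    """Hit position (mm, from the line centre) from the two end arrival times."""
    return 0.5 * v_eff_mm_per_ns * (t_a_ns - t_b_ns)

def plausible_hit(t_a_ns, t_b_ns, t_sum_ns=80.0, tol_ns=2.0):
    """The time sum is nearly constant for real hits; use it to reject noise."""
    return abs((t_a_ns + t_b_ns) - t_sum_ns) < tol_ns

# One event: pulses reach the two ends of the X meander at 35 ns and 45 ns.
if plausible_hit(35.0, 45.0):
    print(dld_coordinate(35.0, 45.0))  # -> -5.0 mm from the centre
```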
The DLD system allows synchronization of the detector with any external pulse source, or gating it to inhibit electron detection. As an example, we have recently demonstrated that it is possible to trigger the detector at the same time as pneumatically actuated valves in order to achieve pulses of reactant gas. Such synchronization is essential for precise gas control as well as for data acquisition, which is required for accurately aligning gas pulses temporally to each other, while also providing a convenient platform to program advanced experiment automation (Redekop et al., 2020).
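A minimal sketch of the time-stamp bookkeeping behind such gas-pulse experiments: electron events are re-referenced to the most recent valve opening and histogrammed. All timestamps here are mock data:

```python
# Sketch of aligning time-stamped electron events to pulsed-valve openings;
# all timestamps below are mock data, not a beamline acquisition.
import numpy as np

def events_relative_to_pulses(event_t, pulse_t, window):
    """Time of each event relative to the most recent valve pulse (< window)."""
    idx = np.searchsorted(pulse_t, event_t, side="right") - 1
    ok = idx >= 0                       # discard events before the first pulse
    rel = event_t[ok] - pulse_t[idx[ok]]
    return rel[rel < window]

pulses = np.arange(0.0, 10.0, 2.0)                 # valve opens every 2 s
events = np.sort(np.random.uniform(0, 10, 5000))   # mock electron timestamps
rel = events_relative_to_pulses(events, pulses, window=2.0)
hist, edges = np.histogram(rel, bins=100)          # pump-probe style histogram
```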
The differentially pumped lens system of the analyser has also been upgraded with a new version of the SPECS pre-lens (Release 3). An important feature of the pre-lens is that it should create a leak-tight connection with the AP cell. The new pre-lens features special guiding elements that ensure that the AP cell is always placed in the same position with respect to the pre-lens. During the upgrade, the pre-lens electronics were replaced with newer, modernized versions. The Release 3 version of the pre-lens offers approximately one order of magnitude higher transmission at a resolution setting similar to that of the Release 2 version (SPECS, 2020). Further developments of the endstation to enhance its operation are ongoing. The gas mixing system will consist of gas panels with independent gas lines for the most common gases, such as O2, CO, CO2, H2, N2 and Ar. Each gas line will include mass flow controllers and pumping capabilities for easy exchange of gas bottles. For gases that need them, there are also gas purifiers and the capability of using condensers to remove impurities such as water. Independent gas lines will also reduce the potential for cross-contamination. The panels will be incorporated into the MAX IV gas standard and control logic, which allows for a simple, remotely operated system.
A second load-lock chamber has been designed, which will be installed next to the vacuum chamber where the AP cell typically is located. The idea of the second load-lock is to have a small volume that is detachable from the load-lock itself that can be taken, for instance, into a glove-box filled with an inert gas for inserting samples that are sensitive to air. This so-called controlled atmosphere load-lock allows the user to have a well controlled sample transfer atmosphere from the sample loading to the measurement without having to expose the sample to air or vacuum.
Several ambient pressure cells have been constructed for use at the APXPS endstation. These include one general-purpose cell (standard cell) for the most common measurements that do not require any special conditions, one cell dedicated to measurements with corrosive gases such as sulfur-containing gases (sulfur cell), and one cell dedicated to atomic layer deposition research (ALD cell). The important parameters of the present cells are tabulated in Table 1. All cells have the same volume of about 200 ml, excluding the gas tubes.
All cells share the same window design for the incoming synchrotron radiation. Different windows are available depending on the requirements of the experiment. Currently, there are windows of 200 nm-thick Si3N4 with a thin, protective Al coating (from Silson Ltd, UK) or of pure Al (from Luxel Corporation, USA). The calculated transmission curves for these materials differ substantially in the low-energy range, as can be seen in Fig. 3. The calculations are performed using the Center for X-ray Optics (CXRO) database (Henke et al., 1993).
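The transmission itself follows a simple Beer-Lambert law; a sketch, with placeholder attenuation lengths that should be replaced by the tabulated CXRO values:

```python
# Beer-Lambert sketch of window transmission; the attenuation lengths are
# placeholders to be read from the CXRO tables cited above, not
# authoritative numbers.
import numpy as np

def transmission(thickness_nm, att_length_nm):
    """T = exp(-t / lambda) for a free-standing film."""
    return np.exp(-thickness_nm / att_length_nm)

# Illustrative only: look up lambda(E) for Al or Si3N4 at henke.lbl.gov
for E_eV, lam_nm in [(60.0, 400.0), (300.0, 150.0), (700.0, 500.0)]:
    print(E_eV, transmission(200.0, lam_nm))
```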
The cone through which electrons enter the analyser is identical in each cell. The size of the cone aperture ultimately defines the maximum achievable pressure of the cell. With a 0.3 mm diameter the maximum pressure is about 20 mbar, while with the other cones available at the beamline (0.5 mm and 1.0 mm diameters) the maximum pressures are lower. Conversely, larger cone apertures yield higher electron transmission in situations where the footprint of the beam is large (for example, when using an X-ray anode as a light source). The accuracy of the sample positioning in front of the aperture is also dependent on its diameter. With a smaller cone aperture, the sample has to be placed closer to the cone, which yields stricter requirements for the alignment of the sample with respect to the analyser and the synchrotron beam. A rule-of-thumb is to place the sample approximately twice the aperture diameter away from the cone aperture. At this distance, the pressure at the sample surface remains approximately homogeneous despite the pumping effect through the cone (Bluhm et al., 2007). This distance is often visually confirmed using a camera which looks at the cone and the sample surface.
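The geometry bookkeeping is simple enough to express directly; a small helper encoding the rule-of-thumb above (treat it as illustrative only):

```python
# Helper for the sample-to-cone rule-of-thumb quoted above; illustrative only.
def working_distance_mm(aperture_mm):
    """Recommended sample distance: about two aperture diameters from the cone."""
    return 2.0 * aperture_mm

for d in (0.3, 0.5, 1.0):  # cone apertures available at the beamline (mm)
    print(f"{d} mm cone -> place sample ~{working_distance_mm(d):.1f} mm away")
```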
Standard cell
The standard cell (designed in collaboration with Synchrotron SOLEIL), as well as the other cells, follows the basic design principles of the original cell from SPECS (Schnadt et al., 2012). Some improvements have been made, however. The most notable change is the method of releasing the gas into the cell. Gas is introduced into the cell using the so-called double-cone inlet system, where the gas inlets are directed towards the sample surface from the same direction as the analyser. The double-cone inlet system is shown schematically in Fig. 4(a). The cone which separates the cell vacuum from the pre-lens vacuum has another, larger cone around it, and the gas enters the cell volume from there. Since the cone(s) are often very close to the sample surface, this ensures that the response from the surface is faster than if the gas were introduced at the back of the cell, as was done in the original AP cell. Gas flow simulations have been carried out for this design, with an example shown in Fig. 4(b). The simulations were performed using the Molecular Flow module of the COMSOL Multiphysics software (COMSOL, 2020). The results indicate an efficient flow of gas towards the spot where the X-ray beam hits the sample surface, and that the entire sample is reached by the flow uniformly. With the double-cone system, it is very unlikely that the gas bypasses the sample surface altogether, a scenario that could trouble setups where the gas inlet is located elsewhere. The endstation is equipped with a quadrupole mass spectrometer (QMS), which can probe the gas composition in the outlet and in one of the inlet lines. Since the double-cone system directs the gas flow towards the sample surface, reactivity measurements using the QMS should be more detailed than in a geometry where the gas inlet is behind the sample.

[Figure 3: Transmission of synchrotron radiation through 200 nm of Al or Si3N4 calculated using the CXRO database (Henke et al., 1993). The Al window has a good transmission up to the Al 2p edge at around 72 eV and begins to increase again after 200 eV. The Si3N4 window has a much smaller transmission due to the Si 2p and N 1s edges at about 100 and 400 eV. Both windows have relatively good transmission above 500 eV.]
The cell is equipped with a miniaturized Pirani gauge (MicroPirani model #905 by MKS Instruments) located on one of the ports facing the sample surface. The MicroPirani is capable of measuring pressures from 10⁻⁵ to 1000 mbar and gives the possibility of measuring pressures very near the sample, thereby increasing accuracy. The MicroPirani has been initially tested and will be available to users soon. The sample heating system in the cell is based on resistive heating using a button heater (Model #101275 from HeatWave Labs, Inc.). In the button heater, the resistive platinum filament is housed inside an Al2O3 body. The button heater itself is placed just below the sample holder, from where the heat can be conducted and radiated into the sample holder and to the sample itself. According to the specifications, the heater can be operated up to 1200 °C in an oxygen atmosphere, but we have chosen to limit the highest temperature to approximately 600 °C due to the presence of sealing O-rings very close to the heater. As the filament wire and the housing of the button heater itself can be catalytically active (Palomino et al., 2017), it is often imperative that the QMS data are verified by measuring another reference data set without the sample inside the cell. A possible solution to the issue is to replace the filament with a more inactive material, e.g. graphite.
The standard cell is also equipped with a cooling channel near the sample stage itself. The purpose of this is to increase the rate of cooling from high temperatures towards room temperature (RT). The same cooling channel can also be used to cool the samples below RT by flowing a cold gas or liquid through the channel. So far it has been demonstrated that by flowing cold nitrogen gas through the channel it is possible to cool the samples down to −30 °C. Cooling the samples extends the available sample environments further and is useful, for instance, in cases where controlling the relative humidity inside the cell is desired (Lin et al., 2021). The same cooling could also be used for investigations of ice on surfaces.
Sulfur cell
The beamline also provides a cell for corrosive and sticky gases. The principal design is nearly identical to that of the standard cell. The general idea, however, is to provide a dedicated setup for use with these types of gases. With a dedicated setup, problems caused by cross-contamination with experiments that require clean conditions can be avoided. For this reason, the sulfur cell ideally contains its own set of piping on the inlet and outlet sides, completely separate from the piping that is shared by the standard and ALD cells.
To facilitate faster transport of gases into the cell and to decrease condensation in the tubes, they can be heated in vacuum with resistive heating elements. The heating element is wrapped around the longest section of the in-vacuum tubes, which allows heating them to a temperature of up to 200 °C.
The sulfur cell will have its own, independent, gas panel system with individual gas lines and mass flow controllers for specific gases. As sulfur-containing gases are typically very corrosive, the rationale is to avoid cross-contamination with the other cells as much as possible. The gas panel for the sulfur cell will be thus isolated from the gas system of the other cells.
Material choices for the sulfur cell are somewhat restricted due to the highly corrosive nature of the gases used. The material for the sealing O-rings was chosen to be an FFKM-type perfluoroelastomer, which provides enhanced chemical resistance and better stability at higher temperatures. Additionally, the typical K-type thermocouple material is not stable in an atmosphere of sulfur-containing gas. Therefore, a C-type thermocouple material will be used instead.
The allowed sulfur-containing gases will depend on a risk analysis that is done on a case-by-case basis. Once the sulfur cell has been fully commissioned, the list of gases will be determined and the information will be available through the beamline website.
ALD cell
Atomic layer deposition (ALD) is a technique for growing uniform layers of material with a high degree of control (Miikkulainen et al., 2013). A substrate is exposed to pulses of two (or more) precursor gases in a sequential manner, thereby achieving highly ordered growth of atomic layers. APXPS is a very convenient tool for studying ALD processes, since the pressure and temperature ranges needed for optimal growth are within the ranges of the typical AP cells in use at SPECIES. APXPS has recently been demonstrated as a very powerful tool for in situ and operando investigations into the first half-cycles of ALD processes (Head et al., 2016; Timm et al., 2018; Temperton et al., 2019; D'Acunto et al., 2020).
The ALD cell at SPECIES was designed and constructed (in collaboration between the University of Helsinki and MAX IV) in order to achieve a gas flow that mimics the flows in real ALD reactors. For this purpose, the cell contains two independent gas inlet lines to be used with two different precursor gases. These gas lines are built inside the cell so that they point towards the sample surface, ideally resulting in a laminar-like flow across the surface. Additionally, there is an outlet (pumping) line on the other side of the sample, also facing the sample surface, further increasing the likelihood of flow with laminar characteristics. In the same way as the other AP cells, the ALD cell has relatively long gas tubes (~1.5 m) from the feedthrough to the cell itself. The tubes are heated in the same way as in the sulfur cell. Additionally, the cell walls can be heated using independent heating elements. The aim of these extra heating elements is to facilitate faster transport of precursor gases and to prevent the formation of cold spots around the cell chamber.
The outlet line of the ALD cell is connected to the QMS in the same manner as in the standard cell. This enables accurate measurement of the reaction products, which often show response to the ALD reactions happening on the surface due to broken ligands and other fragments that are normal during the specific ALD process in question.
Since the cell walls are heated, it was decided that no cooling lines would be included in the cell. The lack of cooling places some restrictions on the type of O-rings that can be used for sealing various parts of the cell. Therefore, FFKM-type O-rings are used for the ALD cell as well. The walls of the cell and the gas tubes will very likely become coated with different metals as ALD experiments are conducted. So far, this coating has not been seen to interfere with the experiments, as long as the pumping efficiency of the cell and the tubes is not affected (i.e. as long as there is no accumulation of 'sticky' gases over time). The cell is designed with a high degree of exchangeability in mind, allowing easy replacement of contaminated or dirty components.
Control systems
The control systems for the beamline were already partially developed when the beamline was commissioned and operated on the MAX II storage ring. The control systems and their design principles are detailed elsewhere (Lindberg et al., 2015; Sjöblom et al., 2016). However, several improvements have been implemented after the move to the MAX IV Laboratory, especially since the other beamlines in the facility share the same control system logic.
The APXPS endstation has gone through an overhaul of the vacuum control systems. Most vacuum pumps, gauges and valves are connected to the same control system through an automation interface and Tango Controls (Tango, 2020) based logic. This allows for easy control of all vacuum-related components as well as logging of crucial parameters. Moreover, user safety and equipment protection can be increased through many different vacuum pressure setpoints and other monitorable parameters.
The manipulators in the APXPS endstation and the monochromator can be controlled directly from the XPS measurement software (SpecsLab Prodigy). This allows sophisticated measurement modes where, for example, sample movement is performed automatically during the electron spectrum acquisition. Additionally, the monochromator can be moved between measurements, making easy and automated partial electron yield measurements possible.
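As a flavour of what a Tango-based workflow looks like in practice, a hedged PyTango sketch; the device and attribute names are hypothetical stand-ins, not the actual SPECIES device tree:

```python
# Hedged PyTango sketch; device and attribute names are hypothetical.
import tango

gauge = tango.DeviceProxy("species/apxps/pressure-gauge-01")  # hypothetical name
pressure = gauge.read_attribute("Pressure").value             # hypothetical attribute
print(f"analysis chamber pressure: {pressure:.2e} mbar")

# The same proxy mechanism can drive motors, e.g. the monochromator energy,
# which is what enables automated partial-electron-yield scans:
mono = tango.DeviceProxy("species/mono/energy")               # hypothetical name
mono.write_attribute("Position", 400.0)                       # photon energy in eV
```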
RIXS endstation
The Resonant Inelastic X-ray Scattering (RIXS) endstation is designed with a high degree of sample versatility in mind. The experimental endstation consists of a customized vacuum chamber for the spectrometers and a load-lock which allows sample transfer from air or from portable vacuum suitcases. The endstation is equipped with a high-stability manipulator with exchangeable sample rods. Each rod is 60 mm in diameter and 600 mm long and can hold several samples. Different types of sample environments are available, such as helium-cooled, nitrogen-cooled and standard rods. Depending on the cooling type, the sample temperature can reach 10 K (helium-cooled) or 80 K (nitrogen-cooled). A micro-jet setup and different types of liquid cells are currently being developed to expand the in situ measurement capability.
The main chamber of the endstation has additional detectors for recording XAS or NEXAFS spectra. These measurements can be made by measuring the drain current from the sample itself, or by recording the total amount of emitted photons with dedicated detectors such as photodiodes or MCP detectors.
The endstation currently houses two spectrometers, both mounted perpendicularly with respect to the incident photon beam and opposite to each other. The first spectrometer is a modified Scienta XES350 (Grace) operated in slitless mode (Nordgren et al., 1989). The Grace spectrometer houses three spherical gratings with line densities of 300 lines mm⁻¹ (3 m radius), 400 lines mm⁻¹ (5 m radius) and 1200 lines mm⁻¹ (5 m radius). The Grace spectrometer can cover the photon energy range from 50 to 1500 eV at reasonable resolving power (hundreds) by operating it in the first or second diffraction order. Due to the relatively low photon flux above 650 eV, experiments at the Grace spectrometer will usually focus on the energy range below the Mn L-edge. The second spectrometer is a newly developed plane-grating spectrometer (PGS) consisting of a collimating parabolic mirror followed by a large 1200 lines mm⁻¹ grating (Agåker et al., 2009). The diffracted light is focused onto an MCP detector by a second parabolic mirror. This optical scheme yields high throughput and good energy resolution between 27 and 200 eV. Both spectrometers use MCP detectors with delay-line readout for high spatial resolution and low readout noise. Delay-line detectors also allow synchronization to external pulses, such as the bunch marker signal from the storage ring. For instrument protection, both spectrometers can be isolated from the experimental chamber with thin filters and windows.
Beamline performance at the new MAX IV 1.5 GeV ring
The photon beam is created by the EPU61 insertion device in the 1.5 GeV storage ring. The photon beam intersects the first mirror of the beamline, which forms a collimated beam that is monochromated by a plane-grating monochromator (cPGM). Two gratings with blazed grooves are installed: a 1221 lines mm⁻¹ grating with Au coating and a 250 lines mm⁻¹ grating with Ni coating. The Ni-coated grating is dedicated to measurements where improved flux but modest resolution is needed in the photon energy range 200-600 eV. At the time of the beamline transfer, the Au grating as well as the mirrors were cleaned of carbon contaminants with UV-light-generated ozone prior to installation at the beamline. Nevertheless, after a few years of operation there is a clear dark stripe of carbon visible on some of the optical components. The first two mirrors are water-cooled, with the plane mirror having internal cooling channels and the first mirror cooled from the sides. The cooling method was designed to be able to handle the heat load coming from the MAX IV 1.5 GeV ring running at 500 mA. The beamline components were manufactured by FMB Berlin, except for the gas absorption cell, which was made in-house based on a design from the Paul Scherrer Institute (Schmitt, 2013). A detailed study of the mechanical performance of the monochromator is published elsewhere (Sjöblom et al., 2020).
The monochromated light is directed to one of the branches (either APXPS or RIXS) by one of the two focusing mirrors, which act also as switching mirrors. Both branches have their own gas absorption cells, exit slits and refocusing mirrors. Further information on the photon source and optics can be found elsewhere (Urpelainen et al., 2017). The general details of the beamline are summarized in Table 2.
The measured flux curve of the beamline is shown in Fig. 5. The flux was measured using a photodiode (IRD AXUV100) on the beam position monitor in the APXPS branch, which is located before the final refocusing mirror. During the measurement, the size of the exit slit was varied at each point to reach approximately 0.1% bandwidth. The flux measurement was made using the Au grating with a fixed focus constant (cff) value of 2.25. The flux at the sample position in the APXPS branch is reduced slightly due to reflection losses in the final refocusing mirror [the reflectivity of the Au-coated refocusing mirror (2° grazing incidence angle) varies between approximately 95% and 50% over the photon energy range of the beamline]. When the AP cell is in use, the flux is further reduced due to absorption in the window material (see Fig. 3). It is anticipated that cleaning some of the carbon contamination off the optics will give more photon flux above approximately 150 eV. In situ cleaning of the optics by leaking O2 gas into the vacuum chambers is expected to start soon. All the vacuum chambers and the components therein were designed to comply with a constant oxygen leak, with the Ni grating being an exception, since its surface can oxidize upon continuous exposure to O2.
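The conversion from the measured diode photocurrent to photon flux assumes the usual silicon response of one electron-hole pair per ~3.65 eV of deposited photon energy; a sketch under that assumption (real AXUV calibration curves deviate somewhat with energy):

```python
# Sketch of converting AXUV photodiode current to photon flux, assuming
# ~3.65 eV per electron-hole pair in Si and unit quantum collection.
E_PAIR_EV = 3.65          # mean energy per electron-hole pair in Si
Q_E = 1.602176634e-19     # elementary charge, C

def photon_flux(current_A, photon_energy_eV):
    """Photons per second from the measured diode photocurrent."""
    electrons_per_photon = photon_energy_eV / E_PAIR_EV
    return current_A / (Q_E * electrons_per_photon)

# E.g. 100 uA measured at 100 eV photon energy:
print(f"{photon_flux(100e-6, 100.0):.2e} photons/s")  # ~2.3e13
```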
Photoabsorption measurements
The photon energy resolution was measured to show the beamline performance at typical settings, but also to check the performance with respect to the design values. The resolution was measured using core-level photoabsorption measurements of nitrogen (N2) and neon (Ne) gases.
[Table 2: Details of the beamline optics. The beam size is given as the full width at half-maximum value. The flux value is based on the measurement shown in Fig. 5 and as discussed in the text: approximately 5 × 10¹³ to 1 × 10¹⁰ photons s⁻¹ at the sample, and approximately 3 × 10¹³ to 5 × 10⁹ photons s⁻¹ in the AP cell; the flux in the cell is an approximation and depends on many factors such as the window material on the cell.]

[Figure 5: Flux curve measured using a constant photon bandwidth of 0.1%. The measurement is made on a photodiode in the APXPS branch, before the refocusing mirror (M4). The decrease around 280 eV is due to carbon contamination on the optics. The measurement was made with an Au grating using a cff value of 2.25 and with the opening of a beam-defining aperture before the monochromator set to 1 mm × 1 mm. The transmission of the M4 mirror will reduce the flux slightly from these data.]

The N 1s photoabsorption spectrum, which shows the various vibrational lines from the excitation of the N 1s
electrons to the π* levels in the N2 molecule, is shown in Fig. 6(a) together with a least-squares fit. In all resolution measurements the Au grating was used with a cff value of 2.25. The spectrum was recorded using a small opening in the entrance of the monochromator (about 0.5 mm × 0.5 mm) and a beamline exit slit opening of 50 µm. A Voigt profile was used for the fit, where the Lorentzian width was fixed to 120 meV (Hitchcock & Brion, 1980), which gave a Gaussian width of approximately 61 meV. At this energy, this gives a resolving power of approximately 6500.
Using Ne gas, the resolution was characterized at higher photon energy as well. Figure 6(b) shows the total ion yield of Ne gas at a photon energy of about 867 eV. The spectrum was recorded with a monochromator entrance opening of 0.5 mm × 0.5 mm and a beamline exit slit opening of 50 µm. The spectrum was fitted with a Voigt profile with a fixed Lorentzian width of 254 meV (Coreno et al., 1999), which resulted in a Gaussian width of 191 meV, giving a resolving power of approximately 4500. The resolution values at both edges correspond well with the expected performance at these settings.
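The fitting procedure behind both numbers can be sketched compactly: fit a Voigt line with the Lorentzian lifetime width held fixed, and read off the Gaussian FWHM as the photon bandwidth. The arrays below are placeholders, not the measured spectra:

```python
# Sketch of the resolution analysis for the N2 case: fixed Lorentzian
# lifetime width, free Gaussian width; the spectrum here is synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

GAMMA_L = 0.120 / 2.0  # fixed Lorentzian HWHM (eV), i.e. 120 meV FWHM for N2

def line(E, E0, sigma, amp, bg):
    return amp * voigt_profile(E - E0, sigma, GAMMA_L) + bg

E = np.linspace(400.5, 401.5, 400)      # placeholder photon-energy axis (eV)
I = line(E, 401.0, 0.026, 1.0, 0.01)    # placeholder "measured" ion yield
popt, _ = curve_fit(line, E, I, p0=(401.0, 0.05, 1.0, 0.0))

fwhm_gauss = 2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[1]  # Gaussian FWHM, eV
print("resolving power E/dE =", popt[0] / fwhm_gauss)     # ~6500 at ~61 meV
```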
Beam profile measurements
An essential part of beamline commissioning work is to ensure that the beam travels through the whole beamline correctly and hits all optical elements at the proper angle and position, resulting in the desired spot at the sample position. This type of commissioning work is often done by observing how the beam appears on various diagnostic elements, such as diodes and fluorescent screens. Another method is to measure undulator spectra to see, among other things, the ratio between even and odd harmonics. During the commissioning work at SPECIES, all of these techniques were used, but we have also characterized the spatial profile of the beam with the use of the baffles situated in front of the monochromator. In these measurements, the baffles were configured to create a rather small opening (in this case 0.5 mm × 0.5 mm), which was then rastered over a specific range. The beam current was subsequently measured on the photodiode placed behind the exit slit of the APXPS branch. The results of this measurement are shown in Fig. 7. The experimental results are compared with the simulated beam profile, which was calculated using the SPECTRA software (version 10.2.0) (Tanaka & Kitamura, 2001).
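In pseudo-form, the raster is just a nested scan of the baffle opening with a diode readout at each point; move_baffles() and read_diode() below are hypothetical stand-ins for the actual control-system calls:

```python
# Sketch of the baffle raster behind the beam-profile maps; the two
# control-system calls are hypothetical stand-ins.
import numpy as np

def raster_profile(x_grid_mm, y_grid_mm, move_baffles, read_diode):
    """Scan a small baffle opening over a grid and record the diode current."""
    profile = np.zeros((len(y_grid_mm), len(x_grid_mm)))
    for iy, y in enumerate(y_grid_mm):
        for ix, x in enumerate(x_grid_mm):
            move_baffles(x, y)              # centre the 0.5 mm x 0.5 mm opening
            profile[iy, ix] = read_diode()  # photocurrent behind the exit slit
    return profile

# x = np.arange(-3.0, 3.01, 0.5); y = np.arange(-3.0, 3.01, 0.5)
# beam_map = raster_profile(x, y, move_baffles, read_diode)
```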
As can be seen from Fig. 7, the correspondence between experiment and theory is good, indicating that the beam passes through the beamline as expected. The first harmonic profile shows some asymmetry in the vertical direction, however, which is most likely due to small alignment errors in the beamline optics. With these types of results, it is very obvious if the beam is severely cut, or enters the optics at an incorrect angle, as the shape of the beam profile is quite sensitive to these effects.

[Figure 6: The total ion yield spectra of (a) the N 1s → π* excitation in the N2 molecule and (b) neon in the Ne 1s⁻¹3p excitation. The black points indicate measured data points, with the fitted curve shown in red, its components as dotted blue lines, and the residual as a green solid line in both panels. Resolving powers of 6500 and 4500 were reached for the N2 and Ne cases, respectively.]

[Figure 7: Measured and calculated beam profile maps of the first and second harmonics. The measurement was made using the monochromator baffles placed about 1 m upstream of it. The undulator was tuned to give the first harmonic at 50 eV photon energy, and the second harmonic was recorded at 100 eV. The theoretical maps were calculated with the SPECTRA software using realistic values for the light source, and the same number of bins/steps as in the experiments.]
Example research
5.1. Oxidation states of an industrial SCR catalyst using APXPS

It has been recognized for a long time that XPS is a powerful tool for the investigation of catalyst samples and reaction mechanisms (Briggs, 1980; Pijpers & Meier, 1999). Likewise, it has also been recognized that there exists a pressure gap between UHV experiments and real catalytic conditions, which may limit the use of XPS (Knop-Gericke et al., 2009; Knudsen et al., 2016) and other surface science techniques (Ertl, 1990; Lee et al., 1986) in the study of catalytic samples. APXPS addresses this gap by allowing XPS investigations at more realistic pressures (Starr et al., 2013; Salmeron & Schlögl, 2008; Schnadt et al., 2020). Besides the pressure gap, another gap exists between typical surface science experiments and catalytic applications: the materials gap, which refers to the higher structural complexity of real catalysts in comparison with the model systems of surface science.
The present research example addresses both gaps. It is concerned with the most common industrially relevant catalyst for the selective catalytic reduction of NOx by NH3 (NH3-SCR), made of 3% V2O5 supported on anatase-TiO2 with an admixture of 5% SiO2 (3% V2O5/TiO2-5% SiO2). In the SCR reaction the redox properties of vanadium play a major role, as the vanadium ions in the V5+ state serve as adsorption sites for the NH3 molecules and are reduced to V4+ in the catalytic reaction cycle. Subsequently, the V4+ ions are re-oxidized by the O2 gas in the reaction environment (Busca & Zecchina, 1994; Arnarson et al., 2017).
The redox properties of an SCR catalyst (provided by Dinex Finland) were studied under UHV conditions and at 1 mbar of air. The samples were prepared by diluting approximately 100 mg of the catalyst in 5 ml of ethanol, which was then spin-coated onto a gold foil. XPS was carried out under UHV conditions and with the sample exposed to 1 mbar of air. The Au 4f7/2 core level from the gold foil was used for energy calibration. A Shirley background, or a combination of a linear with a Shirley background, was subtracted from the spectra. The spectra were measured with an overall energy resolution of approximately 200 meV. The photon energy was chosen so that the electrons had a kinetic energy of about 100 eV.
Exposure of the sample to 1 mbar of air leads to a shift of the V 2p peak to a higher binding energy that is characteristic of the V5+ oxidation state (Koust et al., 2018). Subsequent evacuation to UHV reduces the sample back to the V4+ state. The shift in binding energy is highlighted in the inset of Fig. 8, which shows the statistical first moment of the V 2p peaks in the three spectra. In the O 1s region, an increase is seen in the binding energy region where hydroxyl groups, adsorbed water and carbon contamination from the air are expected (Sanjinés et al., 1994; Zimmermann et al., 1998; Silversmit et al., 2004).
Once the sample is in vacuum again, these components are not removed within the time of this measurement. The finding of a reversible reduction of the vanadium oxide in the catalyst material upon introduction into UHV shows that a UHV environment is not suitable for the study of the SCR catalyst. It emphasizes that pre- and post-analysis methods might not be sufficient to understand chemical reactions and their mechanisms on surfaces.
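The two processing steps used for Fig. 8 (a Shirley background, then the statistical first moment of the remaining peak) are standard; a minimal sketch, not the authors' exact analysis code, written for a spectrum on an ascending binding-energy axis:

```python
# Minimal sketch of a Shirley background and the statistical first moment;
# not the authors' exact analysis code. E is the ascending binding-energy
# axis and I the measured intensity, with flat regions at both endpoints.
import numpy as np

def shirley_background(E, I, n_iter=30):
    """Iterative Shirley background pinned to the two spectrum endpoints."""
    B = np.full(len(I), I[0], dtype=float)
    for _ in range(n_iter):
        R = I - B                                  # background-subtracted signal
        seg = 0.5 * (R[1:] + R[:-1]) * np.diff(E)  # trapezoid segments
        cum = np.concatenate(([0.0], np.cumsum(seg)))
        B = I[0] + (I[-1] - I[0]) * cum / cum[-1]
    return B

def first_moment(E, I, B):
    """Intensity-weighted mean energy (centroid) of the remaining peak."""
    R = I - B
    return np.trapz(E * R, E) / np.trapz(R, E)
```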
In situ and operando techniques such as Raman spectroscopy, infrared spectroscopy and several other spectroscopic methods (Knop-Gericke et al., 2009; Busca & Zecchina, 1994; Topsoe et al., 1995; Chakrabarti et al., 2017) have been employed to study the NH3-SCR reaction. So far, however, they have not succeeded in drawing a conclusive picture of the reaction mechanism and the active site of the catalytic reaction. We foresee that APXPS might enable us to obtain relevant information about the identification and role of the active sites and about the reaction mechanism. Experiments under SCR conditions with the relevant reactants have been performed, and results on NH3 adsorption onto the catalytic surface and on the SCR reaction mechanism are expected to be published soon. With information obtained directly from real industrial catalytic materials, we are one step closer to overcoming not only the pressure gap but also the materials gap, and to demonstrating that the catalytic industry can benefit from APXPS and synchrotron radiation research.
5.2. APXPS of hydrogen on a platinum surface
With this case study we demonstrate the capability of the APXPS endstation to record UPS data at ambient pressures. We have therefore chosen to study the effect of hydrogen adsorption on a platinum surface. Hydrogen is a notoriously difficult element to observe on surfaces with XPS owing to its low cross section at typical XPS photon energies, and it is thus often said to be impossible to observe (Kerber et al., 1996; Stojilovic, 2012). Since the lowest photon energy that SPECIES can produce is 30 eV, the beamline is also well suited to measurements of the valence band levels.

Figure 8: O 1s and V 2p core-level spectra for the 3% V2O5/anatase-TiO2-5% SiO2 catalyst under UHV conditions, during exposure to 1 mbar of air and after evacuation. The spectra are normalized to the V 2p3/2 area. The inset shows the shift of about 2 eV in the first moment of the V 2p3/2 core level upon exposure to air.
As a sample, we chose a platinum(111) crystal, which was initially cleaned with several cycles of Ar+ sputtering, oxygen treatments and UHV annealing. The cleanliness of the surface was subsequently checked with LEED and XPS.
After the initial cleaning, the sample was transferred to the ambient pressure cell and exposed to hydrogen gas at a total pressure of 1 mbar at room temperature. Figure 9(a) shows the valence band measured before, during and after the H2 exposure. The spectra are dominated by the rich d-band structure of the Pt surface, but, when the surface is exposed to H2, new features appear in the spectrum at binding energies of 11.9 and 9.3 eV, as indicated by the vertical lines. These features appear at binding energies very similar to those observed by Zhong et al. (2018) and attributed to Pt-H bonds, indicating adsorbed hydrogen. It should be noted that, since our measurement was made at a much lower photon energy than that of Zhong et al., we see a much stronger signal owing to the higher photoionization cross section. Figure 9(b) shows a pure gas-phase spectrum of the gas introduced into the cell. In this measurement, the Pt sample was retracted out of the way of the synchrotron beam to minimize the number of secondary electrons reaching the electron analyser. In this spectrum, the vibrational lines from the ionization of the 1σg valence orbital of the H2 molecule are very clearly seen. The pure gas-phase spectrum allows us to probe possible impurities introduced into the cell together with the H2 gas; in this case, it is clear that small amounts of water appear in the gas phase as well. Figure 10(a) shows the Pt 4f core levels, also measured before, during and after H2 exposure. The main feature originates from the bulk Pt 4f electrons at about 71 eV, with a surface component observed at about 0.35 eV lower binding energy. Upon exposure to the gas, the surface component decreases to only a few percent of the bulk intensity and remains low even when the cell is evacuated. During the H2 exposure, a new component appears at about 0.8 eV higher binding energy with respect to the bulk peak. The binding-energy shift from the surface Pt line to the new component is too large to be attributed to adsorbed hydrogen (Pt-H) bonds. This apparent contradiction between the UPS spectrum, which suggests Pt-H bonds, and the Pt 4f7/2 spectrum acquired during exposure, which is incompatible with Pt-H bonds, indicates that other surface species could be present on the surface.
The O 1s and C 1s core levels shown in Figs. 10(b) and 10(c), taken at the same time as the Pt 4f and valence band spectra, indeed give experimental support for carbon-containing species on the surface during H2 exposure. In the C 1s spectrum, a peak is observed at 283.8 eV, indicating adsorbed carbon. As the sample is exposed to H2, two new components arise at binding energies of 285.9 and 286.6 eV, fitting well with adsorbed CO in atop and bridge sites, respectively (Björneholm et al., 1994). Similarly, the O 1s spectra display components at 530.9 and 532.6 eV, also corresponding to CO in atop and bridge sites.
While the sample was initially exposed to the gas, we also recorded time-resolved spectra of the valence band levels using the so-called snapshot mode of the analyser. In this mode, the analyser voltages are kept constant and only a fixed kinetic-energy range is observed on the detector. As no voltages need to be changed, this mode allows spectra to be measured very quickly, thereby capturing spectral changes on subsecond timescales. The time-resolved spectra of the valence levels are shown as a colour map in Fig. 11(a), where the x-axis denotes the time since the start of the measurement and the y-axis the electron binding energy. In the snapshot mode, the energy range of the measurement is determined by the analyser's pass energy, which in this case was 50 eV, giving an energy window of approximately 5 eV. Here the energy window was placed so that the features at binding energies of 11.9 and 9.3 eV would fit into the same window. The time-resolved colour map very clearly shows how these two peaks grow out of the background signal as a function of time. The intensity of the peak at 11.9 eV is additionally integrated in Fig. 11(b), with a fitted trend line indicating the rise in signal intensity. For additional clarity, the sums of the first and last ten spectra from the time-resolved measurement are shown in Fig. 11(c), with fitted Voigt shapes for the two peaks. The time-resolved data were recorded at a photon energy of 200 eV, which differs from that used to record the spectra in Fig. 9. The higher photon energy was chosen for the time-resolved measurements in order to obtain higher-kinetic-energy electrons. Measuring at higher kinetic energy simplifies the background subtraction considerably, as a linear background can be assumed.
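A fit of the kind shown in Fig. 11(c), two Voigt peaks on a linear background, can be sketched as follows. This is an illustration on synthetic data rather than the actual analysis code; scipy.special.voigt_profile is used for the line shape, and the parameter values are placeholders near the reported peak positions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def two_voigts(be, a1, c1, a2, c2, sigma, gamma, b0, b1):
    """Two Voigt peaks on a linear background (widths shared for simplicity)."""
    s, g = abs(sigma), abs(gamma)   # keep widths physically valid during fitting
    return (a1 * voigt_profile(be - c1, s, g)
            + a2 * voigt_profile(be - c2, s, g)
            + b0 + b1 * be)

# Synthetic stand-in for the summed snapshot spectra (names illustrative).
be = np.linspace(7.0, 14.0, 400)
rng = np.random.default_rng(1)
counts = two_voigts(be, 1.0, 11.9, 0.7, 9.3, 0.15, 0.10, 0.05, 0.0)
counts = counts + rng.normal(0.0, 0.01, be.size)

p0 = [1.0, 11.8, 0.5, 9.4, 0.2, 0.1, 0.0, 0.0]   # start near the reported peaks
popt, _ = curve_fit(two_voigts, be, counts, p0=p0)
print("fitted centres:", popt[1], popt[3])        # ~11.9 and ~9.3 eV
```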
The time-resolved measurement gives additional evidence for CO adsorption on the surface. In this experiment, based on the data in Fig. 11(b), it took approximately 22 min for the peak at 11.9 eV to reach saturation. This is a very long time for the sample to spend in 1 mbar of H2, and, if the peak corresponded to adsorbed hydrogen, it would appear much faster. A more likely explanation is that small traces of CO in the H2 gas lead, with time, to adsorbed CO molecules in the atop and bridge sites. We can then also identify the new peaks in the valence band: the peak at 11.9 eV corresponds quite well with the 4σ orbital of CO, and the peak at 9.3 eV fits with the mixture of the 5σ and 1π levels (Alnot et al., 1982).
We thus have to conclude that our H2 experiment suffers from CO contamination, which is a well known problem for reducing or H2 conditions in the APXPS community. While the results are not what we expected, the example underlines one of the strong and rather unique capabilities of the SPECIES beamline: the ability to correlate UV photoelectron spectra with X-ray photoelectron spectra in mbar gas environments. This is, for example, very important for the correct interpretation of AP-UPS spectra, which we demonstrate with our example. Indeed, the correct interpretation of UPS spectra and the identification of UV fingerprint signals at mbar conditions will become increasingly important in the coming years as very powerful laser sources and laboratory UV sources become available for ambient pressure applications.
5.3. Silicon wafer RIXS
To characterize the performance of the RIXS endstation, we chose to investigate the absorption and emission properties of a silicon wafer around the Si L2,3 edge.

Figure 10: (a) Pt 4f core-level spectra measured at 200 eV photon energy. The inset shows a close-up view of the Pt 4f7/2 peak. The spectra are normalized to the bulk Pt 4f intensity. (b) C 1s core-level spectra measured at 380 eV photon energy. The C 1s spectra are normalized to have equal intensity in the peak at 284 eV. (c) O 1s core-level spectra measured at 640 eV photon energy. In each core level, the red spectra were recorded before any H2 was added, the blue spectra while the sample was exposed to 1 mbar H2, and the green spectra after the cell had been evacuated.
Figure 11: The valence region of the Pt(111) surface as seen in a time-resolved experiment with the hydrogen gas dose. Each spectrum was taken at approximately one-second intervals, so the scan number also indicates the time in seconds since the beginning of the measurement. The black lines in (a) indicate the binding-energy region used for the integrated signal in (b), where the trend of the increase of the signal is shown as a black line. (c) The first and last ten spectra of the measurement in red and black, respectively. The spectrum at the end has been fitted with Voigt curves to show the appearance of the peaks. All measurements were made at a photon energy of 200 eV.

Figure 12(a) shows the X-ray absorption spectrum of the Si(001) wafer measured at the RIXS endstation using the total electron yield (TEY) detector. The results agree well with those observed for crystalline silicon (Terekhov et al., 2008). Figure 12(b) shows the energy-dependent RIXS spectra of the Si wafer with excitation energies scanned through the Si L2,3 edges. To reduce the strong signal from the elastically scattered photons, the polarization of the incident beam was kept in the scattering plane (π polarization). The entrance slit was set to an 80 µm opening and the beamline resolution to about 80 meV, with the total energy resolution determined from the elastic peak as 200 meV (full width at half-maximum). All spectra were recorded at room temperature using the Grace spectrometer with the 300 lines mm⁻¹ grating. Each spectrum was recorded for 15 min. The top spectrum (black curve) was recorded with an incident photon energy of 108 eV, which is far above the ionization threshold, reflecting the full partial density of states of Si. Our results are consistent with bulk Si results (Kasrai et al., 1993; Hu et al., 2004; Terekhov et al., 2008; Šiller et al., 2009). At an incident energy of 99.75 eV (orange curve) a sharp feature was observed at 95.7 eV emission energy. This strong resonance implies that the transition is due to valence excitations during the RIXS process, as previously suggested theoretically (Minami & Nasu, 1998; Shirley et al., 2001).
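If the beamline and spectrometer contributions are assumed to be roughly Gaussian and to add in quadrature (an assumption, since the individual line shapes are not specified here), the spectrometer contribution implied by the quoted numbers is about 183 meV:

```python
import math

total_fwhm = 200.0      # meV, measured from the elastic peak
beamline_fwhm = 80.0    # meV, beamline resolution setting
# Assuming roughly Gaussian contributions that add in quadrature:
spectrometer_fwhm = math.sqrt(total_fwhm**2 - beamline_fwhm**2)
print(f"spectrometer contribution ~ {spectrometer_fwhm:.0f} meV")  # ~183 meV
```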
6. Conclusions
The SPECIES beamline has two branches, the first of which provides a facility for photoelectron spectroscopy at ambient pressures and under UHV conditions. The second branch is dedicated to resonant inelastic X-ray scattering experiments. These complementary techniques provide a unique place for conducting experiments on the electronic structure of matter at various depths and in different ambient pressure ranges. The low photon energies accessible at the beamline are demonstrated in this paper to provide information that is often not considered, especially in the APXPS community. The industrial catalyst example highlights the importance of APXPS for industry and how it is possible to develop and improve our current knowledge from real-world systems. The results on the beamline performance show that it meets its design parameters. Both branches of the beamline are currently accepting users.

Figure 12: (a) Si L2,3 X-ray absorption spectrum of the Si(001) wafer measured in TEY mode. The coloured vertical lines indicate a few selected photon energies corresponding to the RIXS measurements. (b) Incident-photon-energy-dependent RIXS spectra of the Si(001) wafer. From bottom to top, the spectra were measured with E_in = 99.5 to 108 eV; the steps were set to 0.25 eV between E_in = 99.5 and 104 eV and to 0.5 eV above E_in = 104 eV. The black, blue, cyan, green, orange and blue curves represent the photon energies depicted with vertical bars of the same colour in (a). | 2021-03-03T06:23:24.975Z | 2021-02-05T00:00:00.000 | {
"year": 2021,
"sha1": "48985d6a669898619421ce479745ae91fd559116",
"oa_license": "CCBY",
"oa_url": "https://journals.iucr.org/s/issues/2021/02/00/ok5038/ok5038.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5864d0c50b80eb0e77683655eb27402f05745ed3",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17626593 | pes2o/s2orc | v3-fos-license | Vacuum polarization by a global monopole with finite core
We investigate the effects of a $(D+1)$-dimensional global monopole core on the behavior of a quantum massive scalar field with a general curvature coupling parameter. For the general case of a spherically symmetric static core, formulae are derived for the Wightman function and for the vacuum expectation values of the field square and the energy-momentum tensor in the exterior region. These expectation values are presented as the sum of a point-like global monopole part and a core-induced part. The asymptotic behavior of the core-induced vacuum densities is investigated at large distances from the core, near the core, and for small values of the solid angle, corresponding to strong gravitational fields. In particular, in the latter case we show that the behavior of the vacuum densities is drastically different for minimally and non-minimally coupled fields. As an application of the general results, the flower-pot model for the monopole's core is considered and the expectation values inside the core are evaluated.
Introduction
It is well known that different types of topological objects may have formed in the early universe after the Planck time by vacuum phase transitions [1,2]. Depending on the topology of the vacuum manifold these are domain walls, strings, monopoles and textures. Among them, cosmic strings and monopoles seem to be the best candidates for observation. A global monopole is a spherical heavy object formed in the phase transition of a system composed of a self-coupling Goldstone field whose original global symmetry is spontaneously broken. The matter fields play the role of an order parameter which acquires a non-vanishing value outside the monopole's core. The global monopole was first introduced by Sokolov and Starobinsky [3]. A few years later, the gravitational effects of the global monopole were considered in Ref. [4], where a solution is presented which describes a global monopole at large radial distances. The gravitational effects produced by this object may be approximated by a solid angle deficit in (3+1)-dimensional spacetime.
The nontrivial properties of the vacuum are among the most important predictions of quantum field theory. These properties are manifested in the response of the vacuum to external electromagnetic and gravitational fields. In particular, explicit calculations of the vacuum polarization caused by particular external fields have played an important role in the development of quantum field theory. The quantum effects of the point-like global monopole spacetime on matter fields have been considered for massless scalar [5] and fermionic [6] fields. In order to develop this analysis, the scalar and spinor Green functions in this background were obtained. The influence of non-zero temperature on these polarization effects has been considered in [7] for scalar and fermionic fields. Moreover, the calculation of quantum effects on a massless scalar field in a higher-dimensional global monopole spacetime has also been developed in [8]. The combined vacuum polarization effects of the nontrivial geometry of a global monopole and of boundary conditions imposed on the matter fields have been investigated as well. In this direction, the total Casimir energy associated with a massive scalar field inside a spherical region in the global monopole background has been analyzed in Refs. [9,10] by using the zeta function regularization procedure. Scalar Casimir densities induced by spherical boundaries have been calculated in [11,12] for the higher-dimensional global monopole spacetime by making use of the generalized Abel-Plana summation formula [13,14]. More recently, using the same formalism, a similar analysis for spinor fields with MIT bag boundary conditions has been developed in [15,16].
Most treatments of quantum fields around a global monopole deal with the idealized point-like monopole geometry. However, a realistic global monopole has a characteristic core radius determined by the symmetry breaking scale at which the monopole is formed. A simplified model for the monopole core, in which the region inside the core is described by the de Sitter geometry, is presented in [17]. The vacuum polarization effects due to a massless scalar field in the region outside the core of this model are investigated in Ref. [18]. In particular, it has been shown that long-range effects can take place due to the non-trivial core structure. In the present paper we analyze the effects of the global monopole core on the properties of the quantum vacuum for a general spherically symmetric static model with a core of finite radius. The most important quantities characterizing these properties are the vacuum expectation values of the field square and the energy-momentum tensor. Though the corresponding operators are local, due to the global nature of the vacuum, the vacuum expectation values describe the global properties of the bulk and carry important information about the structure of the defect core. In addition to describing the physical structure of the quantum field at a given point, the energy-momentum tensor acts as the source of gravity in the Einstein equations. It therefore plays an important role in modelling a self-consistent dynamics involving the gravitational field.
As the first step in the investigation of the vacuum densities we evaluate the positive frequency Wightman function for a massive scalar field with a general curvature coupling parameter. This function gives comprehensive insight into vacuum fluctuations and determines the response of a particle detector of the Unruh-DeWitt type moving in the global monopole bulk. The problem under consideration is also of separate interest as an example with gravitational and boundary-induced polarizations of the vacuum in which all calculations can be performed in closed form. The corresponding results specify the conditions under which we can ignore the details of the interior structure and approximate the effect of the global monopole by the idealized model.
The paper is organized as follows. In Section 2 we consider the Wightman function in the exterior of the global monopole for a general core structure, assuming that the components of the metric tensor and their derivatives are continuous at the transition surface between the core and the exterior. By using this function, in Section 3 we investigate the vacuum expectation values of the field square and the energy-momentum tensor. Section 4 is devoted to the generalization of the corresponding results to the case when an additional surface shell is present on the bounding surface between the core and the exterior. As an illustration of the general results, in Section 5 we consider the flower-pot model with Minkowskian geometry inside the core; for this model the vacuum expectation values inside the core are investigated as well. In Section 6 we present our concluding remarks. In the Appendix we show that the formulae obtained in the paper for the core-induced parts are also valid in the case when bound states are present.
Wightman function
We consider a model of a (D+1)-dimensional global monopole with a core of radius a, in which the spacetime is described by two distinct metric tensors in the regions outside and inside the core. In the hyperspherical polar coordinates (r, ϑ, φ) ≡ (r, θ₁, θ₂, ..., θₙ, φ), n = D − 2, the corresponding line element in the exterior region r > a has the form

ds² = dt² − dr² − σ²r² dΩ²_D ,   (1)

where dΩ²_D is the line element on the surface of the unit sphere in D-dimensional Euclidean space, and the parameter σ is smaller than unity and is related to the symmetry breaking energy scale in the theory. The solid angle corresponding to Eq. (1) is σ²S_D, with S_D = 2π^{D/2}/Γ(D/2) being the total area of the surface of the unit sphere in D-dimensional Euclidean space. This leads to the solid angle deficit (1 − σ²)S_D in the spacetime given by line element (1). It is of interest to note that the effective metric produced in superfluid ³He-A by a monopole is described by the three-dimensional version of the line element (1) with a negative angle deficit, σ > 1, which corresponds to a negative mass of the topological object [19]. The quasiparticles in this model are chiral and massless fermions. We will assume that inside the core (region r < a) the spacetime geometry is regular and is described by the general static spherically symmetric line element

ds² = e^{2u(r)} dt² − e^{2v(r)} dr² − e^{2w(r)} dΩ²_D ,   (2)

where the functions u(r), v(r), w(r) are continuous at the core boundary: u(a) = v(a) = 0, w(a) = ln(σa).
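For concreteness, the solid angle deficit (1 − σ²)S_D is easy to evaluate numerically; a minimal sketch (the parameter values are illustrative):

```python
import math

def sphere_area(D):
    """Total area S_D = 2*pi^(D/2)/Gamma(D/2) of the unit sphere in
    D-dimensional Euclidean space."""
    return 2.0 * math.pi ** (D / 2.0) / math.gamma(D / 2.0)

def solid_angle_deficit(D, sigma):
    """Deficit (1 - sigma^2) * S_D of the global-monopole geometry."""
    return (1.0 - sigma**2) * sphere_area(D)

# D = 3 gives S_3 = 4*pi, so sigma = 0.9 yields a deficit of
# 0.19 * 4*pi, roughly 2.39 steradians.
print(solid_angle_deficit(3, 0.9))
```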
For the interior metric (2) we assume here that there is no surface energy-momentum tensor located at r = a and, hence, that the derivatives of these functions are continuous as well. The generalization to the case with an infinitely thin spherical shell at the boundary of the two metrics will be discussed in Section 4.
Note that by introducing the new radial coordinate r̃ = e^{w(r)}, with the core center at r̃ = 0, the angular part of the line element (2) is written in the standard Minkowskian form. With this coordinate, in general, one obtains a non-standard angular part in the exterior line element (1). For the metric corresponding to line element (2) the nonzero components of the Ricci tensor are given by expressions involving the functions u, v and w (no summation over i; we adopt the convention of Birrell and Davies [20] for the curvature tensor), in which the prime means the derivative with respect to the radial coordinate r and the indices i = 2, 3, ..., D correspond to the coordinates θ₁, θ₂, ..., φ respectively; the corresponding Ricci scalar follows from these expressions. Note that from the regularity of the interior geometry at the core center one has the conditions u(r), v(r) → 0 and w(r) ∼ ln r̃ for r̃ → 0. In the region outside the core, r > a, the nonzero components take the standard form for the point-like monopole geometry (no summation over i), with i = 2, 3, ..., D. For n = 0 the spacetime outside the core is flat and coincides with the D = 2 cosmic string geometry. The influence of the non-trivial core structure of the cosmic string on a quantum scalar field has been considered in Refs. [21,22,23]. In the discussion below we will assume that n > 0.
In this paper we are interested in the vacuum polarization effects for a scalar field with a general curvature coupling parameter ξ propagating in the bulk described above. The corresponding field equation has the form

(∇_i∇^i + m² + ξR) ϕ = 0 ,   (7)

where ∇_i is the covariant derivative operator associated with line element (1) outside the core and with line element (2) inside the core. The values of the curvature coupling parameter ξ = 0 and ξ = ξ_D, with ξ_D ≡ (D − 1)/4D, correspond to the most important special cases of minimally and conformally coupled scalar fields, respectively. As a first stage in the evaluation of the vacuum expectation values (VEVs) of the field square and the energy-momentum tensor we consider the positive frequency Wightman function ⟨0|ϕ(x)ϕ(x′)|0⟩, where |0⟩ is the amplitude for the corresponding vacuum state. This function also determines the response of an Unruh-DeWitt type particle detector at a given state of motion (see, for instance, [20]). By expanding the field operator over eigenfunctions and using the commutation relations one can see that

⟨0|ϕ(x)ϕ(x′)|0⟩ = Σ_α ϕ_α(x) ϕ*_α(x′) ,   (8)

with {ϕ_α(x), ϕ*_α(x′)} being a complete orthonormalized set of positive and negative frequency solutions to the field equation. The collective index α can contain both discrete and continuous components. In Eq. (8) summation over discrete indices and integration over continuous indices is assumed.
Due to the symmetry of the problem under consideration, the eigenfunctions can be presented in the form

ϕ_α(x) = f_l(r) Y(m_k; ϑ, φ) e^{−iωt} ,   l = 0, 1, 2, ... ,   (9)

where m_k = (m₀ ≡ l, m₁, ..., m_n), the numbers m₁, m₂, ..., m_n are integers obeying the standard ordering constraints for hyperspherical harmonics, and Y(m_k; ϑ, φ) is the hyperspherical harmonic of degree l [24]. The equation for the radial function f_l(r) is obtained from the field equation (7). In the region r > a, described by the line element (1), the linearly independent solutions of this equation are r^{−n/2}J_{ν_l}(λr) and r^{−n/2}Y_{ν_l}(λr) with λ = √(ω² − m²), where J_{ν_l}(x) and Y_{ν_l}(x) are the Bessel and Neumann functions of the order ν_l given by Eq. (12). In the following we will assume that ν²_l is non-negative; for n > 0 this corresponds to a restriction on the values of the curvature coupling parameter, given by condition (13). This condition is satisfied by the minimally coupled field for all values of σ and by the conformally coupled field under a further condition on σ and D. We denote by R_l(r, λ) the solution of the radial equation (11) in the region r < a that is regular at the origin. From Eq. (11) it follows that near the core center this solution behaves as r̃^l. Note that the parameter λ enters the radial equation only in the form λ². As a result, the regular solution can be chosen in such a way that R_l(r, −λ) = const · R_l(r, λ). The radial part of the eigenfunctions then has the form (14), in which the coefficients A_l and B_l are determined by the conditions of continuity of the radial function and its derivative at r = a; from these conditions we find explicit expressions for the coefficients. Here and in what follows, for a cylinder function F(z) we use a barred notation built from the logarithmic derivative of the interior solution, with R′_l(a, λ) = ∂R_l(r, λ)/∂r|_{r=a}. Note that, due to our choice of the function R_l(r, λ), the logarithmic derivative in formula (16) is an even function of z. Hence, in the region r > a the radial part of the eigenfunctions has the form (17), with an additional notation introduced there.
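As an aside, the linear independence of the two exterior solutions can be checked numerically through the standard Bessel Wronskian J_ν(x)Y′_ν(x) − J′_ν(x)Y_ν(x) = 2/(πx); a minimal sketch with an illustrative (non-integer) order:

```python
import numpy as np
from scipy.special import jv, yv, jvp, yvp

nu = 1.7                                  # illustrative stand-in for nu_l
x = np.linspace(0.5, 20.0, 5)
wronskian = jv(nu, x) * yvp(nu, x) - jvp(nu, x) * yv(nu, x)
print(np.allclose(wronskian, 2.0 / (np.pi * x)))   # True
```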
For the eigenfunctions we have an orthonormalization condition in which δ_{αα′} is understood as the Kronecker symbol for discrete indices and as the Dirac delta function for continuous ones. Substituting the eigenfunctions (9) and using the normalization relation for the hyperspherical harmonics (the explicit form of the factor N(m_k) is given in [24] and is not needed in what follows), the normalization condition is written in terms of the radial eigenfunctions as Eq. (21), where r₀ is the value of the radial coordinate r corresponding to the origin and g_r is the radial part of the determinant g. Note that in general r₀ ≠ 0 (see, for instance, the special case of the flower-pot model in Section 5). As the integral on the left is divergent for λ′ = λ, the main contribution in the coincidence limit comes from large values of r. By using the expression (17) for the radial part in the region r > a and replacing the Bessel and Neumann functions by the leading terms of their asymptotic expansions for large argument, the normalization coefficient (22) is obtained from (21). Having the normalized eigenfunctions, we now turn to the evaluation of the Wightman function by using the mode-sum formula (8). Substituting the eigenfunctions (17) and using the addition formula for the hyperspherical harmonics [24] to carry out the summation over m_k, for the Wightman function in the region outside the monopole's core one obtains formula (23). In formula (23), S_D = 2π^{D/2}/Γ(D/2) is the total area of the surface of the unit sphere in D-dimensional space, C^q_p(x) is the Gegenbauer (ultraspherical) polynomial of degree p and order q, and θ is the angle between the directions (ϑ, φ) and (ϑ′, φ′). Let us denote by ⟨0_m|ϕ(x)ϕ(x′)|0_m⟩ the positive frequency Wightman function for the geometry of the idealized point-like global monopole, described by the line element (1) for all values of the radial coordinate; this function can be presented in the form (24) [11]. In order to investigate the part induced by the non-trivial core structure, we consider the difference (25) of the two Wightman functions. Using formulae (24), (25) and the relation expressing the Bessel and Neumann functions in terms of the Hankel functions H^{(s)}_ν(x), s = 1, 2, the core-induced part of the Wightman function is presented in a form suitable for contour rotation. We then rotate the integration contour in the complex λ plane by the angle π/2 for s = 1 and by the angle −π/2 for s = 2. By using the property that the logarithmic derivative of the function R_l(r, λ) in formula (16) is an even function of z, we see that the integrals over the segments (0, im) and (0, −im) of the imaginary axis cancel out. As a result, after introducing the modified Bessel functions, the core-induced part can be presented in the form (29). Here and below the tilted notation for the modified Bessel functions is defined by Eq. (30), with the coefficient given by Eq. (31). The VEVs in the bulk of the idealized point-like global monopole are well investigated in the literature (see, for instance, [5]-[12] and references therein), and in the discussion below we will be mainly concerned with the part induced by the non-trivial core structure. As we see from (29), all information about the inner structure of the global monopole is contained in the logarithmic derivative of the interior radial function appearing in formula (31). In deriving formula (29) we have assumed that there are no bound states, for which λ would be purely imaginary. In the Appendix we show that this formula is also valid in the case when bound states are present.
Vacuum expectation values outside the monopole core
The VEV of the field square is obtained by computing the Wightman function in the coincidence limit x′ → x. In this limit expression (24) gives a divergent result and some renormalization procedure is needed. Outside the monopole core the local geometry is the same as that for a point-like global monopole. Hence, in the region r > a the renormalization procedure for the local characteristics of the vacuum, such as the field square and the energy-momentum tensor, is the same as for the point-like global monopole geometry. This procedure is discussed in a number of papers (see [5]-[8]). For the renormalization we must subtract the corresponding DeWitt-Schwinger expansion involving the terms up to order D. For a massless field the renormalized value of the field square has the structure ⟨ϕ²⟩_{m,ren} = [A + B ln(µr)]/r^{D−1}, where the coefficients A and B are functions of the parameters σ and ξ only, and the arbitrary mass scale µ corresponds to the ambiguity in the renormalization procedure. For a spacetime of odd dimension B = 0 and this ambiguity is absent. In general, it is not possible to obtain closed expressions for the coefficients A and B; for small values of 1 − σ², approximate expressions are derived in Ref. [8] for D = 4 and D = 5. In this paper our main interest is in the parts of the VEVs induced by the non-trivial core structure, and below we concentrate on these quantities. By using the formula for the Wightman function from the previous section, the VEV of the field square in the exterior region is presented as the sum of the point-like monopole part and the core-induced part, the latter given by expression (33), which contains the degeneracy factor of each angular mode with given l. For a fixed l and large z the integrand contains the exponential factor e^{2z(a−r)}, and the integral converges when r > a. For large values of l, introducing a new integration variable y = z/ν_l in the integral of Eq. (33) and using the uniform asymptotic expansions for the modified Bessel functions [25], it can be seen that both the integral and the sum are convergent for r > a and diverge at r = a. For points near the sphere the part (33) behaves as 1/(r − a)^{β₁}, where β₁ is an integer which depends on the specific model of the core; one has β₁ ≤ D − 1. The exception is the case of a core model for which the leading term in the uniform asymptotic expansion of the function K̃_{ν_l}(za) for large l vanishes. The latter takes place for an interior radial function with the asymptotic behavior R_l(a, lz/σ) ∼ −(l/σ)√(1 + z²) for large l. For the case of a massless scalar, the asymptotic behavior of the part (33) at large distances from the sphere can be obtained by introducing a new integration variable y = zr and expanding the integrand in powers of a/r. The leading contribution for the summand with a given l is of order (a/r)^{2ν_l+D−1} [assuming that ν_l ≠ 0 and R_l(a, 0) ≠ ±ν_l], and the main contribution comes from the l = 0 term. Comparing this with the part ⟨ϕ²⟩_{m,ren}, we see that for ν₀ > 0 the VEV of the field square at large distances from the core is dominated by the part corresponding to the geometry of the point-like global monopole. For the case ν₀ = 0 the ratio ⟨ϕ²⟩_c/⟨ϕ²⟩_{m,ren} decays logarithmically, and long-range effects of the monopole core appear, similar to those for the geometry of a cosmic string [21,22] (see also the discussion in Ref. [18] for the model with de Sitter spacetime inside the core).
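The exponential factor controlling convergence can be verified numerically. The sketch below (with illustrative values of ν_l, a and r) checks that the combination I_ν(za)K²_ν(zr)/K_ν(za), which sets the large-z behavior of the core-induced integrands, decays like e^{2z(a−r)} up to slowly varying prefactors:

```python
import numpy as np
from scipy.special import iv, kv

nu, a, r = 0.5, 1.0, 1.5          # illustrative order and radii, r > a
z = np.linspace(5.0, 40.0, 8)
factor = iv(nu, z * a) / kv(nu, z * a) * kv(nu, z * r) ** 2
# The ratio tends to 1 as z grows (logarithmic prefactors decay away).
print(np.log(factor) / (2.0 * z * (a - r)))
```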
The case ν₀ = 0 is realized by special values of the parameters satisfying the condition (1/σ² − 1)ξ = −ξ_{D−1}. For a massive field, assuming that mr ≫ 1, the main contribution to the integral over z in Eq. (33) comes from the lower limit, and to leading order the VEV is exponentially suppressed. Consider now the limit σ ≪ 1 for a fixed value of r. In accordance with Eq. (6) this corresponds to large values of the scalar curvature and, hence, to strong gravitational fields. To satisfy condition (13) we will assume that ξ ≥ 0. For ξ > 0, from Eq. (12) one has ν_l ≫ 1, and after introducing in Eq. (33) a new integration variable y = z/ν_l, we can replace the modified Bessel functions by their uniform asymptotic expansions for large order. The main contribution to the sum over l comes from the summand with l = 0, and the core-induced VEV ⟨ϕ²⟩_c is suppressed by the factor exp[−(2/σ)√(n(n+1)ξ) ln(r/a)]. For ξ = 0 and σ ≪ 1, for the terms with l ≠ 0 one has ν_l ≫ 1 and the corresponding contribution is again exponentially small. For the summand with l = 0, to leading order in σ we have ν_l = n/2 and ⟨ϕ²⟩_c ∼ 1/σ^{D−1}. Hence, we conclude that in the limit of strong gravitational fields the behavior of the VEV ⟨ϕ²⟩_c is completely different for minimally and non-minimally coupled scalars. Now we turn to the investigation of the VEV of the energy-momentum tensor in the region r > a. Having the Wightman function and the VEV of the field square, these VEVs are evaluated on the basis of formula (36). As with the Wightman function, the components of the vacuum energy-momentum tensor can be presented in a decomposed form in which ⟨0_m|T_ik|0_m⟩ is the vacuum energy-momentum tensor for the geometry of a point-like global monopole and the part ⟨T_ik⟩_c is induced by the core. In accordance with the symmetry of the problem, both these tensors are diagonal. For massless fields the VEV of the energy-momentum tensor for the point-like global monopole geometry is investigated in Refs. [5]-[8]; the corresponding renormalized components have a structure similar to that given above for the field square, with coefficients q_ik depending only on the parameters σ and ξ, and with q^{(2)}_ik = 0 for even D. Substituting the expressions for the Wightman function and the VEV of the field square into formula (36), for the part of the energy-momentum tensor induced by the non-trivial core structure one obtains expression (39) (no summation over i), where for a given function f(y) the corresponding notations are introduced, with ξ̃ = 4(n+1)ξ − n; in Eq. (42), i = 2, 3, ..., D. It can be seen that the components (39) satisfy the continuity equation ∇_k⟨T^k_i⟩_c = 0, which for the geometry under consideration reduces to a single nontrivial equation. The core-induced parts ⟨T^k_i⟩_c are finite everywhere outside the core, r > a, and diverge on the core boundary. Near this boundary the main contribution comes from large values of l, and to find the corresponding asymptotic behavior we can use the uniform asymptotic expansions for the modified Bessel functions. To leading order one finds ⟨T^k_i⟩_c ∼ 1/(r − a)^{β₂} for the energy density and the azimuthal stress, with β₂ ≤ D + 1. An exception is the special case of a core model for which the leading term in the uniform asymptotic expansion of the function K̃_{ν_l}(za) vanishes. At large distances from the core boundary, r ≫ a, and for a massless scalar field, the main contribution to the VEV ⟨T^k_i⟩_c comes from the l = 0 summand.
Under the assumptions ν₀ ≠ 0 and R₀(a, 0) ≠ ±ν₀, the leading term of the corresponding asymptotic expansion behaves like ⟨T^k_i⟩_c ∼ (a/r)^{2ν₀+D+1}. For a massive scalar field, under the condition mr ≫ 1, the main contribution to the integral over z in Eq. (39) comes from the lower limit; by using the asymptotic formulae for the function K_{ν_l}(zr) at large argument, one finds exponentially suppressed components to leading order, the radial stress being suppressed by an additional factor 1/mr. Now let us consider the VEV of the energy-momentum tensor in the limit σ ≪ 1 for fixed r > a. For ξ > 0, by calculations similar to those given above for the field square, one finds that the core-induced VEVs are suppressed by the factor exp[−(2/σ)√(n(n+1)ξ) ln(r/a)] and the vacuum stresses are strongly anisotropic: ⟨T¹₁⟩_c/⟨T²₂⟩_c ∼ σ. For a minimally coupled scalar, ξ = 0, the leading term of the asymptotic expansion in σ comes from the l = 0 summand in Eq. (39) with ν_l = n/2. This term behaves as σ^{1−D}.
Core with an infinitely thin shell
The results of the previous section can be generalized to models where an additional infinitely thin spherical shell, with surface energy-momentum tensor τ^k_i, is located at r = a. We denote by n_i the normal to the shell, normalized by the condition n_i n^i = −1 and assumed to point into the bulk on both sides. From the Israel matching conditions one obtains a jump condition in which the curly brackets denote summation over the two sides of the shell, h_ik = g_ik + n_i n_k is the induced metric on the shell, K_ik = h^r_i h^s_k ∇_r n_s its extrinsic curvature, and K = K^i_i. For the region r ≤ a one has n_i = δ¹_i e^{v(r)}, and the nonzero components of the extrinsic curvature follow from the interior metric functions. The corresponding expressions for the region r ≥ a are obtained by taking u(r) = v(r) = 0, w(r) = ln(σr) and changing the signs of the components of the extrinsic curvature tensor. From the matching conditions (46) we then find the surface energy-momentum tensor (no summation over i), where f′(a−) is understood in the sense lim_{r→a−0} f′(r). The discontinuity of the functions u′(r) and w′(r) at r = a leads to a delta-function term in the Ricci scalar and, hence, in equation (11) for the radial eigenfunctions. Note that the expression in the square brackets is related to the surface energy-momentum tensor through its trace τ. Due to the delta-function term in the equation for the radial eigenfunctions, these functions have a discontinuity in their slope at r = a. The corresponding jump condition is obtained by integrating equation (11) through the point r = a. The coefficients in formulae (14) for the eigenfunctions are now determined by the continuity condition for the radial eigenfunctions and by the jump condition for their radial derivative. It can be seen that the corresponding eigenfunctions are given by the same formulae (17) and (22) with a new barred notation. Consequently, the parts of the Wightman function and of the VEVs of the field square and the energy-momentum tensor induced by the core of finite thickness are given by formulae (29), (33) and (39), where the tilted notation is defined by Eq. (30) with the coefficient function given by Eq. (54), which contains an additional term proportional to the trace of the surface energy-momentum tensor. The trace of the surface energy-momentum tensor in this expression is related to the components of the metric tensor inside the core by formula (51).
Flower-pot model for global monopole
As an application of the general results given above, let us consider a simple example of a core model in which the spacetime inside the core is flat. The corresponding model for the cosmic string core was considered in Refs. [21,22,23], and following these papers we will refer to it as the flower-pot model. Taking u(r) = v(r) = 0, from the zero-curvature condition one finds e^{w(r)} = r + const. The value of the constant is found from the continuity condition for the function w(r) at the boundary, which gives const = (σ − 1)a. Hence, the interior line element has the form

ds² = dt² − dr² − [r + (σ − 1)a]² dΩ²_D .

In terms of the radial coordinate r, the origin is located at r = (1 − σ)a. From the matching conditions (48), (49) we find the corresponding surface energy-momentum tensor, Eq. (56); the corresponding surface energy density is positive for a global monopole with σ < 1. After this brief review, let us analyze for this model the influence of the monopole's core on the vacuum polarization effects. We will consider the exterior and interior regions separately.
Exterior region
In the region inside the core, the radial eigenfunctions regular at the origin are given by (57), proportional to r̃^{−n/2} J_{l+n/2}(λr̃), where r̃ = r + (σ − 1)a is the standard Minkowskian radial coordinate, 0 ≤ r̃ ≤ σa. In the Appendix we show that in the flower-pot model no bound states exist. Note that for an interior Minkowskian observer the radius of the core is σa. The normalization coefficient C_l is found from condition (22), with the barred notation (59) for the cylindrical functions. Note that J̄_{ν_l}(λa) = 0 for σ = 1. Hence, the parts of the Wightman function and of the VEVs of the field square and the energy-momentum tensor due to the non-trivial structure of the core in the flower-pot model are given by formulae (29), (33) and (39) respectively, where the tilted notations for the modified Bessel functions are defined by (30) with the coefficient (61). For σ = 1 one has Ĩ_{ν_l}(z) = 0 and, as expected, the core-induced VEVs vanish. Using the value of the standard integral involving the product of the functions K_ν given in Ref. [26], in the case of a massless scalar field the leading term of the asymptotic expansion in powers of a/r can be presented in closed form, with a coefficient A_n; note that for a minimally coupled scalar A_n = 0 and this leading term vanishes. In figure 1 we have plotted the core-induced part of the VEV of the field square as a function of the rescaled radial coordinate for minimally and conformally coupled D = 3 massless scalar fields in the flower-pot model with σ = 0.5. Now let us analyze the VEV of the energy-momentum tensor given by Eq. (39) with the tilted notation given by (30), (61). At large distances from the core, r ≫ a, the main contribution to the VEV of the energy-momentum tensor for a massless scalar field comes from the l = 0 summand. Under the assumption ν₀ ≠ 0, the leading terms of the asymptotic expansions can be written in closed form (no summation over i); the integrals in the corresponding formula can be evaluated by using the value of the integrals involving the product of the functions K_ν given in Ref. [26]. As we see, for ν₀ > 0 and at large distances from the sphere, the vacuum energy-momentum tensor is dominated by the part corresponding to the point-like monopole. As mentioned above, on the core surface the VEVs diverge. In the region near the core the main contribution comes from large values of l. By using the uniform asymptotic expansions for the modified Bessel functions it can be seen that to leading order ⟨ϕ²⟩_c ∼ (r − a)^{2−D}, while the components of the vacuum energy-momentum tensor behave as (r − a)^{−D} for the energy density and the azimuthal stress and as (r − a)^{1−D} for the radial stress. Due to these surface divergences, near the surface the total vacuum energy-momentum tensor is dominated by the parts induced by the finite thickness of the core. As an illustration, in figure 2 we present the core-induced vacuum energy density as a function of the radial coordinate for D = 3 minimally and conformally coupled massless scalar fields in the flower-pot model with σ = 0.5.
Interior region
Now let us consider the vacuum polarization effects inside the core for the flower-pot model. The corresponding eigenfunctions have the form given by Eq. (9) with f_l(r) = R_l(r, λ), where the function R_l(r, λ) is defined by formula (57). Substituting the eigenfunctions into the mode-sum formula, one finds the corresponding Wightman function. To find the renormalized VEVs of the field square and the energy-momentum tensor we need to evaluate the difference between this function and the corresponding function for the Minkowski bulk. The appropriate form of the Minkowskian part is obtained from Eq. (25) by taking σ = 1 and replacing r → r̃. By using the corresponding formula, for the subtracted Wightman function one finds expression (67), in which the barred notation is defined by Eq. (59). The integral in this formula is slowly convergent and the integrand is highly oscillatory. In order to transform the expression for the subtracted Wightman function into a more convenient form, we note that the identity (68) holds, where we have introduced the notation C{·,·} for the corresponding bilinear combination of cylinder functions; in terms of this notation the normalization takes a compact form. We add the left-hand side of Eq. (68), taken at z = λa, as a coefficient to the term π²/4 in the square brackets of Eq. (67). After this replacement, the term in the square brackets is written in a form (71) in which both terms in the sum over s on the right are separately regular at the zeros of the function C{J_{l+n/2}(σλa), J_{ν_l}(λa)}. Substituting (71) into formula (67), we rotate the integration contour in the complex λ plane by the angle π/2 for s = 1 and by the angle −π/2 for s = 2. Under the condition r̃ + r̃′ + |t − t′| < 2σa the contribution from the semicircle with radius tending to infinity vanishes; since we consider points inside the core, this condition is satisfied in the coincidence limit. The integrals over the segments (0, im) and (0, −im) of the imaginary axis cancel out, and after introducing the modified Bessel functions the subtracted Wightman function can be presented in the form (72), with the notation

U_l(σ, z) = 1/σ + [C{I_{l+n/2}(σz), K_{ν_l}(z)} C{K_{l+n/2}(σz), I_{ν_l}(z)}] / [C{I_{l+n/2}(σz), I_{ν_l}(z)} C{I_{l+n/2}(σz), K_{ν_l}(z)}] .
Hence, in the limit σ → 0 for a fixed core radius σa, the core-induced part of the renormalized VEV of the field square inside the core tends to a finite limiting value. For large values of the mass, assuming that m(σa − r̃) ≫ 1, it can be seen that ⟨ϕ²⟩_ren is suppressed by the factor e^{−2m(σa−r̃)}. In figure 3 we have plotted the renormalized VEV ⟨ϕ²⟩_ren inside the core of the flower-pot model with σ = 0.5 as a function of r̃/σa for minimally and conformally coupled massless scalars. Again we observe a strong dependence of this quantity on the curvature coupling parameter. The renormalized VEV of the energy-momentum tensor is found by using formula (36) with the subtracted Wightman function, which leads to formula (79) (no summation over i); the behavior at the core center is determined by the terms with l = 0 and l = 1. Note that for the conformally coupled massless scalar at the center one has ⟨T⁰₀⟩_ren = −D⟨T¹₁⟩_ren, which can also be obtained directly from the zero-trace condition. Near the core surface the components of the vacuum energy-momentum tensor behave as (a − r)^{−D} for the energy density and the azimuthal stress and as (a − r)^{1−D} for the radial stress. As in the case of the field square, in the limit σ → 0 for a fixed core radius σa, the core-induced part of the vacuum energy-momentum tensor tends to a finite limiting value. This limiting value is obtained from formula (79) by making the replacement U_l(σ, za) → U_{l+n/2}(zσa)/I²_{l+n/2}(zσa). As in the case of the field square, for large values of the mass of the field quanta the VEV ⟨T^k_i⟩_ren is exponentially suppressed by the factor e^{−2m(σa−r̃)}. The dependence of the renormalized interior vacuum energy density on the radial coordinate is presented in figure 4 for minimally and conformally coupled massless scalar fields in D = 3 for the geometry of a global monopole with σ = 0.5.
Conclusion
In the present paper we have considered the one-loop vacuum effects for a massive scalar field with a general curvature coupling parameter on the background of a (D+1)-dimensional global monopole with non-trivial core structure. Previous papers on the vacuum polarization by the gravitational field of the global monopole are concerned with the idealized point-like model, in which the curvature has a singularity at the origin. The exception is Ref. [18], where the vacuum densities for a massless scalar field are studied outside a monopole core with interior de Sitter geometry. Here we consider a general spherically symmetric static model of the core with finite thickness, described by the line element (2), and investigate the vacuum properties in both the exterior and interior regions. Among the most important characteristics of these properties, which carry information about the core structure, are the VEVs of the field square and the energy-momentum tensor. In order to obtain these expectation values we first construct the positive frequency Wightman function. In the region outside the core this function is presented as a sum of two distinct contributions. The first corresponds to the Wightman function for the geometry of a point-like global monopole, and the second is induced by the non-trivial structure of the monopole's core. The latter is given by formula (29), where the tilted notation is defined by formula (30) with the coefficient (31) for the model without an infinitely thin spherical shell on the boundary of the core. This coefficient is determined by the radial part of the interior eigenfunctions and describes the influence of the core properties on the vacuum characteristics in the exterior region. In the case of a core model with a thin shell on the boundary, the derivatives of the metric tensor components are discontinuous on the core surface. This leads to a delta-function type contribution to the Ricci scalar and, hence, to the equation for the radial eigenfunctions in the case of a non-minimally coupled scalar field. As a result, the radial eigenfunctions have a discontinuity in their slope at the core boundary. This leads to an additional term in the coefficient of the tilted notation which is proportional to the trace of the surface energy-momentum tensor (see Eq. (54)).
By using the formula for the Wightman function, in Section 3 we have investigated the influence of the non-trivial core structure on the VEVs of the field square and the energy-momentum tensor. As the local geometry in the exterior region is the same as that of the point-like global monopole model, the presence of the core does not lead to additional divergences for points outside the core. As a result, the core-induced parts of these VEVs are obtained directly from the corresponding part of the Wightman function, in the case of the field square, and by applying to this function a certain second-order differential operator and taking the coincidence limit, in the case of the energy-momentum tensor. These parts are given by formulae (33) and (39) for the field square and the energy-momentum tensor, respectively. They diverge as the boundary of the core is approached. Surface divergences in the VEVs of local observables are well known in quantum field theory with boundaries and have been investigated for various boundary geometries. We have investigated the asymptotic behavior of the core-induced VEVs near the core boundary and at large distances from the core. In particular, at large distances and for a massless scalar field with ν₀ > 0, the ratio of the core-induced and point-like monopole parts decays as (a/r)^{2ν₀} for both the field square and the energy-momentum tensor. For the special case ν₀ = 0 this ratio decays logarithmically and long-range effects of the monopole's core appear. In the limit of strong gravitational fields, corresponding to small values of the parameter σ, the behavior of the core-induced parts is completely different for minimally and non-minimally coupled fields: the corresponding VEVs are suppressed by the factor exp[−(2/σ)√(n(n+1)ξ) ln(r/a)] for the non-minimally coupled scalar and behave like σ^{1−D} for the minimally coupled field.
As an example of the application of the general results, in Section 5 we have considered a simple core model with flat spacetime inside the core, the so-called flower-pot model. The corresponding surface energy-momentum tensor on the boundary of the core is obtained from the matching conditions and has the form given by Eq. (56). The core-induced parts of the exterior VEVs in this model are obtained from the general results by taking the coefficient function of the tilted notation from Eq. (61). For the flower-pot model we have also investigated the vacuum densities inside the core. Though the spacetime geometry inside the core is Minkowskian, the non-trivial topology of the exterior region induces vacuum polarization effects in this region as well. In order to find the corresponding renormalized VEVs of the field square and the energy-momentum tensor we have derived a closed formula, Eq. (72), for the difference between the interior Wightman function and the Wightman function for Minkowski spacetime. The subtracted function is finite in the coincidence limit and can be used directly for the evaluation of the VEVs of the field square and the energy-momentum tensor; the latter quantities are given by formulae (74) and (79). As in the case of the exterior region, we have considered various limiting cases in which the general formulae simplify. In particular, we have shown that in the limit σ ≪ 1, at a fixed value of σa (the core radius for an internal Minkowskian observer), the renormalized VEVs of the field square and the energy-momentum tensor tend to finite limiting values. From the continuity of the radial eigenfunctions and of their radial derivative one sees that the possible bound states are solutions of the corresponding matching equation. By evaluating the integrals in this formula it can be seen that this term exactly cancels the contribution (89) coming from the corresponding bound state (for a similar cancellation in the Casimir effect with Robin boundary conditions see Ref. [27]). Hence, we conclude that the formulae given above for the core-induced parts of the VEVs are valid in the presence of bound states as well.
In order to see the possibility of the appearance of bound states in the flower-pot model, we note that, introducing the function F_l(r) = r_*^{(D−1)/2} f_l(r) with r_* = r outside the core and r_* = r̃ inside the core, equation (11) for the radial part of the eigenfunctions is written in the form of a Schrödinger equation. The corresponding effective potential is equal to [ν²_l + (n − 1)/4]/σ²r² in the exterior region and to [l(l + n) + (n² − 1)/4]/r̃² in the interior region. Under the conditions ν²_l ≥ 0 and n > 0 assumed earlier, the potential is non-negative and, hence, in the flower-pot model no bound states exist. | 2014-10-01T00:00:00.000Z | 2006-07-06T00:00:00.000 | {
"year": 2006,
"sha1": "5d537132026f1bcdb7482d2f0e7776e3ab2ea8bb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0607036",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0cec15c070e8b77e48cc0c1143b4556546c361b1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
53716067 | pes2o/s2orc | v3-fos-license | Novel upregulation of amyloid-β precursor protein (APP) by microRNA-346 via targeting of APP mRNA 5′-untranslated region: Implications in Alzheimer’s disease
In addition to the devastating symptoms of dementia, Alzheimer's disease (AD) is characterized by accumulation of the processing products of the amyloid-β (Aβ) peptide precursor protein (APP). APP's non-pathogenic functions include regulating intracellular iron (Fe) homeostasis. MicroRNAs are small (~20 nucleotide) RNA species that instill specificity into the RNA-induced silencing complex (RISC). In most cases, RISC inhibits mRNA translation through the 3′-untranslated region (UTR) sequence. By contrast, we report a novel activity of miR-346: specifically, it targets the APP mRNA 5′-UTR to upregulate APP translation and Aβ production. This upregulation is reduced but not eliminated by knockdown of argonaute 2. The target site for miR-346 overlaps with active sites for an iron-responsive element (IRE) and an interleukin-1 (IL-1) acute box element. IREs interact with iron regulatory protein 1 (IRP1), an iron-dependent translational repressor. In primary human brain cultures, miR-346 activity required chelation of Fe. In addition, miR-346 levels are altered in late Braak stage AD. Thus, miR-346 plays a role in upregulation of APP in the CNS and participates in maintaining APP regulation of Fe, which is disrupted in late stages of AD. Further work will be necessary to integrate other metals and IL-1 into the Fe-miR-346 activity network. We thus propose a "FeAR" (Fe, APP, RNA) nexus in the APP 5′-UTR that includes an overlapping miR-346 binding site and the APP IRE. When a healthy FeAR nexus exists, the activities of miR-346 and IRP/Fe interact to maintain APP homeostasis. Disruption of an element that targets the FeAR nexus would lead to pathogenic disruption of APP translation and protein production.
Introduction
Alzheimer's disease (AD) is a neurodegenerative disorder most typical of old age (65+). The disease is characterized by extracellular neuritic plaques that consist mostly of amyloid-β (Aβ) peptide. Neurofibrillary tangles (NFT) of hyperphosphorylated tau occur within neurons, whereas gliosis, neuroinflammation, and synaptic loss are also evident in the hippocampi and brain cortices of affected individuals [1,2]. Although autosomal dominantly inherited (familial) forms of AD exist, they constitute no more than 5% of AD cases [2]. AD (familial and sporadic) is influenced by multiple genetic and environmental factors, and these factors are considered particularly influential in sporadic AD [3,4], which requires the study of multiple molecular targets, mechanisms, pathways, and therapeutic strategies [5][6][7][8]. Significant evidence supports an Aβ-centric view of AD: for example, carriers of a protective Aβ precursor protein (APP) polymorphism, APP A673T (Icelandic) [9], have reduced incidence of AD, reduced Aβ levels throughout their lives, and reduced Aβ aggregation. The Aβ peptide is cleaved from APP by β-secretase (BACE1) and the γ-secretase complex [10].
APP has non-pathogenic functions. Although Aβ accumulation is a typical pathological feature of AD, the instigating disease mechanism is still very poorly understood. How could disruption of APP in the normal brain contribute to neuropathogenesis? A vital physiological role for APP is metal regulation, including ferrohomeostasis [11,12]. Further, Fe stimulates production of APP protein [13][14][15]. This is particularly relevant given ample evidence of Fe dyshomeostasis in AD [13,16]. Notably, non-amyloidogenic processing of APP is enhanced by increasing Fe levels, but only up to a threshold, at which point, additional Fe inhibits non-amyloidogenic APP processing [17]. Monomeric Aβ reduces oxidative stress brought about by metals, in particular, monomeric Aβ inhibits reduction of Fe(III) and prevents lipid peroxidation induced by Fe(II) [18]. Current work in the regulation of APP production by Fe has concentrated on an iron-responsive element (IRE) in the APP mRNA 5′-UTR [14,15,19,20].
We have previously shown the regulatory effects of several microRNA (miRNA) species on AD-associated gene products, including miR-101 and miR-153, which act on the APP 3′-UTR [21,22], and miR-339-5p, which acts on the 3′-UTR of the BACE1 transcript [23]. In this context, miRNAs are a unique class of small (~22 nt), non-coding RNA that fine-tune gene expression. In particular, miRNAs appear in complex interactive regulatory networks that govern both normal function and sporadic diseases of the central nervous system [24]. Specific miRNAs may even "co-dispose" toward apparently disparate disorders, such as AD and pulmonary fibrosis [25]. Mature miRNA often binds a protein of the argonaute (AGO) family to form the RNA-induced silencing complex (RISC). The miRNA allows RISC to recognize sites of imperfect complementarity on target mRNA transcripts. In essence, a specific miRNA is a "socket" that grants sequence specificity. Most known miRNA target sites are in the 3′-untranslated regions (UTRs) of mRNAs. RISC typically inhibits protein synthesis by repressing translation or destabilizing the transcript. APP [21,22] and BACE1 [23] are among known miRNA targets in AD.

Fig. 1 miR-346 targets human APP 5′-UTR via a target site overlapping a known iron-responsive element (IRE). a Schematic of the APP transcript indicating relative sizes of 5′-UTR, coding sequence (CDS), and 3′-UTR. Locations of miR-101, -153, and -346 binding sites are also indicated. Binding sites for multiple other miRNAs are omitted for clarity. b Diagram indicating the miR-346 target site in the 5′-UTR, along with a known IRE and an interleukin-1 acute box (IL-1) that each partially overlap the miR-346 site. The IL-1 acute box reference consensus motif is solid light blue, with the remainder of the APP 5′-UTR fragment that responded to IL-1 treatment indicated with a dashed line. c Sequence and predicted base-pairing of human miR-346 with its predicted target site in the human APP 5′-UTR, including the seed sequence interaction (red box). Sequences from multiple mammalian species, orthologous to the predicted miR-346 target site in the human APP 5′-UTR, are shown. Red text highlights nucleotide differences of other species' sequences when compared to the human sequence. Bold, italicized, black text in the human APP 5′-UTR sequence represents a fragment of the functional IRE consensus sequence. d APP 5′-UTR reporter construct containing the APP 5′-UTR sequence inserted upstream of a firefly luciferase CDS. The predicted target site in the 5′-UTR reporter construct was mutated by cassette mutagenesis. Red text highlights mutations introduced in the seed sequence. e Wildtype and target site mutant reporter luciferase expression. f WT 5′-UTR APP reporter construct co-transfected with miR-346 along with either 200 nM negative control target protector or putative miR-346-APP 5′-UTR target protector. *p < 0.05, n = 6. "NC TP": negative control target protector; "346 TP": target protector for the APP 5′-UTR recognition site of miR-346.
Our process for evaluating the impact of miRNAs on APP expression began with non-presumptive in silico database comparisons between 5′-UTR and 3′-UTR sequences of genes of interest (e.g., APP and BACE1) vs. known miRNA seed sequences [26]. We not only predicted but biologically tested multiple potential miRNA regulators of APP [21][22][23]; miR-346 was found among the database predictions.
Interestingly, miR-346 may have broad neuropsychiatric influence. An analysis of predicted miRNA:mRNA interactions for schizophrenia-associated gene products revealed that miR-346 contains a higher rate of predicted interactions than expected by chance [27]. Of greater interest, miR-346 expression decreased in the brains of schizophrenic and bipolar patients relative to control patients [27]. Paradoxically, elevated miR-346 has also been reported in the blood of schizophrenia patients, with strong diagnostic utility (AUC 0.713; specificity 90.2%) [28]. The coding sequence for pri-miR-346 is hosted in intron 2 of a known schizophrenia-susceptibility gene, glutamate receptor delta 1 subunit (GRID1) [27]. However, expression of miR-346 appears to be driven independently from GRID1 expression, based on miR-346-GRID1 correlation analyses [27,28]. Although no specific association (risk or protective) has been identified near the GRID1 locus for AD, it may be noteworthy that genetic risks for schizophrenia and AD may be at least somewhat inversely related [30], although specific genes highlighted in the reference are not reported to be regulated by miR-346.
We now demonstrate herein unique characteristics for miR-346. First, unlike most miRNAs, miR-346 interacts with the APP 5′-UTR (Fig. 1a). Second, miR-346 upregulates APP mRNA translation. Third, the specific effect of miR-346 on APP expression is enhanced by intracellular iron chelation with deferoxamine in human primary neuronal enriched cultures. Finally, the target site for miR-346 overlaps with active sites for iron response protein 1 (IRP1) and an interleukin-1 (IL-1) acute box (Fig. 1b). In addition, this segment of the APP 5′-UTR may respond to other cytokines, including transforming growth factor (TGF)α and TGFβ [89].
We chose to do the bulk of our work in human primary neuronal enriched cultures because our characterization revealed critical similarities to active neurons within an accompanying matrix of cells, rendering these cultures particularly valuable for neurological research. These cultures were viable in vitro for at least 40 days. Cells displayed neuronal morphology, with a network of processes [31]. Immunocytochemistry revealed the presence of pan-neuronal, astrocytic (GFAP) [31], and neuroprogenitor (nestin-1) (Supplemental Fig. 1) markers, distinct to individual cells. Protein characterization of cultures showed the presence of neuron-specific enolase, GFAP, and synaptosome-associated protein-25 [31]. The cultures contain serotonergic, dopaminergic, and GABAergic neuronal cells, although the preponderance of each changes with culture age [31]. Of particular note, these cultures contained cells that were not only morphologically and biochemically neuronal; the cultures also had neuronal functional responses, as measured by KCl depolarization [31]. Over time, the mature neuronal population within the cultures increased [31]. Finally, the cultures were practical for transfection studies [31]. As such, we deemed them an appropriate model for the present work of exploring neuronal effects of miR-346 upon the APP 5′-UTR.
Based on our present work, we propose a "FeAR" (Fe, APP, RNA) nexus in the APP 5′-UTR that comprises an overlapping miR-346-binding site and the APP IRE. When a "healthy FeAR" exists, activities of miR-346 and IRP/Fe interact to maintain APP homeostasis. Disruption of an element that targets the FeAR nexus would lead to pathogenic disruption of APP translation and protein production.
Materials and methods
Prediction of miR-346 binding site in APP 5′-UTR

We scanned the APP 5′-UTR with the miRanda utility on the RegRNA web server [26] to determine potential miRNA recognition sequences.
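The actual site prediction used miRanda/RegRNA, as described above. Purely as an illustration of the underlying idea, the sketch below searches a UTR for Watson-Crick complements of a miRNA seed (nucleotides 2-8). The UTR string and the resulting coordinates are invented placeholders, not the real APP 5′-UTR; the miR-346 sequence is the miRBase mature sequence as we understand it, and real prediction tools additionally score 3′-end pairing and thermodynamics.

```python
# Minimal seed-match scan (illustrative only; the study used miRanda/RegRNA).
# The UTR below is a toy placeholder, NOT the actual APP 5'-UTR sequence.

COMP = {"A": "T", "C": "G", "G": "C", "T": "A"}

def seed_sites(utr_dna, mirna_rna, seed=(2, 8)):
    """Return 0-based positions in utr_dna complementary to the miRNA seed."""
    seed_dna = mirna_rna[seed[0] - 1 : seed[1]].replace("U", "T")
    target = "".join(COMP[b] for b in reversed(seed_dna))  # reverse complement
    return [i for i in range(len(utr_dna) - len(target) + 1)
            if utr_dna[i : i + len(target)] == target]

mir346 = "UGUCUGCCCGCAUGCCUGCCUCU"   # hsa-miR-346 mature sequence (miRBase)
toy_utr = "GCAGGGCAGACGGTTTCCTCGGC"  # placeholder sequence
print(seed_sites(toy_utr, mir346))   # -> [4]: one seed-complementary site
```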
Alignment of mammalian APP 5′-UTR sequences
Sequences corresponding to the APP 5′-UTR from 28 species were downloaded from GenBank and aligned with WEBPRank [32]. Total information content of the alignment was calculated as $\sum_i \bigl(2 + \sum_{b=A}^{T} f_{b,i}\log_2 f_{b,i}\bigr)$ [33], where f_{b,i} is the relative frequency of a nucleotide (A, C, G, T) b at position i. The SE for information was estimated by the standard small-sample correction, where n is the number of sequences at a position without a gap.
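As a worked example of the information-content formula above, the sketch below computes per-column information (in bits) for a DNA alignment column, excluding gaps as described in the text; the small-sample SE correction is omitted here for simplicity.

```python
import math

def column_information(column):
    """Information (bits) of one alignment column: 2 + sum_b f_b * log2(f_b).

    Gap characters are excluded; returns (info_bits, n_ungapped)."""
    bases = [b for b in column.upper() if b in "ACGT"]
    n = len(bases)
    if n == 0:
        return 0.0, 0
    info = 2.0
    for b in "ACGT":
        f = bases.count(b) / n
        if f > 0:
            info += f * math.log2(f)
    return info, n

print(column_information("AAAA-A"))  # (2.0, 5): perfectly conserved column
print(column_information("ACGT"))    # (0.0, 4): maximally variable column
```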
Generation of mutant miR-346 site APP 5′-UTR reporter clone

The pGAL reporter construct was used to study regulatory effects on the APP 5′-UTR [13]. Mutagenesis at a predicted miR-346 target site in this 5′-UTR proved refractory to standard site-directed mutagenesis procedures. Therefore, cassette mutagenesis was employed instead. In this form of mutagenesis, the region of plasmid DNA to be mutated was excised by restriction digest. A mutagenized version of this cassette was synthesized, annealed, digested, and ligated into the linearized vector. The oligonucleotides to replace the miR-346 target site in the APP 5′-UTR were obtained from Integrated DNA Technologies (Coralville, IA). We double-digested pGAL with HindIII and NcoI. The digested plasmid was resolved by agarose gel electrophoresis; bands containing linearized plasmid were excised and purified with the QIAquick Gel Extraction kit. Mutant oligonucleotides were designed so that, once annealed, the 5′ and 3′ ends would form sticky ends to match the HindIII and NcoI sites in the linearized pGAL. Oligonucleotides were annealed and directly ligated into linearized pGAL by combining annealed cassette, linearized pGAL, T4 ligase buffer, and T4 ligase in a 20 µl final volume and incubating at room temperature for 2 h. Approximately 1 µl of ligation reaction mix was then transformed into Z-competent Escherichia coli and plated overnight. True clones were confirmed by direct sequencing of plasmid DNA. The mutagenic oligonucleotides were miR-346mut 5′: 5′-AGCTTAGTTTCCTCGGCAGCGGTAGGCGAGAGCACGCGGAGGAGCGTGCGCGGGGGCCCCGGGAGACGGCGGCGGTGGCGGCGCGAATGAGGCAAGGACGCGGCGGATCCCACTCGCACAGCAGCGCACTCGGTGCCCCGCGCAGGGTCGCGC-3′ and miR-346mut 3′: 5′-CATGGCGCGACCCTGCGCGGGGCACCGAGTGCGCTGCTGTGCGAGTGGGATCCGCCGCGTCCTTGCCTCATTCGCGCCGCCACCGCCGCCGTCTCCCGGGGCCCCCGCGCACGCTCCTCCGCGTGCTCTCGCCTACCGCTGCCGAGGAAACTA-3′. Boldface indicates specifically mutated nucleotides. The sequencing oligonucleotide was 5′-CTGCTGTGCGAGTGGGAT-3′.
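The two mutagenic oligonucleotides above should anneal into a duplex whose interiors are reverse complements, leaving a HindIII-compatible 5′ overhang (AGCT) on one end and an NcoI-compatible 5′ overhang (CATG) on the other. The sketch below checks this property for any such oligo pair; the short sequences used are truncated placeholders rather than a re-typing of the full oligos.

```python
# Sanity check for a cassette-mutagenesis oligo pair: after annealing, the
# duplex should carry HindIII (AGCT) and NcoI (CATG) 5' overhangs, with the
# interior regions reverse-complementary. Sequences below are placeholders.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s: str) -> str:
    return s.translate(COMP)[::-1]

def check_cassette(top: str, bottom: str) -> bool:
    if not (top.startswith("AGCT") and bottom.startswith("CATG")):
        return False  # missing HindIII/NcoI 5' overhangs
    # Everything 3' of each overhang must base-pair in the annealed duplex.
    return top[4:] == revcomp(bottom[4:])

top = "AGCT" + "TAGTTTCCTCG"             # truncated placeholder interior
bottom = "CATG" + revcomp("TAGTTTCCTCG")
print(check_cassette(top, bottom))        # True
```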
Human primary neuronal enriched cultures

We cultured human primary neuronal enriched ("human primary") cultures according to procedures we developed and reported [31]. Primary cultures were prepared from the brain parenchyma of aborted fetuses (80-100 days gestational age). The tissues were obtained from the Birth Defects Research Laboratory (BDRL) at the University of Washington with approval from the Indiana University Institutional Review Board (IRB). Fetal brain materials (10-20 g) were shipped overnight in chilled Hibernate-E medium (Invitrogen, Grand Island, NY) supplemented with 1 × B27 serum-free supplement (Invitrogen), 0.5 mM GlutaMAX (Invitrogen), and antibiotic-antimycotic solution (Cellgro).
Tissues were digested in 0.05% trypsin/0.53 mM ethylenediaminetetraacetic acid (EDTA) solution and incubated in a shaking water bath (150 rpm) at 37°C for 15 min. Trypsin-digested tissues were transferred to Hibernate-E and triturated several times with a siliconized, fire-polished pipette, followed by centrifugation at 400×g for 15 min. The cell pellet was resuspended in Hibernate-E and triturated once more, followed by centrifugation. The pellet was resuspended in culture medium (see below) and cells were counted by Trypan blue exclusion.
Cells were plated at a density of 2-4 × 10⁵ cells per well on poly-D-lysine-coated (Sigma-Aldrich, St. Louis, MO) 24-well plates in Neurobasal medium (Invitrogen), supplemented with 1 × B27, 0.5 mM GlutaMAX, 5 ng/ml basic fibroblast growth factor (bFGF, Invitrogen), and antibiotic-antimycotic cocktail. Half media changes were performed every fourth day of culture. Cell culture health was assessed by the CellTiter-Glo (CTG) luminescent cell viability assay (Promega, Madison, WI), which measures ATP generation.
For those experiments wherein cells were treated with deferoxamine mesylate (DFO; Sigma-Aldrich, St. Louis, MO), the appropriate volume of DFO was prepared from a 5 mg/ml stock solution in phosphate-buffered saline (PBS) and added to human primary cell culture plates approximately one hour prior to transfection; for HeLa cells, DFO was added to cultures 72 h before harvesting.
Transfection of DNA vectors or RNA oligonucleotides into cell lines and primary cultures
We transfected several commercially obtained miRNA and siRNA molecules (Supplemental Table 1). During all transfections, antibiotics were omitted from cell culture media. Lipofection was used for all transfections, either with Transfectin (Bio-Rad, Hercules, CA) or Lipofectamine RNAiMAX (Invitrogen). In all experiments where negative control RNA oligonucleotides (i.e., miRNA mimics, miRNA inhibitors, target protectors) were transfected, we used universal negative controls (Supplemental Table 1). These controls are not scrambled sequences and therefore do not necessarily have base composition identical to the experimental oligonucleotides for which they serve as controls.
In those experiments that used the pGAL reporter construct with the luciferase-expressing cassette or mutated pGAL, constructs were transfected into HeLa cells. HeLa cells (5 × 10⁴ cells per well) were cultured in white-walled 96-well plates, each well containing 100 µl of serum-supplemented media, and transfected with 150-300 ng of reporter constructs using Transfectin. Transfection complexes were prepared by incubating DNA in 20 µl per well of serum-free medium with 0.75 µl Transfectin per well for 15-20 min. The mixture was added directly to cells on-plate in serum-containing media. Luciferase assays were performed 48 h after transfection.
HeLa cells were co-transfected with reporter constructs and miRIDIAN miRNA mimics (Dharmacon, Lafayette, CO) by incubating HeLa cells cultured in 96-well plates (5 × 10⁴ cells per well) with 150 ng reporter DNA and 40 nM miRNA mimic using 0.2 µl Transfectin per well. Transfection complexes were prepared as described herein.
We did single transfections of Silencer Select siRNA (Applied Biosystems, Carlsbad, CA), miRNA mimics, or miRNA target protectors (Qiagen, Valencia, CA) into HeLa or U373 cells using RNAiMAX reagent (human primary cultures are discussed below). For most experiments, HeLa cells (1.35 × 10⁵ cells per well) and U373 cells (7.5 × 10⁴ cells per well) were cultured in 24-well plates and reverse-transfected [34]. In reverse transfections, transfection complexes are added to cultures at the same time as cells are plated; cells are initially transfected in suspension until they settle and adhere onto the plate. HeLa cells were transfected with either 20 nM siRNA, 50 nM miRNA mimic, or 100-1000 nM miRNA target protector (TP) using 0.5 µl RNAiMAX per well. Transfection complexes were prepared in 50 µl Opti-MEM serum-free media (Invitrogen) with 10-15 min incubation periods prior to mixing with cell suspensions. U373 cells were similarly transfected with 75 nM miRNA mimics using 3.5 µl RNAiMAX per well. In several cases, miRNA mimics were co-transfected into HeLa cells with siRNA or miRNA target protectors. In these cases, RNAiMAX levels were boosted to 1 µl per well to account for the increase in nucleic acid content.
Multiple batches of human primary cultures were transfected at days in vitro (DIV) 17 in 24-well plates. Cultures were transfected with 20 nM siRNA, 150 nM miRNA mimics, and 1000 nM LNA miRNA inhibitors (Exiqon, Woburn, MA), using 1.25 µl RNAiMAX per well. bFGF supplementation was omitted from media during transfections. In one series of experiments, human primary cultures were transfected with miRNA mimics in the presence of 150 µM DFO. The appropriate volume of DFO was prepared from a 5 mg/ml stock solution in PBS and added to cell culture plates approximately 1 h prior to transfection.
In all experiments employing transfection of small RNA oligonucleotides, transfection efficiency was assessed qualitatively by including a siRNA transfection (20 nM) against the gene product of interest. These siRNAs were validated in HeLa cells as capable of reducing APP or BACE1 protein and mRNA expression to < 5% of mock or negative control siRNA transfections.
Human brain samples
Two independent cohorts of brain specimens were utilized in this study. The first set of specimens was provided by Dr. Peter T. Nelson from the University of Kentucky Alzheimer Disease Brain Bank. These specimens were isolated from BA9 of the frontal cortex and consisted of both control (n = 5) and AD (n = 15) specimens. The specimens were age-matched, with a mean age of 84.0 ± 2.2 years for control specimens and 80.8 ± 1.7 years for AD specimens. All AD specimens had advanced AD neuropathology (Braak stage VI and CERAD (Consortium to Establish a Registry for Alzheimer's Disease) neuropsychological battery score C). The CERAD score combines quantification of neuritic plaques in specific brain regions with presence or absence of dementia. Importantly, all specimens were collected following a short postmortem interval (PMI; range 1.75-8 h). Finally, the AD component of this cohort consisted of three subgroups defined by history of treatment with AD medications: no history of AD medication (No Rx; n = 5), history of treatment with rivastigmine but not memantine (n = 5), and history of treatment with memantine but not rivastigmine (n = 5).
The second set of specimens originated from the Harvard Tissue Resource Center and was provided by Dr. P. Hemachandra Reddy. These specimens were also isolated from BA9 of the frontal cortex and consisted of control (n = 5) and AD (n = 15) specimens. Demographic details were previously published [35]. The AD group was further subdivided into three groups defined by stage of neurofibrillary pathology: Braak stage I/II (early AD; n = 5), Braak stage III/IV (definite AD; n = 5), and Braak stage V/VI (severe AD; n = 5). Therefore, this group consisted of specimens spanning the stages of AD progression. Analyses of this cohort were performed either by making comparisons across all Braak stages or by consolidating specimens into two distinct groups for comparison: control plus stage I/II vs. stages III-VI. The rationale for consolidating groups was to increase power of analysis by increasing sample size. Given that stage I/II specimens have only very mild AD pathology and represent a very early stage of the clinical disease, the assumption is that control and stage I/II specimens are more biochemically similar to one another than to either stage III/IV or V/VI specimens.
Specimens were initially pulverized using a stainless steel chamber pre-chilled with liquid nitrogen. Pulverized samples were quickly aliquoted and stored at −80°C, avoiding sample thawing. One aliquot of each sample was processed for protein analysis. This frozen aliquot was immersed in M-PER (ThermoFisher, Waltham, MA) supplemented with 0.1% SDS and protease inhibitor cocktail set III and immediately sonicated using a Sonifier Cell Disruptor 350 (Branson, St. Louis, MO) until visible clumps were no longer apparent. Lysates were then incubated with 50 U/ml Benzonase enzyme (EMD, Billerica, MA) for 10 min at 37°C to reduce nucleic acid content and associated viscosity. Lysates were centrifuged at 30,000×g for 2 h to clear debris. Cleared supernatants were collected and stored at −80°C for future protein analysis. For all brain studies, human brain specimens were analyzed in a blinded fashion, with diagnostic categories revealed only for data analysis after performing appropriate quality control checks and data normalization. Human brain specimens were provided via external investigators after collection from deceased donors and were provided with no identifying information. Therefore, research using these specimens was deemed not to be human subjects research as defined by HHS and was therefore exempt from institutional IRB approval.
Protein quantification, SDS-PAGE, and western blotting
Cell lysate protein concentrations were measured by bicinchoninic acid assay (Pierce, Rockford, IL) per the manufacturer's instructions. Protein concentrations were measured with 10 µl of lysate and 200 µl of working reagent at an absorbance of 570 nm with a microplate reader (Bio-Rad). All samples were analyzed in duplicate and absorbance values averaged. Concentrations were calculated by comparison to a bovine serum albumin standard curve.
Aβ ELISA analyses
Levels of Aβ40 were measured in the conditioned media (CM) of human primary culture and human brain autopsy samples using a sensitive and specific commercially available ELISA kit (IBL America, Minneapolis, MN). An equal volume of CM (25 µl) was loaded into a plate pre-coated with anti-human Aβ(35-40) antibody (clone 1A10) and incubated overnight. This kit uses HRP-conjugated anti-human Aβ(11-28) as the detection antibody. The overall assay was performed according to the manufacturer's instructions. In brief, CM was added onto pre-coated plates and incubated overnight at 4°C. The next day, plates were vigorously washed with buffer supplied by IBL in the kit and then incubated with detection antibody for approximately 1 h at 4°C. Plates were again vigorously washed and then incubated with the chromogenic substrate tetramethylbenzidine for 30 min in the dark. The chromogenic reaction was then stopped by the addition of stop solution, and absorbance at 450 nm was read using a Tecan GENios microplate reader. Aβ40 values (in pg/ml of CM) were calculated by comparison with an Aβ40 standard curve. This value was normalized to the total lysate protein yield from each well to control for variability attributable to differences in cell number and scaled relative to mock transfection values.
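A minimal sketch of the quantification step just described: interpolating Aβ40 concentrations from a standard curve and normalizing to lysate protein. A simple linear fit in log-concentration space is assumed here for brevity; commercial kits are often analyzed with four-parameter logistic fits instead, and all numerical values below are invented placeholders.

```python
import numpy as np

# Placeholder standard curve (pg/ml vs. A450); real values come from the kit.
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0])
std_a450 = np.array([0.10, 0.30, 0.50, 0.70, 0.90, 1.10])

# Linear fit of A450 vs. log10(concentration); a 4PL fit is typical in practice.
slope, intercept = np.polyfit(np.log10(std_conc), std_a450, 1)

def abeta40_pg_per_ml(a450: float) -> float:
    """Invert the standard curve for an unknown well."""
    return 10 ** ((a450 - intercept) / slope)

# Normalize to total lysate protein (µg/well) to control for cell number,
# then scale relative to the mock-transfected value, as in the text.
a450_unknown, protein_ug, mock_norm = 0.40, 55.0, 1.8
value = abeta40_pg_per_ml(a450_unknown) / protein_ug
print(value / mock_norm)
```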
RT-qPCR analysis of mRNA and miRNA
Both mRNA and miRNA levels were quantified by reverse transcription (RT) quantitative PCR (RT-qPCR). All RT and qPCR steps were performed at a dedicated PCR/RNA workbench with separate supplies to avoid DNA or RNA contamination. For miRNA quantification, stem-loop TaqMan assays were employed (Applied Biosystems, Carlsbad, CA).
Briefly, total RNA (10 ng) was converted to complementary DNA (cDNA) using TaqMan microRNA Reverse Transcription kit (Applied Biosystems) by combining RNA, miRNA-specific RT primer, MultiScribe reverse transcriptase, RNase inhibitor enzyme, dNTPs, reaction buffer and water per the manufacturer's protocol and incubating reaction mix on a thermocycler at 16°C for 30 min, 42°C for 30 min and 85°C for 5 min. The cDNA was subjected to qPCR using specific TaqMan hydrolysis probe assays (Applied Biosystems). The RT reaction mix (cDNA) was combined with TaqMan miRNA assay and TaqMan Universal PCR master mix (Applied Biosystems) per the manufacturer's protocol and analyzed on a 7300 Real-Time PCR instrument (Applied Biosystems). Each sample was analyzed in duplicate and signals averaged.
For mRNA quantification, standard mRNA TaqMan hydrolysis probe assays were utilized. Total RNA (10-75 ng) was converted to cDNA with the High Capacity RNA-to-cDNA kit (Applied Biosystems) by combining total RNA, RT enzyme mix, and RT reaction buffer per the manufacturer's protocol and incubating the reaction mix in a thermocycler at 37°C for 60 min and then 95°C for 5 min. The RT reaction mix (cDNA) was combined with TaqMan mRNA assay and TaqMan Universal PCR master mix as in the miRNA analyses. The PCR reactions were then analyzed on the 7300 Real-Time PCR instrument. Each sample was analyzed in duplicate and signals averaged.
Relative quantification was performed using a modified ΔCq method. Relative levels were calculated by taking the ratio of E_x^(ΔCt,x) for the gene of interest to E_y^(ΔCt,y) for the stable reference gene, where E_x and E_y are experimentally determined PCR amplification efficiencies for the gene of interest and reference gene, respectively. This is implemented in the qBase PLUS software used in these studies. In order to determine amplification efficiencies for each TaqMan assay, aliquots of every RNA sample in a given analysis were pooled and used to create a relative standard curve by serial dilution. This standard curve was then converted to cDNA and analyzed by qPCR in parallel with unknown samples. The slope of the plot of Ct versus standard curve dilution was used to calculate amplification efficiency. For miRNA relative quantification studies, RNU48, RNU49, RNU6B, and miR-16 were used for normalization. For mRNA relative quantification studies, glyceraldehyde 3-phosphate dehydrogenase (GAPDH), β2-microglobulin (B2M), β-actin, and TATA-box binding protein (TBP) were used for normalization. HPLC-purified synthetic oligoribonucleotide standards, identical in sequence to human miR-101, miR-153, miR-346, miR-339-5p, miR-124, and miR-1, were obtained commercially (Sigma and Integrated DNA Technologies). Oligoribonucleotides were resuspended and concentrations measured by A260 values. Standard curves with absolute copy counts were prepared by serial dilution, converted to cDNA, and analyzed by qPCR in parallel with unknown samples. Copy counts per reaction were determined from standard curve analysis. Copy counts were then presented as copies/15 pg total RNA as a rough estimate of copy counts per average human cell.
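The efficiency and relative-quantification arithmetic described above can be sketched as follows: E = 10^(−1/slope) from the dilution-series slope, then the efficiency-corrected ratio against the geometric mean of reference-gene terms, in the spirit of the qBase implementation. The Cq values below are invented for illustration.

```python
import numpy as np

def efficiency(log10_dilution, cq):
    """Amplification efficiency from a standard curve: E = 10**(-1/slope)."""
    slope, _ = np.polyfit(log10_dilution, cq, 1)
    return 10 ** (-1.0 / slope)

def relative_quantity(e_goi, dcq_goi, ref_terms):
    """Efficiency-corrected ratio E_x**dCq_x / geomean(E_y**dCq_y)."""
    ref = np.exp(np.mean([np.log(e ** d) for e, d in ref_terms]))
    return (e_goi ** dcq_goi) / ref

# Invented example: 10-fold dilution series; dCq = Cq(control) - Cq(sample).
dil = np.log10([1, 0.1, 0.01, 0.001])
cq = np.array([20.1, 23.4, 26.8, 30.1])   # slope ~ -3.33 -> E ~ 2.0
e = efficiency(dil, cq)
print(round(e, 2))
print(relative_quantity(e, 1.0, [(2.0, 0.2), (1.95, 0.1)]))
```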
Statistical analyses
Statistical analyses were performed using Prism GraphPad, SPSS, or R using Student's t test and linear or generalized linear models followed by post hoc Dunnett's t test, Šidák-corrected pairwise comparisons, Tukey's Honest Significant Difference test, or Student-Newman-Keuls, as appropriate. All tests used p ≤ 0.05 as the threshold for significance. Generalized linear models were used whenever data violated the fundamental assumptions of the t test or linear models (normality and homoscedasticity). Distribution families and links were chosen by application of the second-order Akaike information criterion [36]. In all cases, error bars represent the standard error of the mean. For human brain specimen analysis, the sample size was set to be sufficient to detect a 35% difference in means between groups. Specifically, each brain specimen cohort had 5 control (non-AD) and 15 or 20 disease patient specimens. This sample size was sufficient at 80% power to detect a 32% difference in means with a relative standard deviation of 20% for each group, with a type I error rate (alpha) of 5%. For cell culture experiments, sample size was determined from multiple previous works in cell culture that established reasonable sample sizes for our APP and other assays [21][22][23].
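The stated power calculation (80% power to detect a 32% difference in means with 20% relative SD per group at α = 0.05) corresponds to a standardized effect size of roughly 0.32/0.20 = 1.6. A quick check with a standard two-sample t-test power routine is sketched below; the group sizes used are illustrative choices for an unbalanced 5-vs-15 comparison, not a re-derivation of the authors' exact calculation.

```python
from statsmodels.stats.power import TTestIndPower

# 32% difference in means with 20% relative SD -> Cohen's d ~ 0.32/0.20 = 1.6.
d = 0.32 / 0.20
power = TTestIndPower().power(effect_size=d, nobs1=5, ratio=3.0, alpha=0.05)
print(round(power, 2))  # achieved power for n1 = 5 controls vs. n2 = 15 AD
```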
Proposed FeAR nexus is well-conserved
Sequence alignment revealed that the FeAR nexus was well-preserved among placental mammals, particularly primates (Fig. 1c, Supplemental Table 2). No homologies were found outside the Eutheria. When expressed in terms of information (bits), primate alignment conservation was 97.44 ± 1.58. Mammalian conservation was 97.72 ± 1.02. This corresponds to a relative information content of 91.9% ± 1.5% and 81.4% ± 0.9% vs. 100% for perfect conservation (106 bits maximum information for primates, 140 for all species). These calculations ignored the two large gaps in the alignment that only had sequences for dog and/or horse. Other gaps were accounted for by an increase in error term value for that position owing to smaller sample size.
miR-346 activity is through the predicted target site in the APP 5′-UTR

We used the predicted miR-346 target sequence in the APP 5′-UTR (Fig. 1c) to design a mutation (Fig. 1d) in a luciferase expression vector that contained the APP 5′-UTR fused between the SV40 promoter and the firefly luciferase reporter gene [37]. Co-transfection of HeLa cells with the wildtype and mutant luciferase vectors and miR-346 mimic resulted in a significant (p ≤ 0.05) increase in luciferase signal for the wildtype APP 5′-UTR and no alteration by miR-346 for the mutated APP 5′-UTR (Fig. 1e). When we co-transfected the wildtype APP 5′-UTR luciferase vector with miR-346 and a TP designed to block the interaction of miR-346 at the predicted APP 5′-UTR target site, we saw a significant reduction of the miR-346 mimic effect (Fig. 1f) vs. a negative control TP.

Fig. 2 b APP (~110-130 kDa by mAb22C11 probing) signal was normalized to α-tubulin protein (51 kDa) signal. APP siRNA significantly (p ≤ 0.05) depressed APP, whereas both miR-346 mimics significantly increased it, but each was not different from the other. Letters indicate pairwise statistical comparison (Tukey's) outcomes; samples sharing letters are not significantly different. c CellTiter-Glo (CTG) cell viability assay of transfected cell cultures. Transfections did not alter overall culture viability (no omnibus or pairwise significant differences). d RT-qPCR of miR-346 at 48 h post transfection (two technical replicates), normalized to the geometric mean of RNU48, RNU6B, and miR-16, further scaled relative to mock-transfected levels. RQ = relative quantification; *p < 0.05 relative to negative control-transfected cells. e APP mRNA RT-qPCR 48 h post transfection (n = 3), normalized to the geometric mean of β-actin, B2M, GAPDH, and TBP, further scaled relative to mock-transfected levels. f APP western blot of miR-346 target protection assay with increasing dose of the target protector and a fixed amount of miR-346. g Blots quantified by densitometric analysis; APP levels normalized to α-tubulin levels and scaled relative to mock transfection (n = 4). Linear analysis revealed a significant (p = 0.011) dose-response relationship between target protector and reduction of miR-346 activity. h CTG of target protector assay cell cultures. No effect of the target protector on culture viability was apparent. i Western blot of miR-346 treatment of U373 cell cultures. j Densitometry of APP for U373 cultures was adjusted for α-tubulin. Although NC mimic appeared to increase APP levels, miR-346 induced a greater increase. k ELISA of Aβ40 in CM of U373 cells transfected with mock, NC mimic, or miR-346. Transfection with miR-346 significantly (p < 0.05) increased levels of Aβ40 in CM.
miR-346 upregulates levels of APP in HeLa cells in a consistent fashion
Transfection of HeLa cells with 50 nM of two miR-346 mimics from different commercial sources (Fig. 2a) resulted in elevated (2-2.5-fold) levels of α-tubulin-normalized APP (Fig. 2b). CTG (measuring overall cell culture health) was not perturbed by this treatment (Fig. 2c). We confirmed successful delivery of miR-346 into HeLa cells by RT-qPCR 48 h post transfection (Fig. 2d). Normalized APP mRNA levels in HeLa cells, assayed by RT-qPCR 48 h post transfection, were unchanged (Fig. 2e). RT-qPCR expression levels were normalized to the geometric mean of β-actin, B2M, GAPDH, and TBP expression levels and scaled relative to mock-transfected levels. To confirm binding specificity, we further transfected HeLa cells with miR-346 along with increasing concentrations of a sequence-specific TP. Total transfected nucleic acid concentration was kept constant by adding adjusted amounts of "negative control target protector". We harvested and lysed cells 72 h post transfection, analyzed protein lysates by SDS-PAGE, and visualized APP and β-actin by western blot on the same membrane (Fig. 2f). We quantified by densitometric analysis, normalized APP levels to α-tubulin levels, and scaled relative to mock transfection (n = 4).

Endogenous cellular response to miR-346 is blocked by protection of target site in the APP 5′-UTR

We treated HeLa cells with 15 nM miR-346 mimic and increasing concentrations of TP. We found a significant (p = 0.011) inverse relationship between TP dose and miR-346 activity (Fig. 2g). However, CTG-based cell viability was not perturbed (Fig. 2h). The target protection assay established that miR-346 mimics also upregulated endogenous APP mRNA translation. Direct blockade of the specific miR-346 recognition site within the APP mRNA reduced miR-346 mimic activity in a specific dose-dependent fashion.
miR-346 activity exists in other cell lines
We transfected miR-346 mimics into human glioblastoma U373 cells and analyzed cell extracts on western blot (Fig. 2i). When densitometry was adjusted by corresponding α-tubulin signal, U373 results showed that miR-346 mimic treatment increased APP levels (Fig. 2j). We further evaluated CM for levels of secreted Aβ40 peptide by ELISA and found that miR-346 treatment significantly increased Aβ40 levels in CM of transfected U373 cells (Fig. 2k).
Activity of miR-346 in the APP 5′-UTR requires the conventional machinery of miRNA activity

To check the role of the RISC component AGO2 in miR-346 activity (Fig. 3a), we co-transfected HeLa cells with or without miR-346 mimic along with negative control, Dicer, or AGO2 siRNA, and measured APP levels of cell lysates 72 h post transfection by western blot (Fig. 3b), followed by densitometry and normalization to α-tubulin levels (Fig. 3c, d). After ANOVA testing of the siRNA × miRNA treatment interaction, we found a significant interaction (p = 0.010); Šidák-adjusted pairwise comparisons revealed that treatment with siRNA against AGO2 reduced but did not eliminate miR-346 activity (p < 0.05).
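The siRNA × miRNA interaction test described above corresponds to a standard two-way ANOVA with interaction; a sketch using statsmodels follows. The data frame contents are placeholders, not the study's densitometry values.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder densitometry values (APP / alpha-tubulin), not the real data.
df = pd.DataFrame({
    "app":   [1.0, 1.1, 2.3, 2.2, 0.9, 1.0, 1.4, 1.5],
    "sirna": ["NC", "NC", "NC", "NC", "AGO2", "AGO2", "AGO2", "AGO2"],
    "mirna": ["none", "none", "miR346", "miR346",
              "none", "none", "miR346", "miR346"],
})

# Two-way ANOVA with interaction: a significant sirna:mirna term indicates
# that AGO2 knockdown changes the effect of miR-346 on APP levels.
model = ols("app ~ C(sirna) * C(mirna)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```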
Levels of miR-346 and APP diminish as primary human brain cultures mature
We cultured human primary cells as described herein and harvested cultures at 7, 10, 14, 18, 22, and 26 DIV. We measured miR-346 levels by RT-qPCR and determined that miR-346 significantly decreased as cultures aged (Fig. 4a), proportionally to the square of days in culture. We measured APP by western blot [22] and determined that APP also decreased proportionally to the square of DIV (Fig. 4b).
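The "proportional to the square of DIV" trend reported above, and the mean/SD standardization used to overlay the two series (next paragraph), can be sketched as follows; all values are invented for illustration.

```python
import numpy as np

div = np.array([7, 10, 14, 18, 22, 26], dtype=float)
mir346 = np.array([1.00, 0.92, 0.75, 0.55, 0.33, 0.10])  # placeholder RQ values
app    = np.array([1.00, 0.95, 0.78, 0.58, 0.35, 0.12])  # placeholder densitometry

# Quadratic trend in DIV, as reported for both series.
coef = np.polyfit(div, mir346, 2)
print(coef)  # leading coefficient captures the DIV**2 dependence

# Standardize each series (subtract mean, divide by SD) to overlay the trends.
z = lambda x: (x - x.mean()) / x.std()
print(np.corrcoef(z(mir346), z(app))[0, 1])  # near 1 if trends nearly identical
```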
When both miR-346 and APP levels were standardized by subtracting overall means and dividing by standard deviations, the resulting trends were nearly identical (Fig. 4c). We also surveyed miR-346 levels across several cell lines (Fig. 4d). We do not report formal pairwise comparisons, as N = 2 for each cell line, although it appears that HeLa had specifically lower levels of miR-346 than the other cell lines tested, while human neuroblastoma (SK-N-SH) cells exhibited increased levels of miR-346 in a differentiation-specific manner.

Fig. 6 Analysis of miR-346 and Aβ levels in human brain specimens. RT-qPCR analysis of expression levels for miR-346 in brain specimens from AD and control patients in both cohorts. "AD (No Rx)" represents a subgroup of AD patients in cohort 1 that had no history of treatment with cholinesterase inhibitors or memantine. a, c, e Expression levels across Braak stages were determined using the modified ΔCq relative quantification method as implemented in qBase PLUS software and were normalized to the geometric mean of four endogenous controls: RNU6B, RNU48, RNU49, and miR-16. b, d, f Expression levels across Braak stages were quantified in absolute terms as miRNA copy counts per 15 pg of total RNA. Copy counts were calculated from standard curves prepared from serial dilutions of miRNA oligonucleotide standards with known concentrations (*p < 0.05). g Aβ40 levels in brains of cohort 1 patients. h Aβ40 levels in brains of cohort 2 patients.

A significant (p ≤ 0.05) effect was found for DFO dose vs. APP and vs. CTG. Notably, DFO reduced APP levels dose-dependently and increased CTG signal (Fig. 5a-c).
Iron deficiency is necessary for miR-346 effect on APP levels in human primary neuronal enriched cultures
We transfected human primary cultures with miR-346 (Fig. 5d-g); treated them with 150 µM DFO (Fig. 5h-k); or combined miR-346 transfection and DFO treatment (Fig. 5l-o). We quantified APP levels (adjusted by α-tubulin) by western blot followed by densitometry. In contrast to the HeLa culture results, transfection with miR-346 in isolation did not alter APP levels. DFO treatment alone significantly (p ≤ 0.05) reduced adjusted APP. However, when transfected under iron-deficient conditions (DFO chelation), treatment with miR-346 not only reversed chelation effects but increased APP levels beyond untreated culture levels. However, no treatment with DFO or miR-346, alone or combined, significantly altered Aβ40 levels in CM samples. We propose, therefore, that under physiological conditions, miR-346 activity on the APP 5′-UTR depends upon iron deficiency [38].
miR-346 levels are reduced in AD, particularly in later Braak stages, whereas Aβ increases
We analyzed two different cohorts of human brain tissue specimens. Both cohorts included individuals with neuropathological AD and age-matched non-AD controls. We performed RT-qPCR analysis of miR-346 levels in brain specimens from AD and control patients in both cohorts (Fig. 6a-f) and measured Aβ peptides in cohort 1 (Fig. 6g, h). "AD (No Rx)" represents a subgroup of patients from the AD group that had no history of treatment with cholinesterase inhibitors or memantine. We normalized "relative" expression levels to the geometric mean of four endogenous controls: RNU6B, RNU48, RNU49, and miR-16. We calculated "copies/15 pg total RNA" from standard curves prepared from serial dilutions of miRNA oligonucleotide standards with known concentrations. We normalized levels of Aβ40 and Aβ42 to the means of control samples. In cohort 1 (Fig. 6a, b), both all AD and AD without medications had significantly (p ≤ 0.05) lower levels of miR-346 than controls. We found no significant differences when analyzing each Braak stage group in cohort 2 (Fig. 6c-f) separately. When we combined Braak stages as "Control, I/II" vs. III through VI, we found that the reduction in miR-346 according to Braak stage was significant. We likewise found that relative levels of both Aβ peptides were significantly higher in "No Rx" AD samples than controls. Aβ42 levels were also significantly higher than controls for all AD.
Although Aβ40 levels were elevated for AD overall, including drug-treated patient samples, the difference was not significant (Fig. 6g, h).
Discussion
APP plays a central role in AD etiology and progression. In this report, we address novel features of regulation by miRNA of APP mRNA translation. Among its many functions, APP has metal-associated redox activity [12] and stabilizes the plasma membrane for Fe transport (with or without ferroxidase activity) [11,39]. Thus, preventing disruption of Fe metabolism is a worthwhile target of AD research [40]. Several miRNAs, including miR-101, miR-153, and miR-298, regulate APP mRNA translation [21,22,41]. To discover further miRNA regulators of APP, we scanned the APP 5′- and 3′-UTRs with the miRanda utility in the RegRNA online database [26] and found a putative target site for miR-346 in the 5′-UTR. When tested, miR-346 strongly upregulated expression of an APP 5′-UTR reporter clone and endogenous APP protein in HeLa cells. Site mutagenesis and TP transfections demonstrated that these effects were mediated by specific interaction with the predicted APP 5′-UTR target site (Figs. 1-2). We also observed an upregulatory effect in human primary cultures, but only after iron chelation. Therefore, miR-346 has "non-canonical" (stimulative/disinhibitive) regulatory effects on APP expression via a "non-canonical" target site in the APP 5′-UTR that likewise contains an IRE. Inhibiting the interaction we observed may be a viable therapeutic strategy for potentially regulating APP expression and Aβ production in the AD brain.
Early exploration into the upregulation of mRNA translation by miRNAs concentrated on conventional 3′-UTR targets. In those cases, it was determined that miRNAs direct AGO and fragile-X mental retardation syndrome-related protein (FXR1) toward AU-rich elements (AREs) of the 3′-UTR, and many miRNA target sites (75%) are within AREs. Furthermore, this effect can switch from stimulation to repression depending on whether cells are quiescent or dividing [42]. FXR1 is a homolog of FMRP, which is known to repress translation of APP [43]. However, more recent work has determined that stimulation of translation by miRNA is not limited to targeting the 3′-UTR, nor is it limited to interactions with AREs [44]. Instead, multiple pathways can operate that involve either the 3′- or 5′-UTR and several potential protein partners, although AGO2 is usually (but not always) present [44]. It bears noting that the miR-346 site in the APP 5′-UTR is not within an ARE. A particularly interesting contrast is that our own work demonstrated that miR-346 stimulation of APP at least partially required AGO2, while miR-346 stimulation of RIP140 was enhanced by knockdown of AGO2 [45]. Thus, the specific action of a particular miRNA on a specific mRNA may depend closely upon local metabolic conditions.
While most known miRNA regulatory interactions are limited to the mRNA 3′-UTR [46], several examples exist of effective miRNA targeting in the 5′-UTR or CDS [45,47], and some even target both 5′- and 3′-UTRs in a single mRNA [48]. The vast majority of validated miRNA:mRNA target interactions inhibit target translation. Nevertheless, several examples exist of apparent stimulation by miRNA of target expression [44]. Most specifically, our results for APP are similar to miR-346 regulation of receptor-interacting protein 140 (RIP140) [45]. That is, miR-346 stimulates translation through the RIP140 5′-UTR.
Specific differences have been reported between miRNA activities in quiescent (G0) vs. actively dividing cells [42]. Given that we have evidence that our human primary culture contains a significant portion of mature neurons [31], we believe that our results adequately reflect one such difference, particularly since the effects of miR-346 in HeLa (immortalized, actively dividing cells) were not identical to those we observed in human primary cultures.
A potentially pertinent pathway for AD is miR-346 regulation of the unfolded protein response (UPR) [29]. This pathway activates under accumulation of unfolded proteins in the ER. Activation of UPR results in the inhibition of global protein production and targeted induction of gene expression for products that increase ER protein folding capacity [49]. Expression of miR-346 increases UPR through UPR-linked transcription factor XBP1 [29]. This leads to decreased expression of TAP1 through interaction between miR-346 and the TAP1 3′-UTR. TAP1 is an ATP-binding cassette transporter that translocates antigens derived from proteasomal processing into the ER lumen for loading onto MHC antigen receptors. Notably, miR-346 also decreases MHC class I gene expression via indirect interactions, further implicating miR-346 as an immunomodulatory miRNA.
To bring this into context with our present work, neurons in the AD brain are often invested with NFT consisting of aggregated hyperphosphorylated tau protein that might be expected to induce ER stress. UPR is activated in pretangle neurons [50]. Given that UPR is active in the AD brain and that APP expression is elevated following UPR activation [51], it is reasonable to speculate that miR-346 expression may also be induced in certain cells of the AD brain and drive APP expression in pretangle neurons. Even broader associations between neurodegeneration and UPR likely exist [52]. Multiple neurodegenerative diseases, including AD, Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis have association with activated UPR [53]. In AD, a specific UPR-related mechanism may be autophagy [54]. Furthermore, UPR may contribute to AD amyloidosis. Specifically, X-box binding protein 1 (XBP1) is a transcription factor that regulates ADAM10 [55]. ADAM10 is the primary α-secretase, which drives APP processing away from amyloidogenic Aβ production. XBP1 is differentially spliced during UPR [56]. This specific splicing difference likewise alters XBP1 activity on ADAM10. In brains from AD, normal XBP1 and ADAM10 mRNA levels were below those of non-AD controls [55]. Of particular pertinence, Fe depletion reduces the ability of cells to mount UPR against ER stress, and this is relieved by Fe supplementation [57].
Mechanisms involved in post-transcriptional miRNA-mediated inhibitory regulation are fairly universal and well described [58]: AGO2, as a member of RISC, recruits GW182 to the target transcript, promoting further protein interactions that lead to translational inhibition and transcript deadenylation and degradation [58,59]. To explore the mechanism underlying the upregulation effect of miR-346 on APP mRNA translation, we tested involvement of proteins implicated in canonical miRNA biogenesis (Dicer) and function (AGO2). Upregulation of APP by miR-346 was significantly reduced when expression of AGO2 was knocked down. AGO2 was originally discovered as a component of a molecular complex involved in translation initiation [60]. This function has since gone largely unexplored. Given the location of the miR-346 target site in the APP 5′-UTR, near the site of ribosome assembly, one possible explanation for the requirement of AGO2 is that it may mediate the upregulation effect via its function in translation initiation. Another possibility is that AGO2 may be required to sterically inhibit interactions between inhibiting trans-factors and the APP 5′-UTR IRE.
The miR-346 target site in the APP 5′-UTR directly overlaps with a known IRE and an IL-1 acute box [13,14]. The IRE located within the APP 5′-UTR binds IRP1 but not IRP2 [13,14,19]. It is possible that miR-346 activity may in some way interact with IRP1 and/or IL-1 activity through their co-localized target sites on the APP 5′-UTR, particularly IRP1. In this regard, IRP1 inhibits APP translation when bound to the 5′-UTR IRE. When iron levels are increased, IRP1 binds free iron and dissociates from the APP mRNA, allowing translation to proceed uninhibited. When iron levels are decreased (such as with iron chelation), free iron dissociates from IRP1, allowing IRP1 to bind to the APP 5′-UTR IRE and inhibit APP translation. IL-1 participates in Fe homeostasis indirectly, through inflammatory cascades. In particular, IL-1 increases recruitment of IRP1 by transiently increasing the labile Fe pool [61]. Further, IL-1 stimulates translation of APP mRNA through its 5′-UTR [62]. In human primary cultures, miR-346 activity was absent unless Fe levels were reduced by chelation with DFO. Although it is tempting to speculate that the potent effect of miR-346 on APP levels in HeLa cells could be attributed to relative Fe deficiency, we have no direct evidence of this, as we did not measure free Fe levels in media. In fact, media supplementation with FBS would be expected to provide Fe both in free form and bound to transferrin. Further, it is not clear that comparing media iron levels would reflect differences in intracellular free iron levels. Therefore, the exact mechanism whereby miR-346 regulates APP levels in HeLa cells requires further investigation.
Nevertheless, our work allows us to build an extended model of miR-346's role in APP's promotion of export of Fe from the cytosol to the extracellular space. Aside from its role in regulating APP expression, Fe, along with Cu and Zn, binds to Aβ, particularly in plaque cores [63], and slows the normal ordered progression of Aβ to higher-ordered aggregates, such as fibrils. This Fe interference promotes Aβ toxicity in neuronal cells [64]. Fe bound to Aβ also accelerates ROS formation [65]. Thus, therapies that modulate Fe homeostasis in the AD brain have been proposed as a means of reducing Aβ-associated Fe toxicity and reducing APP translation and Aβ production [20,66,67]. This may be a chicken-or-egg question: does Fe accumulation, exacerbated by perturbation of miR-346-dependent regulation of APP, lead to AD, or does it merely exacerbate symptoms after the disease already exists?
In addition to Fe, several other metals play some role in the production of APP and Aβ. These include lead (Pb) [39], copper (Cu) [68][69][70], and manganese (Mn) [71,72,87]. Their contributions are complex and often not overlapping. Cu, in particular, appears to regulate transcription and translation [69]. However, it may be a complex relationship: although Cu supplementation stimulated APP 5′-UTR activity [70], net effects may vary by tissue [68][69][70]. It is noteworthy that Cu binds IRP1 and reduces its ability to bind mRNA, although with less efficiency than Fe [73]. Pb enhances IRP1 inhibition of APP translation by enhancing the IRP1:APP 5′-UTR interaction [39]. Shorter-term exposure to Pb also increases IRP1 levels before resulting in lower levels with more extended exposure; this operates through Pb disruption of extracellular signal-regulated kinase 1/2 [74]. A neurotoxic effect has also recently been explicitly measured for Mn, via suppression of APP 5′-UTR activity [87].

Fig. 7 a The APP 5′-UTR contains an IRE [13,15,19] that includes both the IRP1 site and the miR-346 recognition sequence. b During Fe influx, IRP1 is recruited away from the APP 5′-UTR, no longer inhibiting APP translation. Although this may "free" the APP 5′-UTR to bind with miR-346/RISC (RISC represented by Ago2), the apparent stimulative activity is parsimoniously explained by disinhibition vs. IRP1: when IRP1 is not inhibiting, binding by RISC offers no additional stimulative effect. Cu also has some activity in recruiting IRP1 away from the APP 5′-UTR; Cu has a lower affinity for IRP1 but is still able to bind and partially recruit it away. c If Fe levels are low, IRP1 is not recruited away and binds the APP 5′-UTR, inhibiting APP translation. In addition, Mn may bind to IRP1 and prevent its recruitment by Fe or otherwise interfere in Fe recruitment. Pb activates ERK1/2, which has a complex cascade of consequences, some of which include complex disruption of IRP1 levels. d Binding of the miR-346/RISC complex would then disinhibit by displacing IRP1. Alternation of IRP1 inhibition and miR-346/RISC disinhibition would facilitate APP homeostasis.

In our studies, miR-346 upregulated Aβ in U373 human astrocytoma cells but did not have a significant effect on Aβ levels in primary human cell cultures. In AD brain samples, miR-346 was significantly downregulated in late-Braak stages. We had previously reported that both miR-101 and miR-153 were also downregulated in late-Braak AD, accompanied by significant elevation of Aβ and APP [21,22]. If miR-346 is to upregulate APP, why would it be deficient in AD brain? We admit that late-stage reduction in miRNA species, without early-stage or prodromal evidence, could reflect a general breakdown in miRNA regulation that cuts across specific functions, or be an epiphenomenal change reflecting broad changes in the relative numbers of different cell types as neurodegeneration progresses. If dysregulation of APP's contribution to Fe homeostasis plays a role in AD, that role would be in earlier stages of the disorder, such as mild cognitive impairment (MCI) and Braak stage I, and may not be reflected in the "accumulative phase" (Braak II+).
We propose a "first-order" model that incorporates Fe and miR-346, along with "supporting roles" played by Cu and Mn (Fig. 7). Although Zn can bind IRP1 [88] and blocks APP ferroxidase activity, it does not alter APP levels [12]. Under our model, "healthy FeAR" is homeostatic. The IRE and miR-346 sites partially overlap in the APP 5′-UTR (Fig. 7a). Fe influx recruits IRP1 away from the APP 5′-UTR, which may "free" the site, and is equivalent to simple disinhibition (Fig. 7b). When Fe is reduced, IRP1 becomes available and binds its site, inhibiting APP translation (Fig. 7c). Binding of miR-346/RISC would displace IRP1, disinhibiting APP translation in the same fashion that Fe recruitment of IRP1 would (Fig. 7d). This process would alternate back and forth between inhibition and disinhibition, permitting sufficient APP to be translated for its multiple functions [75][76][77][78]. Notably, this phenomenon would require the interaction of IRP1 with the 5′-UTR IRE and, therefore, would be expected to be blunted in a setting of "iron excess", thereby providing a plausible hypothesis for why the stimulatory effect was observed in human primary neuronal enriched cultures only after iron chelation. Further experimental work will be necessary to better integrate Cu, Mn, Pb, and IL-1 into the Fe-miR-346 activity network. Considering clinical complications, unmodified iron chelation therapy in AD is likely to be a poor treatment strategy. Metal-complexing agents exist with more targeted and less systemic effects on metal ion binding and redistribution, the so-called metal-protein attenuating compounds [79,80]. "XH1" binds Aβ and chelates metals; it reduces APP protein expression in neuronal cells [81]. However, idiopathic anemia is a common comorbidity with AD [82], and low hemoglobin is associated with greater risk of death among AD patients [83]. On the other hand, in a Japanese study confined solely to dementia patients, subjects showed a direct association between greater levels of circulating hemoglobin and brain accumulation of Aβ [84].
In the context of translational implications, we observed that miR-346 levels are reduced in later stages of AD, but we cannot necessarily infer from this that prodromal AD or earlier stages of the disease are due to deficiency of miR-346. Several cases of reduced miRNAs have been found in association with AD staging; in particular, we have reported that miR-101 and miR-153, both of which downregulate APP expression, are likewise reduced in AD [21,22]. This may reflect underlying etiology, or it may reflect general neurodegeneration and glial invasion. If these miRNAs are critical to normal brain function, and they are likewise specifically expressed in neuronal cells, their loss in brain samples may just as well reflect a change in the proportion of neuronal vs. non-neuronal cells in diseased brains. Finally, we wish to note possible roles for Fe deficiency in another pervasive brain disorder, schizophrenia. Hippocampal Fe deficiency, both with and without systemic anemia, resulted in impaired prepulse inhibition (PPI) of the acoustic startle reflex. Impaired PPI is a reliable measure of the schizophrenia endophenotype of defective sensorimotor gating [85]. While no APP-Fe-schizophrenia axis has been found, the fact that APP activity includes significant regulation of Fe homeostasis suggests that the miR-346/IRP1/Fe pathway may function in other neurological disorders.
From an AD etiology standpoint, Fe influx could be part of a cascade of cellular stresses (e.g., redox stress and inflammation) that would initially upregulate miR-346 and, thereby, APP. In healthy conditions, this would eventually result in negative feedback that reduces miR-346 and APP to pre-stress levels. Under pathogenic conditions, negative feedback to miR-346 might be insufficient to halt an APP pathogenic cascade. Other mechanisms would drive excess APP and Aβ, but miR-346 would have "fallen by the wayside", downregulated as a result of late AD neurobiology. Only further experimental investigation could accurately define the relationships. For example, our future work would consider stimulation of the FeAR nexus by IL-1 [62], and would seek direct evidence of IRP1-miR-346 competition or of how metallic ions other than Fe, such as Cu or Mn, could alter the system. In addition, our future work would use the full UTR sequence, which could add another layer of complexity. It is noteworthy that the 5′-UTR of APP is also transcriptionally active, as we have previously shown [37]. This includes a "CAGA box" that takes part in transforming growth factor β activity in regulating APP transcription [86]. It might be overly simplistic to presume that transcriptional regulation directly interacts with translational regulation merely because both stages happen to have overlapping target regions on the DNA and corresponding RNA sequences. Nevertheless, the presence of such an overlap could open up opportunities for drug modulation that could target both stages through one site. | 2018-12-02T16:19:45.676Z | 2018-09-17T00:00:00.000 | {
"year": 2018,
"sha1": "3c42ee837cb8f268c95f532ee6dd4a1d03a0aec8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41380-018-0266-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed77ce5235d41e2b3ec542a26943af9fed4674a2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
18452101 | pes2o/s2orc | v3-fos-license | $B_K$ Using Staggered Fermions: An Update
Improved results for $B_K$ are discussed. Scaling corrections are argued to be of $O(a^2)$, leading to a reduction in the systematic error. For a kaon composed of degenerate quarks, the quenched result is ${\widehat{B}_K} = 0.825 \pm 0.027 \pm 0.023$.
• Perturbative corrections were not included, because of the uncertainty in the choice of g.
• Contamination from excited states.
• Quenching, although there was preliminary evidence that this was not the dominant source of error [2,3].
• Extrapolation from degenerate quarks ($m_1 = m_2 \approx m_s/2$) to the physical kaon ($m_1 = m_s$, $m_2 \approx 0$). From the quenched data, we estimated that this extrapolation increased $B_K$ by 3%, an estimate included in our results for $B_K$. The error in this estimate was, however, large.
A number of these uncertainties have now been substantially reduced, both due to our work and that of the Kyoto-Tsukuba group.
SCALING VIOLATIONS ARE $O(a^2)$
According to the standard lore, the staggered fermion action is good to $O(a^2)$ (up to logarithms, which will always be kept implicit). I sketched the perturbative argument for this in Ref. [1], and presented some supporting numerical evidence from the spectrum. By contrast, for matrix elements of external operators the corrections are expected to be of $O(a)$. For example, the operators appearing in both numerator and denominator of $B_K$ (Eq. 1) have $O(a)$ terms in their tree-level perturbative matrix elements [4]. These $O(a)$ terms turn out to have the wrong flavor to contribute to the matrix element in $B_K$, but in 1991 it seemed possible that terms of $O(g^{2n} a)$, from $n$-loop diagrams, might contribute.
In the following I sketch an argument that they do not [5]: the scaling corrections to the matrix elements in $B_K$ are of $O(a^2)$ to all orders in perturbation theory. The argument is an application of Symanzik's perturbative improvement program [6]. It is useful to begin by demonstrating that the staggered action is already "improved", i.e. has no corrections of $O(a)$. In the notation of, e.g., Refs. [4,7], the action takes the standard staggered form and is invariant under translations, rotations, spatial inversions and charge conjugation. When $m \to 0$, it is also invariant under axial transformations. In order to improve the action, one adds operators of $d = 5$, with coefficients adjusted order by order in perturbation theory so as to cancel $O(a)$ terms in correlation functions. (In fact, except in scalar theories, only on-shell quantities are improved.) That this improves all on-shell quantities at once is the non-trivial assumption, shown by Symanzik for scalar theories. The $d = 5$ operators must have the same symmetries as those in the original action. Ignoring axial symmetry, the allowed operators are those listed in Ref. [5]. However, none of these operators is consistent with the axial symmetry: treating $m$ as a spurion field, one can show that the bilinear must either contain an even number of links and be multiplied by an odd function of $m$, or contain an odd number of links and be multiplied by an even function of $m$. Since there are no operators available to improve the action, it must already be good to $O(a^2)$. Now I proceed to $B_K$. I will discuss the matrix element of the four-fermion operator $O_B$ in the numerator of Eq. 1; a similar argument works for the simpler matrix elements in the denominator. To improve a matrix element one must not only improve the action, but also improve the operator itself. Since the staggered action is already improved, one needs only consider the operator. I assume, following the second paper in Ref. [6], that improvement can be accomplished for all on-shell matrix elements by adding $d = 6$ and $7$ operators to the original operator, with coefficients determined order by order in perturbation theory. (Mixing with lower dimension operators is forbidden by the flavor structure.) I assume further that these operators must have the same symmetries as the original operators.
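Because the displayed formulas were lost in extraction, the following LaTeX block gives a hedged reconstruction of the standard one-flavor staggered action and the axial transformation referred to above; the precise normalization conventions of Refs. [4,7] are an assumption here, not a quotation.

% Standard staggered action (reconstruction; normalization conventions assumed):
\begin{equation}
S = a^4 \sum_x \Big[ \frac{1}{2a} \sum_\mu \eta_\mu(x)\,\bar\chi(x)
    \big( U_\mu(x)\,\chi(x+\hat\mu) - U_\mu^\dagger(x-\hat\mu)\,\chi(x-\hat\mu) \big)
    + m\,\bar\chi(x)\,\chi(x) \Big],
\qquad \eta_\mu(x) = (-1)^{x_1+\cdots+x_{\mu-1}} .
\end{equation}
% Axial symmetry of the m = 0 action (the mass term breaks it):
\begin{equation}
\chi(x) \to e^{\,i\theta\,\epsilon(x)}\,\chi(x), \qquad
\bar\chi(x) \to e^{\,i\theta\,\epsilon(x)}\,\bar\chi(x), \qquad
\epsilon(x) = (-1)^{x_1+x_2+x_3+x_4} .
\end{equation}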
Let $O_6^{\rm cont}$ be a vector of continuum $d = 6$ operators. At tree-level these match onto a corresponding vector of lattice operators $O_6^{\rm lat}$. These lattice operators will mix with all others of $d = 6$ having the same symmetries, so one must extend the vector $O_6^{\rm lat}$ to include all the possibilities. I assume that the continuum vector is similarly extended. Let $O_7^{\rm lat}$ be a vector containing all operators with $d = 7$ having, at tree-level, the same symmetries as the $O_6^{\rm lat}$. The assumption of all-orders improvement then takes the form of matrix matching relations, in which $d$ is a square matrix equal to the identity at tree-level, and a coefficient matrix $c = O(g^2)$ represents the fact that $d = 7$ operators mix back into $d = 6$ operators.
Reorganizing these equations, one finds a relation showing that the $O(a)$ terms in matrix elements of $O_6^{\rm lat}$ can be obtained from the continuum matrix elements of $O_7^{\rm cont}$. The perturbative matrix on the r.h.s. is of the form $1 + O(g^2)$; multiplying by its inverse, calculated to, say, one loop, removes $O(g^2)$ corrections to the matching between continuum and lattice matrix elements.
I now apply this equation to $O_B$. One must transcribe the continuum operator onto the lattice, and then write down the (long) list of operators $O_6^{\rm lat}$ and $O_7^{\rm lat}$. Various choices of lattice operator, all agreeing at tree-level, have been used. We use Landau-gauge operators without gauge links on $2^4$ hypercubes, and smeared Landau-gauge operators on $4^4$ hypercubes [4] (these have $d = O(g^2)$). Gauge-invariant operators have been used in Ref. [8]. The argument is the same for all these choices, because they behave the same way under the relevant symmetries: the hypercubic group, and the separate axial rotations of the four fermion fields.
The crucial point is this. With staggered fermions the continuum theory has four degenerate versions of each quark, and a corresponding flavor symmetry. The continuum operators of interest have flavor $\xi_5 \times \xi_5$. It turns out, however, that none of the $d = 7$ operators has this flavor [5]. Thus, if we take the matrix elements between a $K$ and a $\bar K$, both of flavor $\xi_5$, the contributions of $O_7^{\rm cont}$ vanish identically, and there are no $O(a)$ corrections to these particular matrix elements: they are automatically improved.
There are other operators in $O_6^{\rm cont}$ having flavor $\xi_5 \times \xi_5$, but these are multiplied by coefficients of $O(g^2)$ ($O(g^4)$ if one uses one-loop matching).
A concern with this argument is that the mixing coefficients might contain non-perturbative parts of $O(a)$. This will not matter, however, as long as the symmetry properties are retained.
OTHER IMPROVEMENTS
We use the same set of lattices as in Ref. [1], but we now have results with two sets of operators: Landau-gauge unsmeared and smeared. Both have $O(a^2)$ scaling corrections, but they should agree in the continuum limit. This provides a consistency check.
We have now included one-loop perturbative corrections. Patel and I have calculated these for both the original and smeared operators [4,7], the former results being in agreement with those of Ref. [9]. The results are of the form given in those references. To extrapolate to the continuum, we choose a fixed scale: $\mu = 2$ GeV. Lepage and Mackenzie have shown that perturbative corrections are reliable if one uses the correct expansion parameter [10]. For the coefficient of $\delta O$, we use $g^2$ determined from ${\rm Tr}(U)$ in Landau gauge, which yields $g^2_U = 1.82$, $1.66$, $1.54$ for $\beta = 6$, $6.2$, $6.4$. For the coefficient of the logarithm, which represents the effect of loop momenta between $\pi/a$ and $\mu$, we use either $g^2_U$, or the value obtained by running from $g^2_U = 1.82$ at $\beta = 6$ to $\mu = 2$ GeV using the 2-loop $N_f = 0$ formula, assuming that the starting scale is $\pi/a$. For $1/a = 1.9$ GeV at $\beta = 6$, this gives $g^2_U(2\,{\rm GeV}) = 2.72$. In the end we take the average of these two methods, and use half the difference as an estimate of the systematic error. We are in the process of reducing this uncertainty using the automatic scale fixing procedure of Ref. [10]. The perturbative corrections are small for unsmeared operators, increasing the final result by $\sim 3\%$. The corrections are larger (up to 10%) for smeared operators.
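As a reading aid for the two-loop running just described, the following is a minimal numerical sketch in Python. The inputs $g^2_U = 1.82$ and $1/a = 1.9$ GeV are taken from the text; the integration itself is a generic illustration, not the authors' actual procedure.

import math
from scipy.integrate import solve_ivp

b0 = 11.0 / (16 * math.pi ** 2)        # one-loop beta-function coefficient, N_f = 0
b1 = 102.0 / (16 * math.pi ** 2) ** 2  # two-loop coefficient, N_f = 0

def rge(log_mu, y):
    # d(g^2)/d(ln mu) = -2 * (b0 * g^4 + b1 * g^6)
    g2 = y[0]
    return [-2.0 * (b0 * g2 ** 2 + b1 * g2 ** 3)]

a_inv = 1.9                            # GeV, at beta = 6
mu_start = math.pi * a_inv             # assumed starting scale pi/a
sol = solve_ivp(rge, [math.log(mu_start), math.log(2.0)], [1.82],
                rtol=1e-10, atol=1e-12)
print(f"g^2(2 GeV) = {sol.y[0][-1]:.2f}")  # comes out near the quoted 2.72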
Although $B_K$ is dimensionless, it has a weak dependence on the lattice spacing because of the anomalous dimension factor in Eq. 7, and through the value of the lattice kaon mass. We use updated values of $a$ determined from $m_\rho$: $1/a = 1.9$, $2.5$, $3.55$ GeV for $\beta = 6$, $6.2$, $6.4$. Repeating the analysis using $a$ determined from $f_\pi$ gives an estimate of the corresponding systematic error.
For $\beta = 6$ our bare lattice numbers are unchanged from those presented in Ref. [1], and have since been confirmed by Ref. [8]. At $\beta = 6.2$ and $6.4$ our calculation is hampered by the relative shortness of our lattices in the time direction, which leads to contamination from more massive states, particularly $\rho$ mesons. We use two methods of calculation, each with different sources of contamination [11]. We now understand how to subtract these contaminations using the data itself [12]. To be conservative, we use the size of the subtractions as an estimate of the systematic error.
An example of the extrapolation to a = 0 is shown in Fig. 1. The data is not good enough to distinguish between linear and quadratic dependence on a, although it favors the latter. Thus we rely on the theoretical argument given above and assume a quadratic dependence. An important consistency check is that smeared and unsmeared operators agree in the continuum limit. It turns out that they also have similar dependence on a. This is only true, however, after inclusion of perturbative corrections. It has been found in Ref. [8] that gauge invariant operators also give consistent results, including the a dependence, once perturbative corrections are included. Thus the relatively large scaling violations do not appear to be an artifact of using gauge non-invariant operators. For our central value we use the average of the extrapolated results from unsmeared and smeared operators, and take half the difference as an estimate of a systematic error. For the statistical error we take the larger of the two errors.
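A minimal sketch of the continuum extrapolation described above: a fit linear in $a^2$ (i.e., quadratic in $a$) through the three lattice spacings. The $B_K$ values below are placeholders for illustration, not the data of Fig. 1.

import numpy as np

a_inv = np.array([1.9, 2.5, 3.55])   # 1/a in GeV at beta = 6, 6.2, 6.4
a2 = (1.0 / a_inv) ** 2              # a^2 in GeV^-2
bk = np.array([0.70, 0.66, 0.64])    # placeholder B_K(a) values
slope, b_cont = np.polyfit(a2, bk, 1)  # fit B_K(a) = B_K(0) + c * a^2
print(f"extrapolated B_K(a=0) = {b_cont:.3f}")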
NUMERICAL RESULTS
The preliminary result from our analysis is, in the quenched approximation and for degenerate quarks, $B_K({\rm NDR}, 2\,{\rm GeV}) = 0.616 \pm 0.020({\rm stat}) \pm 0.014(g^2) \pm 0.009({\rm scale}) \pm 0.004({\rm operator}) \pm 0.002({\rm contamination}) = 0.616 \pm 0.020 \pm 0.017$, where, in the last line, we have combined all the systematic errors in quadrature. It is more conventional to quote a result for the scale-independent B-parameter, $\widehat{B}_K$. Using the continuum $\alpha_s$ evaluated at 2 GeV with $\Lambda$ as input, this gives $\widehat{B}_K = 0.825 \pm 0.027 \pm 0.023$. The major change from Ref. [1] is the use of quadratic extrapolation. Perturbative corrections also increase the result, by $\sim 3\%$, and a similar increase results from the use of a different (and better) method of matching to the continuum $B_K$. Errors due to quenching and to the use of degenerate quarks are not included in these results. There are reasons to think, however, that these errors are comparable to those quoted above. Unquenched calculations are now possible for quark masses $m_q \sim m_s/2$, on lattices with spacing $1/a \sim 2$ GeV, and find results for $B_K$ which agree within errors with quenched results [8,13]. This is surprising and encouraging. It must be tested at smaller lattice spacings to determine whether the full and quenched $a$ dependences are similar.
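The combination of the systematic errors in quadrature quoted above can be checked with a few lines:

import math
systematics = [0.014, 0.009, 0.004, 0.002]  # g^2, scale, operator, contamination
print(round(math.sqrt(sum(e ** 2 for e in systematics)), 3))  # prints 0.017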
In most quantities a more important issue would be the dependence on the light quark masses. For $B_K$, however, this dependence enters at non-leading order. If one uses the chiral logarithms to estimate the order of magnitude of the correction, one finds a 3% increase for non-degenerate quarks [14,12]. It is important to check this with unquenched simulations for $m_s \neq m_d$. Quenched data are not a good guide because of contamination from $\eta'$ loops [14].
If the result for $B_K$ withstands further scrutiny, it will have considerable phenomenological impact. | 2014-10-01T00:00:00.000Z | 1993-12-02T00:00:00.000 | {
"year": 1993,
"sha1": "7ab765bc5d51544151054a885da7462845fd5824",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/9312009",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4a8c84f13829165ab6de91aa81fbf518ec5e22b8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236261461 | pes2o/s2orc | v3-fos-license | High Risk Sexual Fantasies and Sexual Offending: An Overview of Fundamentals and Interventions
Although deviant sexual fantasizing has been found to be an etiological factor for sexual offending, not all deviant sexual fantasies increase risk equally. The aim of the present overview is to provide readers with an introduction to key terminology, a primer on central clinical theories, and a summary of the research literature on “high risk” sexual fantasies over the past 50 years. First, the important difference between “sexual fantasy” and “sexual fantasizing” is described. Second, the link between sexual fantasizing and sexual offending is discussed, with a focus on principal moderators such as physiological reaction, personality profile, and offense-supportive beliefs. Third, the different methods used to assess sexual fantasies are discussed. Fourth, the principles and techniques behind four evidence-based approaches to treating “high risk” sexual fantasies are discussed: (1) the behavioral approach, (2) the cognitive approach, (3) the imagination approach, and (4) the mindfulness-based approach. Finally, a call is issued for practice-based quantitative and qualitative research to further explore this clinical phenomenon. The findings of such investigations would advance the field’s understanding of assessment, management, and monitoring best practices for this important forensic population.
The prevention of sexually motivated violence is a topic of considerable community interest internationally, as evidenced by 29 countries throughout North America, South America, Europe, Asia, Africa, and Australia having passed legislation mandating registration, community notification, and tracking for individuals convicted of sexual offenses at different risk levels (U.S. Department of Justice, 2016). In countries throughout North America, Europe, and Australia, such laws also allow for the preventative detention of individuals who are judged to be at high risk of sexual recidivism at the end of their sentence (McSherry, 2014). Although a number of structured assessment and treatment decision-making tools have been developed to establish management plans for sexual recidivism risk and to aid in clinical as well as legal decision-making, such protocols do not explicitly include a potentially important and modifiable factor: "high risk" sexual fantasies (Jackson & Hess, 2007). Hence, the aim of the present overview is to provide readers with an introduction to key terminology, a primer on key clinical theories, and a summary of the research literature on such fantasies over the past 50 years.
Sexual Fantasy vs. Sexual Fantasizing
There is an important difference between "sexual fantasy" and "sexual fantasizing". According to Bartels, Beech, and Harkins (2021), a sexual fantasy is a knowledge structure in memory, containing information about what targets or behaviors an individual finds sexually appealing (e.g., "My sexual fantasy is having sex with a woman in heeled shoes"). Sexual fantasizing, on the other hand, is a cognitive process. According to the Dual-Process Model of Sexual Thinking (DPM-ST; Bartels & Beech, 2016; Bartels et al., 2021), an external cue (e.g., seeing a pair of heeled shoes) or an internal cue (e.g., a memory of a woman in heeled shoes) activates associated sex-related information stored in one's memory, giving rise to a spontaneous sexual thought. This thought will be fleeting unless it grabs the individual's attention due to eliciting a strong affective response (e.g., sexual arousal). In this instance, the sexual thought will be automatically or deliberately elaborated upon using mental imagery, typically in the form of a moving story or "script". This elaborative process, which can reflect a relived experience, a planned future experience, or a purely wished-for experience, is what is referred to as "sexual fantasizing". Based on this conceptual distinction, someone may have a sexual fantasy that they never fantasize about and, conversely, may fantasize about a sexual act they do not regard as a fantasy. The importance of distinguishing unintentional fleeting sexual thoughts from active sexual fantasizing will be returned to later in the discussion on assessment.
When sexual mental imagery concerns an act that is deemed culturally unacceptable (e.g., pedophilia, voyeurism, frottage, exhibitionism, fetishism, biastophilia, or sadism), it is said to be "deviant". The use of a deviant sexual fantasy is typically aligned with an existing deviant sexual interest (Noorishad, Levaque, Byers, & Shaughnessy, 2019), and deviant sexual fantasizing has been found to be an etiological factor for sexual offending (Seto, 2019) as well as a key risk factor for sexual recidivism (Mann, Hanson, & Thornton, 2010). However, not all deviant sexual fantasies are "high risk", and not all "high risk" fantasies lead to sexual offenses. In fact, sexual fantasies involving deviant behaviors with adults are common in the general population (Bartels & Gannon, 2011; Joyal, Cossette, & Lapierre, 2015), and the prevalence of child-related sexual fantasies ranges from 1.8% to 13% in men (Dombert et al., 2016; Joyal et al., 2015) and 0.4% to 7% in women (Bartova et al., 2021; Tozdan et al., 2020). The question raised, then, is what distinguishes "high risk" sexual fantasies that might lead to actual offenses from those that are a part of "normal" human experience?
The Link Between Sexual Fantasizing and Sexual Offending
Research indicates that fantasizing about a deviant sexual act is often linked with engagement in the act itself by both non-offending members of the community (Klein, Schmidt, Turner, & Briken, 2015) as well as individuals previously convicted of a sexual offense (Turner-Moore & Waterman, 2017). However, recent studies have found that this relationship is not straightforward, with fantasy-behavior correlations observed in community samples being weaker for deviant content than for non-deviant content (Noorishad et al., 2019). Also, in a sample of only minor-attracted men, sexual fantasizing about children was not correlated with sexual offending behavior (Bailey, Bernhard, & Hsu, 2016). Such findings suggest that the relationship between sexual fantasizing and sexual offending is not direct but rather moderated by facilitatory factors. A review of the literature suggests three principal moderators: (1) physiological reaction, (2) personality profile, and (3) offense-supportive beliefs.
Physiological Reaction
According to Smid and Wever's (2019) Incentive Motivational Model (IMM) of sexual deviance, a competent stimulus (i.e., a stimulus with incentive value) is one that elicits a strong emotional reaction (i.e., sexual arousal). This reaction signals sexual reward and, thus, gives rise to a feeling of sexual desire; that is, an appeal towards the sexually attractive stimulus. Given its incentive value, the stimulus will be sought out for the purpose of sexual gratification (i.e., orgasm), which can include sexual fantasizing. Indeed, sexual fantasizing is often used to induce and increase sexual arousal (Gee, Ward, & Eccleston, 2003). Thus, sexual fantasizing can contribute to one's motivation to sexually offend, and the more one fantasizes about a sexual stimulus, the higher its incentive value will become, further increasing the motivation to act. Additional determinants of the strength of a physiological reaction to sexual fantasies include: (1) the imaginal ability of the individual, as men who can form vivid mental imagery report greater sexual arousal while sexually fantasizing (Smith & Over, 1987); and (2) the emotional valuation of the fantasy content, whereby greater sexual arousal is elicited by positively rather than negatively appraised sexual thoughts (Little & Byers, 2000). Hence, if an individual experiences sexual fantasies about rape, for example, as vivid, highly arousing, and positive, then they are at higher risk of engaging in such behavior. As a corollary, persons convicted of sexual offenses typically fantasize about sexual content that matches their offense, with men who have raped a woman fantasizing about rape, men who have sexually abused children fantasizing about child sexual abuse, and so forth (Gee, Devilly, & Ward, 2004).
Personality Profile
Certain personality traits have been found to be more common in individuals who sexually fantasize about deviant content. Research on both non-offending members of the community (Baughman, Jonason, Veselka, & Vernon, 2014) as well as persons who have been convicted of a sexual offense (Skovran, Huss, & Scalora, 2010) has established psychopathy as one of these traits. Since low inhibition is a core aspect of psychopathy, it may be that psychopathic tendencies affect the (dis)inhibition of sexual desire and motivational goals that arise from sexual fantasizing, thus increasing the risk of sexual offending. Towards this end, psychopathy has been found to moderate the relationship between sexual fantasizing about deviant content and engaging in such behavior in real life (Visser et al., 2015). A second personality trait that may play a moderating role is fantasy proneness, defined as a deep and profound involvement in fantasy and imagination (Rhue & Lynn, 1987). Individuals scoring high on fantasy proneness measures have been found to engage in more frequent sexual fantasizing, including that which involves deviant content (Bartels, Harkins, & Beech, 2020). However, further research is needed to determine the correlation between fantasy proneness and offending behavior.
Offense-Supportive Beliefs
Offense-supportive beliefs are a key etiological factor for sexual offending (Szumski, Bartels, Beech, & Fisher, 2018) as well as a risk factor for sexual recidivism (Helmus, Hanson, Babchishin, & Mann, 2013). These explicit beliefs are thought to be underpinned by implicit theories (Ward, 2000): clusters of subconscious core beliefs developed at a young age concerning oneself, others, the world, and sexuality. Implicit theories bias the processing and perception of social information in a manner that increases the risk of sexual offending (Ward, 2000) and can differ depending on the type of offense committed. For example, the implicit theories of men who have raped adult women can lead to the beliefs that women cannot be trusted, that women are sex objects, that men have a right to sex with whomever they choose, that dominance is necessary in a hostile world, and that men's sex drive is uncontrollable (Polaschek & Gannon, 2004). In contrast, the implicit theories of men who have molested children can lead to the beliefs that children are sexual beings, that sexual abuse is harmless, that men have the right to have sex with minors, and that the adult world is hostile (Ward & Keenan, 1999).
Implicit theories may also affect the processing of sexual fantasy content such that offense-supportive beliefs affect the fantasy-behavior link. In the DPM-ST, it is proposed that spontaneous sexual thoughts (triggered by a cue) undergo a process of appraisal. This involves validating whether the thought content is congruent with one's current beliefs. If the thought is deemed congruent, it is more likely to be elaborated upon via sexual fantasizing. For example, a man may sexually fantasize about rape if it is congruent with the belief that women are sexual objects who will enjoy sex even if it is forced upon them. Indeed, rape-supportive beliefs in community men have been found to be correlated with sexual fantasies about dominance (Bartels & Gannon, 2009; Zurbriggen & Yost, 2004) and rape (Greendlinger & Byrne, 1987), and hostile beliefs about women have been found to be associated with sexually aggressive fantasies (Bartels et al., 2020).
It could be that a similar appraisal process occurs in relation to enacting offense-related behavior, in that fantasy content that is congruent with one's offense-supportive beliefs will be appraised positively and, in turn, be at a greater likelihood of being enacted. In support, sexual aggression in both men and women has been found to be associated with sexual fantasies about dominance, but only when the sexual dominance content is positively appraised (Moyano & Sierra, 2016). Also, men convicted of rape or sexual murder have reported that the content of their offense-supportive beliefs was reflected in the content of their sexual fantasies (Beech, Fisher, & Ward, 2005; Beech, Ward, & Fisher, 2006) and that enacting these fantasies was the motivation for their offending.
Defining "High Risk" Sexual Fantasies
The moderating factors discussed above are key components in the following definition of "high risk" sexual fantasies proposed by Bartels and Gannon (2011, p. 553): "mental imagery involving an elaborate sexual scenario or script with distorted aims and/or means, whose repeated use can increase the risk of the fantasizer committing a sexual offense in the presence of certain contexts and/or dispositions" (see Figure 1). This definition implies that deviant fantasy content may not increase the sexual offense risk of the fantasizer unless they possess traits that facilitate disinhibition or harbor beliefs that validate the content. Bartels and Gannon (2011) also argue that the content of "high risk" fantasies need not necessarily be deviant, but rather can represent a distorted view of reality. For example, a man may fantasize about meeting an attractive woman in a park, leading her to his car in a display of dominance, and having intercourse with her at his apartment. In this example, there is no culturally deviant content; however, the content may be validated by the man's implicit theories (e.g., women are sex objects and there is a need to be dominant in a hostile world). This may lead the man to enact his fantasy script as intended, even if the woman does not consent, ultimately leading to the commission of a sexual offense.
Figure 1
Operational Definition of "High Risk" Sexual Fantasies (Bartels & Gannon, 2011)
Assessing "High Risk" Sexual Fantasies
Assessing the content and use of "high risk" sexual fantasies is important for both research and clinical practice. The most common methods documented within the literature include interviews, fantasy diaries, questionnaires, and asking clients to write out a favorite sexual fantasy scenario (Leitenberg & Henning, 1995). The first and last methods can provide richer detail about the fantasy content, which is useful since, as Turner-Moore and Waterman (2017) have noted, sexual fantasy content often comprises a specific target, behavior, and location (e.g., oral sex with an ex-partner on a beach). Such methods can also help uncover other useful information such as the triggers and functions of sexual fantasies. However, these two important methods have their limitations. For example, it is difficult to quantify and standardize the data obtained via interviews and narratives of fantasy scenarios, and they also do not provide a clear indication of how often clients fantasize about the content they report.
In contrast, fantasy diaries are typically completed over a prespecified time period, and so can more readily provide frequency data as well as other useful information regarding content, triggers, and context (McKibben et al., 1994). Researchers and clinicians simply need to ensure that the necessary questions/criteria are incorporated into the response sheet of the diaries. The challenge, however, is that this method requires respondents to be aware of and honestly record this information. This can be problematic, as responses may be affected by social desirability biases, and it may be embarrassing or inconvenient to record fantasy information in real time (e.g., when in the presence of others).
The final, and perhaps most common, method for assessing sexual fantasies is through the use of a sexual fantasy questionnaire. These questionnaires typically consist of a list of sexual behaviors (i.e., items) that a respondent rates on a Likert scale to capture how often they sexually fantasize about each. This enables an array of fantasy content to be assessed, including "high risk" sexual fantasies, meaning that useful information about both content and frequency can be obtained. Some notable examples of validated questionnaires include the Wilson Sex Fantasy Questionnaire (WSFQ; Wilson, 1978), the Sexual Fantasy Questionnaire (Gray et al., 2003), the Paraphilic Sexual Fantasy Questionnaire (O'Donohue et al., 1997), and the Multidimensional Assessment of Sex and Aggression (MASA; Knight et al., 1994). A number of these evidence-based instruments are used as part of routine assessment batteries in programs designed to treat sexual offending behavior (see Hudson et al., 2002). However, there are two notable challenges that researchers and clinicians should be aware of when using sexual fantasy questionnaires.
The first issue with sexual fantasy questionnaires is that the majority of them do not provide a clear operational definition of "sexual fantasy", or they define the term in an all-encompassing manner (e.g., "any sexual thought that is arousing"). This is important to note when we consider the distinction between fleeting sexual thoughts and active sexual fantasizing. That is, a client who experiences many involuntary, fleeting sexual thoughts about children will score high on a child-related fantasy item. However, someone who frequently and deliberately envisions sexual scenarios involving children for a prolonged period of time would also score high on this item. Hence, this distinction is important to ascertain, as it can help us to better understand the nature and role of a client's sexual thoughts. This is beneficial for case formulations, management plan development, and monitoring.
The second issue with sexual fantasy questionnaires is that they rarely provide any indication of how the items contained therein may interrelate. A client may actively fantasize about tying up an older adult, passionately kissing a young adult, and having sex with a child. These behaviors may be present within a single sexual fantasy scenario yet, because sexual fantasy questionnaires are comprised of distinct items, these fantasized behaviors will likely be treated as distinct sexual fantasies. Similarly, different behaviors may be fantasized about with different categories of people (e.g., a man with a non-exclusive interest in children may fantasize about romantic behaviors with children, but sexually aggressive behaviors with adult women). The interplay between different behaviors and targets cannot be ascertained from most questionnaires at this time, and so would require a follow-up exploration by a skilled researcher or clinician (e.g., via an interview).
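As an illustration of how responses to such Likert-based instruments are typically handled in practice, the following is a minimal scoring sketch in Python; the item names, subscale grouping, and flagging threshold are hypothetical and are not taken from any of the instruments cited above.

# Hypothetical example of summarizing Likert ratings from a fantasy questionnaire.
responses = {"item_intimate_1": 4, "item_intimate_2": 5,
             "item_dominance_1": 3, "item_dominance_2": 1}
subscales = {"intimate": ["item_intimate_1", "item_intimate_2"],
             "dominance": ["item_dominance_1", "item_dominance_2"]}

for name, items in subscales.items():
    mean = sum(responses[i] for i in items) / len(items)
    note = " (explore further in interview)" if mean >= 4 else ""
    print(f"{name}: mean frequency rating {mean:.1f}{note}")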
Treating "High Risk" Sexual Fantasies
Given its theorized and empirical relationship with sexual offending, sexual fantasizing should be a target for treatment if the content is deemed to be "high risk" and is frequently fantasized about. It is not advised that practitioners begin focusing on the reduction of "high risk" fantasies until a therapeutic alliance has been established, as this is when clients will feel most comfortable in disclosing such personal, and in some cases shameful, thoughts. When treatment does begin, it is important at the start for practitioners to emphasize that it is possible to control "high risk" fantasies, because men who sexually offend often hold the implicit theory that their sexual impulses are uncontrollable (Polaschek & Gannon, 2004). Individuals undergoing sexual offender treatment have also rated it as useful to be taught that fantasizing is not wrong in and of itself, that it is better to accept (rather than to fight) sexual thoughts, and that sexual thoughts are different from actual behavior (Dwyer & Myers, 1990). Finally, it is recommended that a thorough diagnostic assessment be conducted at the beginning of treatment to ensure that sexual fantasies are not simply intrusive thoughts indicative of a serious mental illness such as schizophrenia, obsessive-compulsive disorder, or post-traumatic stress disorder. Once these steps have been taken, research suggests that the practitioner can follow one of four approaches to reduce the frequency and intensity of "high risk" sexual fantasies: (1) the behavioral approach, (2) the cognitive approach, (3) the imagination approach, and (4) the mindfulness-based approach.
The Behavioral Approach
There is mixed evidence as to the usefulness of the behavioral approach in reducing the frequency and intensity of "high risk" sexual fantasies, particularly in relation to masturbatory reconditioning techniques designed to reduce the level of sexual arousal elicited by deviant sexual fantasies and/or to increase the level of arousal elicited by non-deviant fantasies. These techniques include thematic shift (Marquis, 1970), fantasy alternation (Abel, Blanchard, Barlow, & Flanagan, 1975), directed masturbation (Maletzky, 1985), and masturbatory satiation (Marshall & Lippens, 1977). Thematic shift involves the client masturbating while imagining acting out their deviant sexual fantasies and then switching to a non-deviant sexual fantasy prior to orgasm. Fantasy alternation involves the client masturbating on multiple occasions in a phallometric laboratory setting, alternating between deviant sexual fantasies and non-deviant sexual fantasies until the client recognizes they are able to be aroused by non-deviant stimuli.
Directed masturbation involves the practitioner instructing the client only to masturbate to non-deviant fantasies, often using prepared scripts and visual aids, to reinforce the strength of the client's arousal to this sort of fantasy, hence increasing its frequency. Masturbatory satiation involves the client masturbating while imagining acting out their deviant sexual fantasies and then, after orgasm, continuing to masturbate for an extended period of time, resulting in boredom, fatigue, and discomfort.
In a review of these techniques, Laws and Marshall (1991) concluded that directed masturbation and masturbatory satiation have empirical support for their efficacy, whereas thematic shift and fantasy alternation do not. More recently, Allen and colleagues (2020) conducted an updated systematic review and found that treatment programs incorporating behavioral reconditioning techniques (especially those that employed more than one technique) were effective at reducing the frequency of deviant sexual fantasizing. In addition, Gannon and colleagues recently published a meta-analysis which found that the general inclusion of behavioral reconditioning techniques in sexual offending treatment programs added incrementally to the programs' efficacy in reducing sexual recidivism (Gannon, Olver, Mallion, & James, 2019). Hence, further research is necessary to expand the evidence base and to clarify how techniques following the behavioral approach can be most helpful to practitioners and their clients.
The Cognitive Approach
The cognitive approach to treatment focuses on disrupting the thought processes underlying "high risk" sexual fantasies. The most commonly employed technique by practitioners following the cognitive approach is thought suppression, which involves asking clients to actively try to stop imagining deviant sexual fantasies. Qualitative research suggests that men who have sexually offended also self-report engaging in thought suppression, as they view their deviant sexual fantasies as having had a causal role in their offending, or because the fantasies remind them of the consequences of their offense (Gee et al., 2004). Although thought suppression is commonsensical, the technique has been found to actually increase the frequency of targeted fantasies post-suppression, making it "at best a weak strategy, and at worst dangerous" (Shingler, 2009, p. 51).
A lesser used but empirically more promising cognitive technique involves imagery-competing tasks, which are tasks performed at the same time as the targeted mental imagery. Since both the task and the imagery compete for working memory resources, the experience of the mental imagery becomes impaired. The DPM-ST proposes that sexual fantasizing requires the resources of working memory, as it involves finding, manipulating, and holding in mind the information needed to construct sexual mental imagery (Bartels et al., 2020). Based on this premise, Bartels et al. (2018) recruited a non-forensic sample to examine whether the imagery-competing task of bilateral eye-movements (performed while fantasizing sexually) led to an impairment in sexual fantasy content. As hypothesized, taxing working memory via eye-movements led to a significant reduction in sexual fantasy vividness, emotionality, and arousability relative to the "no eye-movement" condition. This signifies a promising treatment technique that could be implemented across treatment formats (Allen et al., 2020). However, further research is needed to test its efficacy in relation to "high risk" sexual fantasies and forensic populations.
The Imagination and Mindfulness-Based Approaches
Although there is currently a limited evidence base regarding the efficacy of the imagination and mindfulness-based approaches to reducing the frequency and intensity of "high risk" sexual fantasies, there is theoretical justification for the utility of the methods falling under these modalities. In both approaches, the practitioner plays a central role, inducing deviant sexual fantasies and then helping the client to modify their reaction to the mental representation in real time. The intensity of the fantasy content can be changed depending on the practitioner's level of descriptiveness, as well as the client's moving of their eyes horizontally or vertically during the induction process (Marks, 1973).
The aim of the imagination approach is to reduce the attractiveness of "high risk" sexual fantasies for clients (Urbaniok & Endrass, 2006). This is accomplished by methods such as pausing or modifying aspects of the sexual fantasy to dampen the client's stimulation while fantasizing. For example, practitioners can train clients to add or remove color from fantasies, or to zoom in or out of any particular aspect of their script. By reinforcing these modified fantasies, practitioners may selectively reinforce aspects of the mental representations to support the development of new, lower risk thought patterns. This approach shares similarities with "imagery rescripting", which has shown great promise in other clinical domains such as PTSD, social anxiety disorder, and major depression (Morina, Lancee, & Arntz, 2017) and, thus, is worth exploring in relation to its efficacy with individuals who have sexually offended.
Mindfulness-based methods focus on helping clients experience "high risk" sexual fantasies and associated physiological reactions without engaging with, suppressing, or acting on them (Dafoe, 2011). Clients are taught how to control their attention in order to focus on the present, accepting whatever thoughts, feelings, and sensations they might be experiencing whilst fantasizing. They are not to label them as good or bad, right or wrong, legal or illegal, healthy or unhealthy, and this non-judgmental attention is held until the deviant thought is no longer present. The mindfulness-based approach is recommended for clients whose "high risk" sexual fantasies unfold unwillingly and (perceivably) uncontrollably. For such clients, mindfulness techniques may assist in reducing impulsivity, developing insight, and improving introspection (Howells, Tennant, Day, & Elmer, 2010). Preliminary research findings on the effectiveness of such techniques are promising (Dafoe, 2011; Singh et al., 2011).
Conclusion
Due to its modifiable nature, the content of sexual fantasizing may represent an important treatment target for practitioners working with people who have sexually offended or populations vulnerable to engaging in such behavior. However, before conclusions can be made or best practice recommendations put forth, further research is necessary. Systematic review research is also needed to establish the common definitional components of "high risk" sexual fantasies across proposed models. Thereafter, qualitative investigations of practitioners who routinely work with people who have sexually offended are needed to establish the clinical utility of the identified definitional components. Of particular interest is whether practitioners believe sexual fantasies can be "high risk" in individuals not already at risk of committing a sexual offense. The findings of such quantitative and qualitative research would advance our understanding of assessment, management, and monitoring best practices for this important forensic population.
Funding:
The authors have no funding to report. | 2021-07-26T00:05:29.982Z | 2021-06-15T00:00:00.000 | {
"year": 2021,
"sha1": "9f1c07abbef6214941d699267e939624c233d614",
"oa_license": "CCBY",
"oa_url": "https://sotrap.psychopen.eu/index.php/sotrap/article/download/5291/5291.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d1bba46376b68b0a543049cceb671bf29a685395",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
3858149 | pes2o/s2orc | v3-fos-license | The Network Nullspace Property for Compressed Sensing of Big Data over Networks
We adapt the nullspace property of compressed sensing for sparse vectors to semi-supervised learning of labels for network-structured datasets. In particular, we derive a sufficient condition, which we term the network nullspace property, for convex optimization methods to accurately learn labels which form smooth graph signals. The network nullspace property involves both the network topology and the sampling strategy and can be used to guide the design of efficient sampling strategies, i.e., the selection of those data points whose labels provide the most information for the learning task.
I. INTRODUCTION
We introduce a novel recovery condition, termed the network nullspace property (NNSP), which guarantees accurate recovery of clustered ("piece-wise constant") graph signals from knowledge of their values on a small subset of sampled nodes. The NNSP couples the clustering structure of the underlying data graph to the locations of the sampled nodes by interpreting the underlying graph as a flow network.
The presented results apply to an arbitrary partitioning, but are most useful for a partitioning such that nodes in the same cluster are connected with edges of relatively large weights, whereas edges between clusters have low weights. Our analysis reveals that if cluster boundaries are well-connected (in a sense made precise) to the sampled nodes, then accurate recovery of clustered graph signals is possible by solving a convex optimization problem.
Most of the existing work applies spectral graph theory to define a notion of band-limited graph signals, e.g. based on principal subspaces of the graph Laplacian matrix, as well as sufficient conditions for recoverability, i.e., sampling theorems, for those signals [4], [16]. In contrast, our approach does not rely on spectral graph theory, but involves structural (connectivity) properties of the underlying data graph.
The problem setup considered in this work is very similar to those of [18], [21], which provide sufficient conditions such that a variant of the Lasso method accurately recovers smooth graph signals from noisy observations. However, in contrast to this line of work, we assume the graph signal values are only observed on a small subset of nodes.
II. PROBLEM FORMULATION
Many important applications involve massive heterogeneous datasets comprised of different data chunks, e.g., mixtures of audio, video, and text data [5]. Moreover, such datasets typically contain mostly unlabeled data points; only a small fraction is labeled. An efficient strategy for handling such heterogeneous datasets is to organize them as a network or data graph whose nodes represent individual data points.
II-A. Graph Signal Representation of Big Data
In what follows we consider datasets which are represented by a weighted data graph G = (V, E, W) with nodes V = {1, . . . , N}, each node representing an individual data point. These nodes are connected by edges {i, j} ∈ E. In particular, given some application-specific notion of similarity, the edges of the data graph G connect similar data points i, j ∈ V by an edge {i, j} ∈ E. In some applications it is possible to quantify the extent to which data points are similar, e.g., via the distance between sensors in a wireless sensor network [22]. Given two similar data points i, j ∈ V, we quantify the strength of their connection {i, j} ∈ E by a non-negative edge weight W_ij ≥ 0, which we collect in the symmetric weight matrix W ∈ R^(N×N). In what follows we will silently assume that the data graph G is oriented by declaring for each edge {i, j} ∈ E one node as the head e_+ and the other node as the tail e_-. For the oriented data graph we define the directed neighbourhoods N+(i) and N-(i) of a node i ∈ V as the sets of edges having i as their head and as their tail, respectively. Beside the edge structure E, network-structured datasets typically also carry label information which induces a graph signal defined over G. We define a graph signal x[·] over the graph G = (V, E, W) as a mapping V → R, which associates (labels) every node i ∈ V with the signal value x[i] ∈ R. In a supervised machine learning application, the signal values x[i] might represent class membership in a classification problem or the target (output) value in a regression problem. We denote the space of all graph signals, which is also known as the vertex space (cf. [6]), by R^V.
II-B. Graph Signal Recovery
We aim at recovering (learning) a graph signal x[·] ∈ R^V defined over the data graph G from observing its values x[i] only at the nodes of a small sampling set M ⊆ V. The recovery of the entire graph signal x[·] from the incomplete information provided by the signal samples {x[i]}_{i∈M} is possible under a smoothness assumption, which also underlies many supervised machine learning methods [3]. This smoothness assumption requires the signal values or labels of data points which are close, with respect to the data graph topology, to be similar. More formally, we expect the underlying graph signal x[·] ∈ R^V to have a relatively small total variation (TV) ||x[·]||_TV = Σ_{{i,j}∈E} W_ij |x[j] − x[i]|. The total variation of the graph signal x[·] obtained by summing only over a subset S ⊆ E of edges is denoted ||x[·]||_S. Some well-known examples of smooth graph signals include low-pass signals in digital signal processing, where samples at adjacent time instants are strongly correlated, and natural images, where close-by pixels tend to have similar color values. The class of graph signals with a small total variation are sparse in the sense of changing significantly over few edges only. In particular, if we stack the signal differences (across the edges {i, j} ∈ E) into a big vector of size |E|, then this vector is sparse in the ordinary sense of having only few significantly large entries [7].
In order to recover a signal with small TV ||x[·]||_TV from its signal values {x[i]}_{i∈M}, a natural strategy is to solve the convex optimization problem
x̂[·] ∈ argmin ||x̃[·]||_TV subject to x̃[i] = x[i] for all i ∈ M. (1)
There exist highly efficient methods for solving convex optimization problems of the form (1) (cf. [2], [11], [23] and the references therein).
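A minimal sketch of solving (1) with a generic convex solver (cvxpy) is given below; this is only an illustration, not the specialized methods of [2], [11], [23], and the function name and signature are hypothetical.

import cvxpy as cp
import numpy as np

def recover(num_nodes, edges, weights, sampled, values):
    # edges: list of (tail, head) index pairs; weights: array of W_ij >= 0
    # sampled: indices of the sampling set M; values: observed x[i] for i in M
    x = cp.Variable(num_nodes)
    heads = [j for _, j in edges]
    tails = [i for i, _ in edges]
    tv = cp.sum(cp.multiply(weights, cp.abs(x[heads] - x[tails])))
    problem = cp.Problem(cp.Minimize(tv), [x[sampled] == values])
    problem.solve()
    return x.value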
III. RECOVERY CONDITIONS
The accuracy of any learning method based on solving (1) depends on the deviations between the solutions x̂[·] of the optimization problem (1) and the true underlying graph signal x[·] ∈ R^V. In what follows, we introduce the network nullspace condition as a sufficient condition on the sampling set and graph topology such that any solution x̂[·] of (1) accurately resembles an underlying clustered graph signal of the form
x[i] = Σ_{l=1}^{|F|} a_l I_{C_l}[i], (2)
with coefficients a_l ∈ R and cluster indicator signals I_{C_l}[i] = 1 for i ∈ C_l and I_{C_l}[i] = 0 otherwise. Here, we used a fixed partition F = {C_1, . . . , C_|F|} of the entire data graph G into disjoint clusters C_l ⊆ V. While our analysis applies to an arbitrary partition F, our results are most useful for reasonable partitions where nodes in the same cluster are connected by many edges with large weight, while nodes of different clusters are loosely connected by few edges with small weights. Such reasonable partitions can be obtained by one of the recent highly scalable clustering methods (cf. [9], [19]). However, we highlight that knowledge of the partition is only required for the analysis of methods based on solving the recovery problem (1); it is not required for the actual implementation of those methods, as the recovery problem (1) itself does not involve the partition.
We will characterize a partition F by its boundary ∂F ⊆ E, i.e., the set of edges connecting nodes from different clusters. We highlight that the recovery problem (1) does not require knowledge of the partition F. Rather, the partition F and the corresponding signal model (2) are only used for analyzing the solutions of (1). Consider a clustered graph signal x[·] ∈ R^V of the form (2). We observe its values x[i] at the sampled nodes i ∈ M only. In order to have any chance of recovering the complete signal only from the samples {x[i]}_{i∈M}, we have to restrict the nullspace of the sampling set, which we define as
K(M) = {u[·] ∈ R^V : u[i] = 0 for all i ∈ M}. (4)
In order to define the network nullspace property, which characterizes the solutions of the recovery problem (1), we need the notion of a flow with demands [14].
Definition 1. Given prescribed demands d[i] ∈ R, a flow with demands is a mapping f : E → R which satisfies the conservation law
Σ_{e∈N+(i)} f[e] − Σ_{e∈N−(i)} f[e] = d[i]
at every node i ∈ V.
For a more detailed discussion of the concept of network flows, we refer to [14]. In this paper, we will use the flow concept in order to characterize the connectivity properties or topology of a data graph G = (V, E, W) by interpreting the edge weights W_ij as capacity constraints that limit the amount of flow along the edge {i, j}. In particular, using network flows with demands will allow us to adapt the nullspace property, introduced within the theory of compressed sensing [8], [10] for sparse signals, to the problem of recovering smooth graph signals. It turns out that if NNSP-(M, F) is satisfied by the sampling set M for a partition F, then the nullspace of the sampling process, i.e., the set of graph signals which vanish on the sampling set, which is precisely the nullspace K(M) (cf. (4)), cannot contain a non-zero clustered graph signal of the form (2).
The formulation of the NNSP involves a search over all signatures, whose number is around 2^|∂F|, which might be intractable for large data graphs. However, similar to many results in compressed sensing, we expect using probabilistic models for the data graph to render the verification of the NNSP tractable [10]. In particular, we expect that probabilistic statements about how likely the NNSP is satisfied for random data graphs (e.g., conforming to a stochastic block model) can be obtained easily. Now we are ready to state our main result: the network nullspace condition implies that the solution of (1) is unique and coincides with a true underlying clustered graph signal of the form (2).
Theorem 3. Consider a clustered graph signal x_c[·] ∈ X (cf. (2)) which is observed only at the sampling set M ⊆ V. If NNSP-(M, F) holds, then the solution of (1) is unique and coincides with x_c[·].
Thus, if NNSP-(M, F) holds, we can expect recovery algorithms based on solving (1) to accurately learn clustered graph signals x[·] of the form (2).
The scope of Theorem 3 is somewhat limited as it applies only to graph signals which are precisely of the form (2). We now state a more general result applying to any graph signal x[·] ∈ R V .
Thus, as long as the underlying graph signal x[·] can be well approximated by a clustered signal of the form (2), any solution x̂[·] of (1) is a graph signal which varies significantly only over the boundary edges ∂F. We highlight that the error bound (5) only controls the TV (semi-)norm of the error signal x̂[·] − x[·]. In particular, this bound does not directly allow us to quantify the size of the global mean squared error. One important use of Theorems 3 and 4 is that they guide the choice of the sampling set M. In particular, for a suitably chosen partition F and associated signal model (2), one should aim at sampling nodes such that the NNSP is likely to be satisfied. This approach has been studied empirically in [1], [15], verifying accurate recovery by efficient convex optimization methods using sampling sets satisfying the NNSP (cf. Definition 2).
IV. NUMERICAL EXPERIMENTS
We now verify the relevance of the NNSP for the graph signal recovery problem using a synthetic dataset whose underlying data graph is a chain graph G_chain. This data graph contains |V| = 100 nodes which are connected by |E| = 99 undirected edges {i, i + 1}, for i ∈ {1, . . . , 99}, and partitioned into |F| = 10 equal-size clusters F = {C_l}_{l=1,...,10}, each cluster containing 10 consecutive nodes. The edges connecting nodes in the same cluster have weight W_ij = 4, while those connecting different clusters have weight W_ij = 2. For this data graph we generated a clustered graph signal x[i] of the form (2) with alternating coefficients a_l ∈ {1, 5}.
The graph signal x[i] is observed only on the nodes belonging to a sampling set, which is either M_1 or M_2. The sampling set M_1 contains exactly one node from each cluster C_l and thus, as can be verified easily, satisfies the NNSP (cf. Definition 2). While having the same size as M_1, the sampling set M_2 does not contain any node of clusters C_2 and C_4.
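The synthetic setup just described can be reproduced as follows (Python); node indexing is 0-based here, and the particular placement of the sampled nodes within each cluster is our own choice, since the text does not specify it.

import numpy as np

N = 100
edges = [(i, i + 1) for i in range(N - 1)]
# edge {i, i+1} crosses a cluster boundary whenever i+1 is a multiple of 10
weights = np.array([2.0 if (i + 1) % 10 == 0 else 4.0 for i, _ in edges])
cluster = np.arange(N) // 10
x_true = np.where(cluster % 2 == 0, 1.0, 5.0)  # alternating a_l in {1, 5}

M1 = [10 * l + 5 for l in range(10)]           # one sample per cluster: NNSP holds
# M2 skips clusters C_2 and C_4 (nodes 10-19 and 30-39); which clusters receive
# the two replacement samples is a guess, as the text does not say:
M2 = [m for m in M1 if m not in (15, 35)] + [2, 42]

Feeding M1 (or M2) and x_true[M1] into a solver for (1), such as the recover() sketch given after (1) above, reproduces the qualitative behavior reported for Fig. 1.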
In Figure 1, we illustrate the recovered signals obtained for each of the two sampling sets by solving (1) using the sparse label propagation (SLP) algorithm [11]. The signal recovered from the sampling set M_1, which satisfies the NNSP, closely resembles the true underlying clustered graph signal. In contrast, the sampling set M_2, which does not satisfy the NNSP, results in a recovered signal which significantly deviates from the true signal.
V. CONCLUSIONS
We considered the problem of recovering clustered graph signals, defined over complex networks, from observing their signal values on a small set of sampled nodes. By extending tools from compressed sensing, we derived a sufficient condition, the network nullspace condition, on the graph topology and sampling set such that a convex recovery method is accurate. This condition is based on the connectivity properties of the underlying network. In particular, it requires the existence of certain network flows, with the edge weights of the data graph interpreted as capacities. The network nullspace condition involves both the sampling set and the cluster structure of the data graph. Roughly speaking, it requires sampling more densely near the boundaries between different clusters.
VI. PROOFS
The proofs of Theorem 3 and Theorem 4 rely on recognizing the recovery problem (1) as an analysis ℓ1-minimization problem [17]. A sufficient condition for analysis ℓ1-minimization to deliver the correct solution x[·] is given by the analysis nullspace property [13], [17]. In particular, the sampling set M is said to satisfy the stable analysis nullspace property w.r.t. an edge set S ⊆ E if
||u[·]||_{E\S} ≥ 2 ||u[·]||_S for any u[·] ∈ K(M) (cf. (4)). (6)
Note that, since x[i] is constant for all nodes i ∈ C_l in the same cluster,
x[j] = x[i] for any edge {i, j} ∈ E \ ∂F. (7)
Combining (7) with (6) via the triangle inequality yields Lemma 5: if (6) holds for S = ∂F, the solution of (1) is unique and coincides with the clustered signal. The next result extends Lemma 5 to graph signals x[·] ∈ R^V which are not exactly clustered, but which can be well approximated by a clustered signal of the form (2). Lemma 6. Consider a data graph G and a fixed partition F = {C_1, . . . , C_|F|} of its nodes into disjoint clusters C_l ⊆ V. We observe a graph signal x[·] ∈ R^V at the sampling set M ⊆ V. If (6) holds for S = ∂F, any solution x̂[·] of (1) satisfies the error bound (5). Proof. The argument closely follows the proof of [12, Theorem 8]. First note that any solution x̂[·] of (1) obeys ||x̂[·]||_TV ≤ ||x[·]||_TV, since x[·] is trivially feasible for (1). Applying the triangle inequality to this estimate and then using (6) twice leads to the bound (5). It remains to verify Lemma 7, i.e., that the NNSP implies the stable nullspace property (6) for S = ∂F. Without loss of generality we may consider a flow with positive value on the boundary edge at hand: according to Definition 2, if there exists a flow with f[e] > 0 for some e ∈ ∂F, there also exists a flow with f[e] < 0 for the same edge e ∈ ∂F. Next, we add an extra node s to the data graph G which is connected to all sampled nodes i ∈ M by an edge e_i = {s, i}, oriented such that (e_i)_+ = s. We assign to each edge e_i = {s, i} the flow f[e_i] = g[i]. It can be verified easily that the flow over the augmented graph has zero demands at all nodes. Thus, we can apply Tellegen's theorem [20] to obtain ||u[·]||_{E\∂F} ≥ 2 ||u[·]||_{∂F}. We obtain Theorem 3 by combining Lemma 7 with Lemma 5. In order to verify Theorem 4 we note that, by Lemma 7, the NNSP according to Definition 2 implies the stable nullspace condition (6) for S = ∂F. Therefore, we can invoke Lemma 6 to reach (5). | 2017-09-06T13:59:19.000Z | 2017-05-11T00:00:00.000 | {
"year": 2018,
"sha1": "f3560360dc37861912b08787211e54e1889415bf",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fams.2018.00009/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "5fdbdcfad91982389948fba9e4e092ecb94bc224",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
238743094 | pes2o/s2orc | v3-fos-license | Increased Risks of Death and Hospitalization in Influenza/Pneumonia and Sepsis for Individuals Affected by Psychotic Disorders, Bipolar Disorders, and Single Manic Episodes: A Retrospective Cross-Sectional Study
Individuals with severe mental disorders (SMDs) such as psychotic disorders, bipolar disorders, and single manic episodes have increased mortality associated with COVID-19 infection. We set up a population-based study to examine whether individuals with SMD also had a higher risk of hospitalization and death from other infectious conditions. Anonymized and summarized data from multiple Swedish patient registers covering the entire Swedish population were supplied by the Swedish National Board of Health and Welfare. The frequencies of hospitalizations and deaths associated with influenza/pneumonia and sepsis in individuals with SMD were compared with the rest of the population during 2018–2019. Possible contributing comorbidities were also examined, of which diabetes, cardiovascular disease, chronic lung disease, and hypertension were chosen. A total of 7,780,727 individuals were included in the study; 97,034 (1.2%) cases with SMD and 7,683,693 (98.8%) controls. Individuals with SMD had increased risk of death associated with influenza/pneumonia (OR = 2.06, 95% CI [1.87–2.27]) and sepsis (OR = 1.61, 95% CI [1.38–1.89]). They also had an increased risk of hospitalization associated with influenza/pneumonia (OR = 2.12, 95% CI [2.03–2.20]) and sepsis (OR = 1.89, 95% CI [1.75–2.03]). Our results identify a need for further evaluation of whether these individuals should be included in prioritized risk groups for vaccination against infectious diseases other than COVID-19.
Introduction
Severe mental disorders (SMDs) such as bipolar and psychotic disorders affect an estimated 1-3% of the adult population [1,2]. The life expectancy for individuals with SMD is reduced by approximately 10 to 20 years compared with the general population [3,4]. Studies have shown that most of the reduction in life expectancy seems to be due to somatic comorbidities rather than external events such as suicide or accidents [5,6]. Individuals with SMDs have an elevated risk of prematurely dying from cardiovascular disease, diabetes, and chronic obstructive pulmonary disease. The causes behind this elevated risk are not fully understood. However, some established risk factors such as smoking, substance use, obesity, poor diet, lack of exercise, and hypertension are more common in individuals with SMD [7,8]. There are also concerns regarding inequality in diagnosis and treatment of somatic risk factors and enrolment in primary and secondary prophylactic measures compared with the general population [9]. Individuals with SMDs may thus receive fewer and more delayed medical interventions. They may also be less likely to adhere to prescribed treatments and to seek medical attention for somatic diseases when needed [10,11].
The world remains in the grip of the COVID-19 pandemic [12]. Recent studies have shown that individuals with SMDs are at significantly increased risk of COVID-19-associated death compared with the general population [2,13]. Many of the occurring somatic comorbidities in individuals with SMDs match the identified risk factors for severe infection. Yet, even individuals with SMD without any known risk factors seem to have a threefold increased risk of COVID-19-associated death [2,14]. Some guidelines have now included SMDs as a risk group for COVID-19 [15,16]. Pre-COVID-19 studies, analyzing data from 2003-2009, have indicated that individuals with SMDs have an increased risk of death associated with influenza and pneumonia [5,6,17]. These results have not yet led to individuals with SMDs being prioritized for vaccinations against these respiratory infections [18,19].
Therefore, we set up a population-based study to examine whether individuals with SMDs remain at a higher risk of death and hospitalization due to influenza/pneumonia. The risk of death and hospitalization associated with sepsis was also explored, as it is a serious condition with similarities to COVID-19 [20] and there are currently few studies examining sepsis in individuals with SMDs. This way, we intended to update the current evidence base as a decision aid for public health officials. Such information could motivate additional actions and strategies to promote health in these individuals, for example, by inclusion in prioritized groups for vaccination against respiratory infectious pathogens such as influenza and pneumococci.
Study Design
This retrospective cross-sectional study was based on data from the Swedish National Patient Register, the Swedish Cause of Death Register, and the Swedish Prescribed Drug Register, all of which are managed by the Swedish National Board of Health and Welfare. The register data were linked, anonymized, and summarized by a data manager at the Swedish National Board of Health and Welfare. Statistical analysis was thereafter performed by the research team. The study was approved by the Swedish Ethical Review Authority (DNR 2020-02759). As the study was solely register-based, informed consent was not required [21]. The method was checked against the STROBE guidelines [22].
Data Sources
Individuals with SMDs were identified from the Swedish National Patient Register, which covers all inpatient care and outpatient specialist care in Sweden [23]. The database includes demographical data and diagnoses coded according to the International Classification of Disease, 10th revision (ICD-10) [24]. Cause-of-death data were retrieved from the Swedish Cause of Death Register. This register includes all deaths among Swedish residents regardless of having occurred in Sweden or abroad [25]. The causes of death are coded according to ICD-10. Medicine prescription data were retrieved from the Swedish Prescribed Drug Register. This register contains information on all prescribed drugs dispensed at Swedish pharmacies, coded according to the Anatomical Therapeutic Chemical (ATC) Classification System [26,27]. Data from the registers were linked using the unique personal identification number assigned to all Swedish citizens at birth or immigration.
Study Population
Every person in the Swedish population aged 20 years or older by 31 December 2017 was included. Individuals with SMD were defined as cases; individuals without SMD, i.e., the rest of the population, were defined as controls. Inclusion from the age of 20 years was chosen to enable stratification in 10-year intervals; the number of outcomes in the age range between 18 and 20 years was also assumed to be negligible.
Outcomes
There were two outcomes: death and hospitalization, associated with either influenza/pneumonia or sepsis, occurring in a two-year period between 1 January 2018 and 31 December 2019. The outcome of death associated with pneumonia/influenza was defined as registration of either influenza or any-cause pneumonia, i.e., ICD-10 codes J09-J18, as an underlying or contributing cause of death. The outcome of death associated with sepsis was defined as registration of any of the ICD-10 codes A40, A41, R57.2, and R65 as an underlying or contributing cause of death. Likewise, the outcomes of hospitalization were defined as a discharge diagnosis registered with the above-mentioned ICD-10 codes in the Swedish National Patient Register. Any individuals registered with both influenza/pneumonia and sepsis were included in both categories.
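To make the outcome definitions concrete, here is a small illustrative helper (not the actual registry extraction code) that flags a record's ICD-10 codes against the two outcome definitions above; the example code lists are hypothetical.

```python
# Illustrative helper (not the actual registry extraction code): flag a
# record's ICD-10 codes against the two outcome definitions above.
INFLUENZA_PNEUMONIA = {f"J{c:02d}" for c in range(9, 19)}   # J09-J18
SEPSIS = {"A40", "A41", "R57.2", "R65"}

def outcome_flags(icd_codes):
    """Return (influenza/pneumonia, sepsis) flags; a record matching both
    definitions is counted in both categories, as in the study."""
    seen = set(icd_codes) | {code.split(".")[0] for code in icd_codes}
    return bool(seen & INFLUENZA_PNEUMONIA), bool(seen & SEPSIS)

print(outcome_flags(["J15.9", "I10"]))    # (True, False)
print(outcome_flags(["A41.9", "J18.0"]))  # (True, True)
```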
Exposures
The main exposure was SMD, defined as a recurrent diagnosis of either any psychotic disorder (ICD-10 codes F20, F22, and F25) or bipolar disorders/single manic episodes (ICD-10 codes F30 and F31), on at least two separate occasions between 1998 and 2017. The decision to combine psychotic and bipolar disorders into a single category was made to ensure a sufficient sample size and to enable comparison with previous Swedish register studies [2,5,6]. Somatic comorbidities were examined in terms of diabetes, hypertension, cardiovascular disease, and chronic respiratory diseases. The comorbidities were included if registered within a five-year interval before the outcomes, i.e., between 1 January 2013 and 31 December 2017, to increase the chance that they were currently existent. The comorbidities were chosen for being (a) known to be associated with SMDs, (b) sufficiently prevalent to yield statistical power, and (c) of potential etiological significance for the outcomes.
Statistical Methods
All data were linked, anonymized, and summarized by the Swedish National Board of Health and Welfare, who also performed an initial statistical assessment by tabulating the data into stratified age groups and categories of comorbidity. Odds ratios (ORs), 95% confidence intervals (CIs), and p-values were then calculated from the anonymized and tabulated data using Microsoft Excel [28,29]. As only summarized, but not individualized, data were available for confidentiality reasons, age groups and comorbidities were only considered separately. Combinations of comorbidities and age groups were not possible with the available data set. In the age group of 20-39 years, small numbers of outcomes were withheld for confidentiality reasons. These missing counts were set to 0 in the statistical analysis.
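For reference, the sketch below reproduces the overall influenza/pneumonia mortality odds ratio from the summarized counts reported in the Results (439/97,034 cases vs. 16,902/7,683,693 controls), using the standard Woolf (log-OR) confidence interval; the original analysis was performed in Excel and may have used a slightly different CI formula.

```python
# Minimal sketch of the odds-ratio calculation from a summarized 2x2 table,
# using the Woolf (log-OR) confidence interval. With the study's reported
# counts, this reproduces the headline OR = 2.06, 95% CI [1.87-2.27].
import math

def odds_ratio(a, b, c, d, z=1.96):
    """OR and CI for exposed (a events, b non-events) vs. unexposed (c, d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    return or_, (math.exp(math.log(or_) - z * se),
                 math.exp(math.log(or_) + z * se))

print(odds_ratio(439, 97_034 - 439, 16_902, 7_683_693 - 16_902))
# -> (2.06..., (1.87..., 2.27...))
```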
Baseline Characteristics
A total of 7,780,727 individuals were included in the study; 97,034 (1.2%) individuals with SMD and 7,683,693 (98.8%) individuals without SMDs. Compared with the rest of the population, fewer individuals with SMDs were found in the older age groups ≥60 years and the youngest group. All examined comorbidities except cardiovascular disease were more prevalent in the group with SMDs. The largest differences in the prevalence of comorbidities were observed for diabetes and chronic lung disease. All differences in age distribution and comorbidities between the groups were statistically significant (p < 0.001) (Table 1).
Death and Hospitalization Associated with Influenza/Pneumonia
The percentages of death associated with influenza/pneumonia are presented in Figure 1. There were 439 (0.5%) deaths associated with influenza/pneumonia in the individuals with SMD and 16,902 (0.2%) in the rest of the population. Overall, the group with SMD had double odds of death associated with influenza/pneumonia (OR = 2.06, 95% CI (1.87-2.27)). There were consistently increased odds of death associated with influenza/pneumonia across all age groups for the group with SMDs, with the highest odds in the age category 40-59 years (OR = 6.25, 95% CI (4.72-8.29)) and the lowest odds in the age category 80+ years (OR = 1.94, 95% CI (1.64-2.30)). In the group without any known comorbidity, the odds for death associated with influenza/pneumonia were more than double for individuals with SMDs (OR = 2.68, 95% CI (2.30-3.13)) (Table 2).

There were 2495 (2.57%) hospitalizations associated with influenza/pneumonia in individuals with SMDs and 94,572 (1.23%) in the rest of the population. Overall, compared with the rest of the population, the group with SMDs had double odds of hospitalization associated with influenza/pneumonia (OR = 2.12, 95% CI (2.03-2.20)). For individuals with SMDs without any known comorbidities, the odds for hospitalization associated with influenza/pneumonia were about 2.6-fold (OR = 2.56, 95% CI (2.41-2.72)) (Table 2). Full data on death and hospitalizations associated with influenza/pneumonia are available in Tables S1 and S2 and Figure S1.
Death and Hospitalization Associated with Sepsis
There were 156 (0.2%) deaths associated with sepsis in individuals with SMDs and 7666 (0.1%) in the rest of the population. The percentages of death associated with sepsis are presented in Figure 2. Overall, individuals with SMDs had increased odds of death associated with sepsis (OR = 1.61, 95% CI (1.38-1.89)). There were also increased odds for individuals with SMDs in the age groups between 40 and 79 years. When examining individuals with at least one known somatic comorbidity, individuals with SMDs had increased risk in the age groups between 40 and 79 years, with the highest odds in the group of 40-59 years. More than double the odds for death associated with sepsis were observed for individuals with SMDs without any known comorbidity (OR = 2.33, 95% CI (1.81-3.00)) (Table 3). There were 742 (0.76%) hospitalizations associated with sepsis in individuals with SMDs and 0.41% in the rest of the population. Compared with the rest of the population, individuals with SMDs had almost double the odds of hospitalization associated with sepsis (OR = 1.89, 95% CI (1.75-2.03)). Except for 80+ years, increased odds for hospitalization associated with sepsis were found for individuals with SMDs in all age groups. Individuals with SMDs without known comorbidities had more than double the odds of hospitalization compared with the rest of the population (OR = 2.20, 95% CI (1.97-2.47)) (Table 3). Full data on death and hospitalizations associated with sepsis are available in Tables S3 and S4 and Figure S2.
Discussion
This retrospective nationwide register study shows that, compared with the rest of the Swedish population, individuals with SMDs have increased risks of both death and hospitalization associated with pneumonia/influenza and sepsis. Generally, higher odds ratios were observed for death and hospitalization associated with pneumonia/influenza than for sepsis. There was an overall trend of the odds ratios peaking in the age groups 40-59 and 60-69 years, declining thereafter in the older age groups across all examined outcome categories. The smaller odds ratios in the older age groups may have arisen owing to individuals with less severe SMDs being physically healthier and having a longer life expectancy. Except for death associated with sepsis in the age groups 40-59, the highest odds ratios across all examined outcome categories were observed in individuals with SMDs without any of the known comorbidities examined in this study (diabetes, chronic lung disease, hypertension, or cardiovascular disease). However, the overall percentages in the outcome categories were generally higher in individuals with comorbidities and old age.
The increased risk of death associated with pneumonia/influenza in individuals with psychiatric disorders is in line with previous studies. Crump et al. examined causes of mortality in individuals with schizophrenia and bipolar disorders during 2003-2009. Compared with the general population in Sweden, there was an almost sevenfold increased risk of death due to influenza/pneumonia in individuals with schizophrenia and an about 3.5-fold risk in individuals with bipolar disorders [5,6]. Similarly, Miller et al. in the United States (US) reported a standardized mortality ratio for pneumonia/influenza of 6.6 for patients with SMDs, defined as individuals requiring at least one inpatient psychiatric hospitalization [17]. Standardized mortality rates for individuals with severe mental illnesses were also examined in a large retrospective cohort study in Wales [30]. In their study, the overall standardized mortality rate for pneumonia was almost fourfold, and it was ninefold in the group 45-64 years. For sepsis, the standardized mortality rate was threefold. To the best of our knowledge, there are currently few other studies that have examined the risk of death associated with sepsis in general for individuals with SMDs.
The risk of postoperative sepsis and associated death was examined in the United States. In that study, patients with schizophrenia had double the odds of postoperative sepsis compared with patients without schizophrenia. The risk of death was increased sevenfold [31]. A Danish study examined the risk of death within 30 days after infection. In that study, patients with psychotic or bipolar disorders had around 30 percent increased risks of death due to either pneumonia or sepsis compared with the general population [32]. Adverse clinical outcomes among patients hospitalized for pneumonia with and without schizophrenia were examined in a study from Taiwan. In that study, patients with schizophrenia had a 1.3-to 1.8-fold increased risk of ICU admission, mechanical ventilation, and acute respiratory failure [33]. However, the risk of in-hospital death did not differ significantly.
This study has several strengths and limitations. By including the entire Swedish population in the analysis, the risk of selection bias was eliminated, and statistical power was brought to a maximum. The results are thus valid for the Swedish population, but additional studies are needed to evaluate the generalizability to other parts of the world. All data were summarized by a statistician at the Swedish National Board of Health and Welfare, independently from the research group, thereby eliminating the risk of observation bias. The registers used in this study are regarded as highly validated [23]. The combination of influenza and all-cause pneumonia into a single category enabled comparison with other studies of individuals with SMDs, such as the studies by Crump et al. [5,6], but also constitutes a limitation as differentiation between specific infectious agents is not possible. As sepsis was identified by ICD-10 codes and not laboratory records, reliable data regarding etiology were not available for the current data set. The most common bacterial pathogen for pneumonia is Streptococcus pneumoniae, and one study has reported that individuals with schizophrenia or bipolar disorder are at increased risk of both pneumococcal pneumonia and septicemia [34,35]. Information regarding the causative infectious agent would, for example, be valuable for advising whether vaccination against influenza and/or pneumococci is motivated. Guidelines for influenza vaccination recommend prioritization for everyone above 65 years of age in Europe and above 50 years of age in the United States, regardless of other risk factors. Therefore, age stratification taking account of these thresholds could have provided more specific information on those currently not prioritized for influenza vaccination [18,36]. Furthermore, the combination of psychotic and bipolar disorder into a single SMD variable at the point of ordering the data prevented separate analyses of the disorders. Nevertheless, an increased risk of death associated with influenza/pneumonia in individuals with SMDs remains in this analogous follow-up to Crump et al. In future research, SMDs should be explored further, by stratification by medication adherence, psychiatric admissions, and use of mental health legislation. Illness duration is another potentially important factor. However, such a more detailed study of SMDs was not possible with the data set available for the current study. We chose not to include depressive disorders in the SMD group owing to the difficulty of quantifying severity in this heterogeneous group. Many individuals with major depressive disorders are monitored in primary care, and are thus not with full certainty included in the Swedish National Patient Register.
There could also be other contributing factors to the results, the exploration of which is beyond the scope of the current study. Important risk factors such as obesity, socioeconomic status, smoking, and excessive alcohol use are not recorded reliably, if at all, in the registers [7,8,37-45]. Owing to the limitations of the registers and the retrospective nature of the current study, further stratification by characteristics or matching of study populations was not possible. Prospective studies are needed in the future to reduce confounding. The list of comorbidities to be explored could be expanded in future research. Dementia is one such example, recently reported to be much more prevalent among patients with schizophrenia. Dementia is also a known risk factor for pneumonia [46,47]. There are also other socioeconomic and environmental differences; individuals with SMDs are more likely to be homeless, unemployed, and living in poverty, all of which may affect access to healthcare and increase the risk of infection [48-53]. Individuals with SMDs may also receive fewer and more delayed medical interventions for somatic diseases, possibly associated with discrimination and stigmatization [10,54]. Other undiagnosed somatic comorbidities may be more prevalent in individuals with SMDs, and may thus contribute to adverse outcomes [9,55,56]. For any given comorbidity, individuals with SMDs may also have greater clinical severity, for example, uncontrolled diabetes [57]. These unexplored factors most likely also add to the differences in the outcomes of this study, although the magnitudes are difficult to estimate. Furthermore, psychiatric medications commonly have weight gain as a side-effect and some may affect immune function, both of which likely also affect the outcomes [58-60].
Finally, the current COVID-19 pandemic has sparked extensive research and discussion regarding which individuals should be prioritized for vaccination [61]. Emerging data indicate that individuals with SMDs are at increased risk for severe COVID-19 infection, and several guidelines have recently included SMDs as a high-risk group for COVID-19 [15,16,52,62]. To the best of our knowledge, SMD without other known chronic medical diseases is not considered as a potential risk group for other respiratory infections such as severe influenza [18,63,64]. The European Centre for Disease Prevention and Control defines risk groups as persons at higher risk of adverse outcomes when infected with influenza and for whom vaccines are demonstrated to reduce the risk of those outcomes [63]. Additional studies are needed to confirm the influenza virus as a definitive cause of the increased risks of death and hospitalization and to explore the potential effects of vaccines against influenza in individuals with SMD. As with COVID-19, the findings of this study suggest that coexisting somatic comorbidities are not enough to explain the increased risks for individuals with SMDs associated with influenza/pneumonia. Therefore, the argument that individuals with SMDs will already be covered by vaccination priority strategies because of their physical health status does not hold.
Conclusions
Compared with the general population, individuals with SMDs have increased risks of death and hospitalization associated with both influenza/pneumonia and sepsis, even without known somatic comorbidities. Our findings identify a need for further evaluation of whether these individuals should be included in prioritized groups for vaccination against infections other than COVID-19.
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/10.3390/jcm10194411/s1, Figure S1: Hospitalizations associated with influenza/pneumonia between 2018 and 2019 in individuals with severe mental disorders and the general population; Figure S2: Hospitalizations associated with sepsis between 2018 and 2019 in individuals with severe mental disorders and the general population; Table S1: Deaths associated with influenza/pneumonia between 1 January 2018 and 31 December 2019 in patients with severe mental disorder vs. reference population according to age and comorbidities; Table S2: Hospitalizations associated with influenza/pneumonia between 1 January 2018 and 31 December 2019 in patients with severe mental disorder vs. reference population according to age and comorbidities; Table S3: Deaths associated with sepsis between 1 January 2018 and 31 December 2019 in patients with severe mental disorder vs. reference population according to age and comorbidities; Table S4: Hospitalizations associated with sepsis between 1 January 2018 and 31 December 2019 in patients with severe mental disorder vs. reference population according to age and comorbidities.

Institutional Review Board Statement: This study was approved by the Swedish Ethical Review Authority (DNR 2020-02759) and was performed according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) criteria.
Informed Consent Statement:
Informed consent was not required as the study was solely registerbased [21].
Data Availability Statement:
The raw data supporting the conclusions of this study are available on request from the corresponding author.
Conflicts of Interest: U.W. has received funding for educational activities on behalf of Norrbotten Region (Masterclass Psychiatry Programme 2014-2018 and EAPM 2016, Luleå, Sweden): Astra Zeneca, Eli Lilly, Janssen, Novartis, Otsuka/Lundbeck, Servier, Shire, and Sunovion. U.W. has received lecture honoraria from Lundbeck and is scheduled to deliver further lectures for Lundbeck and Otsuka, receiving honoraria for these activities. All other authors declare no competing interests. | 2021-10-05T20:09:55.002Z | 2021-09-26T00:00:00.000 | {
"year": 2021,
"sha1": "8eaa6a35bedaf5bdde3eb6870f5ce95c8460fc23",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/19/4411/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "15e8297d9a67b9e1033c88179d3fe932ef0a9ada",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252596155 | pes2o/s2orc | v3-fos-license | A novel Machine-Learning method for spin classification of neutron resonances
The performance of nuclear reactors and other nuclear systems depends on a precise understanding of the neutron interaction cross sections for materials used in these systems. These cross sections exhibit resonant structure whose shape is determined in part by the angular momentum quantum numbers of the resonances. The correct assignment of the quantum numbers of neutron resonances is, therefore, paramount. In this project, we apply machine learning to automate the quantum number assignments using only the resonances' energies and widths and not relying on detailed transmission or capture measurements. The classifier used for quantum number assignment is trained using stochastically generated resonance sequences whose distributions mimic those of real data. We explore the use of several physics-motivated features for training our classifier. These features amount to out-of-distribution tests of a given resonance's widths and resonance-pair spacings. We pay special attention to situations where either capture widths cannot be trusted for classification purposes or where there is insufficient information to classify resonances by the total spin $J$. We demonstrate the efficacy of our classification approach using simulated and actual $^{52}$Cr resonance data.
I. INTRODUCTION
Neutron scattering and reaction data for neutron energies ranging from 10 −5 eV to 20 MeV are needed for simulations of nuclear systems in nuclear fission and fusion energy production, stockpile stewardship, nonproliferation, etc. [1]. For energies below that typical of fission neutrons, ∼ 1 MeV, normally only elastic and capture (and fission for actinides) channels are open. For all but the lightest nuclei, these reaction channels all exhibit strong resonant structure that we identify with the energy levels of the compound nucleus formed by the capture of the neutron into the target state [2].
The double differential capture or elastic scattering cross sections are completely determined by the set of resonance energies, the decay widths to each of the observed reaction channels, and the incident neutron orbital angular momentum L and total angular momentum J characterizing these reaction channels, when described using R-matrix theory [3,4]. We cannot predict the energies and widths of the resonances in any nuclei other than the lightest systems with current theoretical and computational approaches. The resonance energies and widths must be determined by fitting experimental transmission or cross section measurements. To complicate matters, the shape of the R-matrix fitting function is heavily dependent on the quantum numbers (L, J) assigned to the particular resonance.
Codes, such as SAMMY [5] and REFIT [6], use a Generalized Least-Squares Fitting routine derived from a linearized version of Bayes' Equation. Conventional evaluations based on SAMMY or REFIT require significant preparation by an evaluator to establish reliable prior estimates of the widths, energies and (L, J) quantum numbers of the resonances, ensuring that one is sufficiently close to the χ2 minimum for the fit to be well founded. Unfortunately, the sheer number of known resonances in a typical evaluation makes this endeavor tedious and time-consuming. Furthermore, this step of the evaluation is subjective, relying on the experience of the evaluator, and it is therefore hardly reproducible. This fact leads to significant amounts of unquantified uncertainty in the final evaluation.
There are a number of experimental techniques that can help determine the incident neutron orbital angular momentum L and the total angular momentum J of each resonance including study of the low-energy γ-ray cascades from neutron capture events detected by Ge-Li detectors, γ-ray multiplicity methods, and measurements with polarized neutron beams and polarized targets. In the best case, angular distribution data for scattered neutrons or emitted photons are available and can be used to determine the L and J of the resonance. Between the hundreds, or thousands, of resonances per nuclide in an evaluation and the technical complexity of some of these techniques, they are often not used in practice. Fig. 1 shows a representation of measured cross-section data where two distinct resonance shapes are observable: a wide and asymmetric shape corresponding to s-wave (L = 0) resonances and a narrow and symmetric resonance shape corresponding to p-wave (L = 1) resonances. Note, however, that the visible distinction in the experimental data between the two shapes diminishes with increasing incident neutron energy.
The current practice world-wide is for the resonance evaluator to visually inspect the experimental cross-section, yield or transmission data, such as in Fig. 1, sometimes for thousands of resonances, and make the spin assignments for each resonance. As mentioned before, this part of the process i) is very time consuming for the evaluator, ii) is not fully reproducible, iii) does not result in an uncertainty estimate on the correct resonance spin assignment, and iv) has significant impact on the angular distributions and, therefore, on the modeling of neutron transport in nuclear systems. Furthermore, visual inspection of the resonance shape in experimental cross-section data can only determine the orbital angular momentum L (s-wave, p-wave) corresponding to each resonance and not the total angular momentum J. The evaluator is left to choose the total angular momentum by observing small changes in the interference pattern between resonances of the same orbital angular momenta.
Moving beyond a pure experimental approach, there are some early attempts at information-theoretic techniques for resonance spin classification. Ref. [7] was the first to suggest using random matrix theory (RMT) inspired metrics to determine the fraction of missing levels using stochastically generated resonances. Ref. [8, p. 81] suggests probabilistic assignment based on consideration of the width distribution. This concept is expanded on by Mitchell et al. [9]. The series of papers by Mulhall et al. examines the use of the ∆3 statistic to infer the purity of a spin sequence [10-13]. Finally, there is a pair of reports by Mitchell and Shriner estimating the fraction of missing or misclassified resonances [14,15] using various RMT-inspired metrics.
In this study, we aim to develop a more reliable, automated and reproducible method through the utilization of a variety of standard machine-learning classification algorithms. The classifiers used in this study can be found in the scikit-learn python module [16]. In recent years, many statistical and computational tools that aim to mimic the way the human brain identifies patterns and learns to solve problems, broadly named Machine-Learning (ML) methods, have been optimized and packaged for general use. These have been applied to an extremely wide variety of applications.
Our goal is to leverage such methods and foundations to develop a new and reproducible approach to the classification of neutron resonances. This paper is organized as follows. In Section II, we review the relevant statistical and average properties of neutron resonances. Using these properties, we develop in Section III a set of machine-learning features that allow us to recast the quantum number assignment problem as a machine-learning problem. In Section IV, we apply our machine-learning approach to n+52Cr neutron resonances. In Section V, we provide a summary and outlook. As a reference, we present in Appendix A the definitions of ML terms and concepts used throughout the text.
II. STATISTICAL PROPERTIES OF RESONANCES
Although the experimental situation is complicated, there are results from both nuclear reaction theory and RMT that will make our classification problem more tractable.
Here we do not aim for a review of theory of neutron resonances as there are many other sources for that (e.g., Refs. [8,17]). Rather, we highlight results that impact our resonance classification task.
A. JLS coupling
Our classification scheme focuses on the L and J quantum numbers. As R-matrix analysis of neutron resonances is nearly always done using the JLS coupling scheme [3], it is useful to expand on it. The JLS scheme describes the coupling of the incident neutron, with orbital angular momentum L and spin 1/2, and the target nucleus spin I_t to the total angular momentum J.
In the JLS coupling scheme, the two particles participating in a reaction channel have their spins coupled to a total channel spin S. For an entrance channel with target nucleus spin and parity I_t^Π and incident neutron with spin and parity I_n^π = 1/2^+, the total channel spin may take values S = |I_t − I_n|, . . . , I_t + I_n. Since neutrons have spin 1/2, this limits S to at most two values, S = |I_t − 1/2| and I_t + 1/2. For a spin zero nucleus, only S = 1/2 is allowed.
The total angular momentum J may then take values J = |L − S|, . . . , L + S. For a spin zero nucleus, J is limited to 1/2 for s-wave resonances (L = 0). For L > 0, J takes two values, L − 1/2 and L + 1/2. For higher spin target nuclei, J can take many values.
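The sketch below makes this bookkeeping concrete: it enumerates the allowed (S, J) pairs for a given target spin I_t and neutron orbital angular momentum L. Parity selection rules, which prune this list further, are deliberately ignored in this toy example.

```python
# Sketch of the JLS bookkeeping above: enumerate allowed (S, J) pairs for a
# given target spin I_t and neutron orbital angular momentum L. Parity
# selection rules are ignored here.
from fractions import Fraction

def allowed_spins(I_t, L):
    I_n = Fraction(1, 2)                  # neutron intrinsic spin
    I_t = Fraction(I_t)
    S_min = abs(I_t - I_n)
    S_values = [S_min + k for k in range(int(2 * min(I_t, I_n)) + 1)]
    pairs = []
    for S in S_values:
        J_min = abs(L - S)
        pairs += [(S, J_min + k) for k in range(int(L + S - J_min) + 1)]
    return pairs

# Spin-zero target: s-wave allows only J = 1/2; p-wave allows J = 1/2, 3/2.
print(allowed_spins(0, 0))   # [(1/2, 1/2)]
print(allowed_spins(0, 1))   # [(1/2, 1/2), (1/2, 3/2)]
```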
Additional consideration of the parity of the neutron and target limits the potential values of J somewhat but does not change the essential problem that there are usually many possible values of J for a given L. Fröhner provides a table of allowed values in Ref. [8]. In any event, these considerations of angular momentum limit the possible labels we can assign to a resonance sequence to a tractable number. In some cases, these considerations completely determine the J value for a given L, at least in the case of nuclei with a spin zero ground state.

FIG. 1. A portion of a typical resonance region cross section; namely, the elastic cross section for 238U extracted from the ENDF/B-VIII.0 evaluated file [1], as an illustrative representation of resonance properties. In this figure we show several L = 0, 1, and 2 resonances and label the spacing D_L between a pair of L = 0 and a pair of L = 1 resonances, respectively. We note that it is often quite difficult to discern between an L = 1 and an L = 2 resonance. Determining the J quantum number is significantly more challenging, as indicated in the main text.
B. Random matrix theory
Within a sequence of resonances with the same L and J (and perhaps S), which defines a spingroup, the question arises as to whether there are qualities of the resonances and/or the entire sequence that can inform the classification task. The answer is affirmative if one considers the direct results of RMT.
In random matrix theory, we make a bold and somewhat surprising assumption about the compound nuclear states and their couplings to the outside space: we assume that the Hamiltonian governing the system's couplings between states obeys all relevant symmetries (so it is invariant under an orthogonal transformation) but is otherwise made of random numbers drawn from a normal distribution. The collection of all such Hamiltonians with a given dimension and coupling scale D is the Gaussian Orthogonal Ensemble (GOE). It can be shown that the eigenvalues of these GOE Hamiltonians (which we identify with compound nuclear states and hence resonance energies) have a joint probability density given by [18,19]

P(E_1, . . . , E_N) = N_0 (∫ dO) ∏_{μ<ν} |E_μ − E_ν| exp(−Σ_μ E_μ² / 4λ²).   (1)

Here dO is the Haar measure of the integral over the orthogonal group, N_0 is a normalization constant, N is the dimension of the space (assumed large), E_μ are the eigenvalues of the Hamiltonian H, and the constant λ = N D/π with D being the mean spacing between states. By itself, Eq. (1) cannot be used as a ML feature in our problem. Even for small N, the probability of a given configuration of energies is numerically very small, even if a particular configuration has a high relative probability compared to other configurations. Thus, use of this as a feature would be plagued by numerical precision issues.
Eq. (1) can be used to derive correlations between the resonance energies of spingroup sequences of nearly any length. This will allow us to develop classification features that are "local" in that they depend only on a resonance and its nearest neighbors in the sequence. Thus, classification errors in the sequence far from a given resonance will not impact its own classification. The most interesting correlations for our purposes are the short-range correlations characterized by the nearest neighbor spacing distribution (NNSD) and the spacing-spacing distribution (SSD). Eq. (1) alone does not fully motivate the last interesting set of correlations, the width distributions, as we will discuss below.

a. Nearest neighbor spacing distribution (NNSD) - The spacing between the n-th resonance and the (n + 1)-th resonance is D_n = E_{n+1} − E_n. From Eq. (1), one can show that the distribution of the D_n follows a distribution colloquially known as "Wigner's surmise" [19]:

P_NNSD(x) = (π x / 2) exp(−π x² / 4).   (2)

Here x = D/D̄, where D̄ is the average spacing. We note several things about this distribution: it favors spacings approximately near D̄; the fact that it approaches zero for small spacings elegantly explains level repulsion; and it does not forbid large spacings, but strongly discourages them. In this way, Wigner's surmise prefers a "picket fence" like sequence of resonances within a spingroup. We note that a spacing distribution made of resonances from many spingroups will destroy the correlations encoded in Wigner's surmise and the nearest neighbor spacing distribution will tend toward a Poisson distribution.
b. Spacing-spacing distribution (SSD) - A slightly longer range correlation is the spacing-spacing correlation, denoted ρ:

ρ_n = (D_n − D̄)(D_{n+1} − D̄) / ⟨(D − D̄)²⟩.   (3)

The distribution of spacing-spacing correlations P_ssc(ρ) is not known analytically but has been mapped out numerically [19]. The mean spacing-spacing correlation is known to be ρ̄ = Σ_n ρ_n / N ≈ −0.27. The implication of the average anticorrelation between spacings is that resonance spacings tend to follow a short-long-short-long pattern.

c. Channel width distributions (CWD) - We can imagine expanding our random Hamiltonian to include random couplings to continuum states outside of the considered space, then looking to the poles of the resulting scattering matrix [20]. This train of reasoning eventually explains the empirically known Porter-Thomas distribution of resonance widths [17,21]:

P_CWD(x) = [(ν/2)^{ν/2} / Γ(ν/2)] x^{ν/2 − 1} exp(−ν x / 2).   (4)

Here x = Γ/Γ̄ (where Γ̄ is the average width). We may also write this in terms of the reduced width amplitudes, x = γ²/γ̄², where Γ = 2P_c γ² and P_c is the penetrability factor for the channel c in question [8,17]. This distribution is a χ² distribution with ν degrees of freedom, where ν represents the number of channels coupled to this spingroup with matching quantum numbers. For moderate to large ν ≳ 5, width distributions peak at the average channel width. For small ν, width distributions are strongly peaked toward zero widths. This complicates fitting width distributions, mainly because small-width resonances are more likely to be lost in the noise of an experiment.
For elastic scattering, ν el = 1 and, owing to the strong energy dependence of the neutron penetrability factor, one typically uses reduced width amplitudes to avoid bias. For capture, in which the compound nucleus can couple to a very large number of states below it, ν γ is assumed to be very large (ν γ → ∞). In practice, we may also determine ν γ from a fit to the width distribution, provided detailed capture width data is available. For fission, ν f is observed empirically to lie around 2-3 [17].
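For intuition, the minimal sketch below samples these two distributions, assuming the reconstructed forms of Eqs. (2) and (4) above: Wigner-surmise spacings via inverse-CDF sampling and Porter-Thomas widths as a rescaled chi-squared variable. The seed and sample sizes are arbitrary.

```python
# Minimal sketch: draw Wigner-surmise spacings by inverse-CDF sampling and
# Porter-Thomas widths as a rescaled chi-squared variable.
import numpy as np

rng = np.random.default_rng(42)

def wigner_spacings(n, D_mean=1.0):
    # CDF of Wigner's surmise: C(x) = 1 - exp(-pi x^2 / 4)
    u = rng.uniform(size=n)
    return D_mean * np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

def porter_thomas_widths(n, G_mean=1.0, nu=1):
    # chi^2 with nu dof, rescaled so the mean width is G_mean
    return G_mean * rng.chisquare(nu, size=n) / nu

D = wigner_spacings(100_000)
G = porter_thomas_widths(100_000, nu=1)
print(D.mean())                  # ~1.0: unit mean spacing by construction
print(G.mean(), np.median(G))    # mean ~1.0; median well below the mean,
                                 # i.e., many small widths for nu = 1
```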
C. Energy dependence of average resonance parameters
The correlations we seek to exploit from RMT rely on knowing the average widths or mean spacings for resonances within a spingroup. Here we quickly review relevant results. We will remind the reader that the mean spacing and the average widths vary slowly on the energy scales of the typical resonance width or inter-resonance spacing. Thus, we can use an entire resonance sequence to determine these parameters without worrying about an energy dependent bias.
a. Average level spacing -Phenomenologically, we know that for light nuclei, the average spacing D is of the order of ∼ MeV, so there are very few resonances, and our classification algorithm should not be applicable. A direct fit with R matrix code is the best option and, as there are very few resonances, there is no real need for automation. For medium mass nuclei, D ∼ keV, so there are enough resonances to enable robust classification by L and a potential for classification by J. Here we can begin to address poor classification of resonances at high energy that impact neutron capture and leakage. For heavy nuclei, D ∼ eV, so there are many resonances very close together. This is the ideal situation for our classification code. The average level spacing is inversely proportional to the level density for the corresponding spins and parities. From consideration of back-shifted Fermi gas models of level density, we expect the energy dependence of D(E) to be rather weak and only noticeable on energy scales of ∼ MeV [17,22].
b. Average neutron width - The neutron (or elastic) width of a given resonance is directly related to the reduced width [8,17]:

Γ_nc = 2 P_c(|E_n|) γ²_nc.   (5)

Here the neutron penetrability factor P_c is related to the imaginary part of the logarithmic derivative of the neutron-target relative wavefunction at the channel radius boundary a_c in the R-matrix approach [8]. In the case of neutron projectiles, the penetrability only depends on the orbital angular momentum L. Thus we have a handle on the average neutron width through the average reduced neutron width γ̄²_el. The average reduced neutron width γ̄²_el is independent of the incident energy and all energy dependence of the average neutron width comes from the penetrability factor, whose energy dependence is weak on the energy scales of the inter-resonance spacing. Also, the average reduced width is proportional to the pole strength, s_c = γ̄²_c/D̄, and, therefore, the neutron strength function, S = 2 k_c a_c s_c √(1 eV/E) = 2 k_c a_c (γ̄²_c/D̄) √(1 eV/E) [8,17]. Here k_c is the neutron wavenumber and a_c is the channel radius in the R-matrix formalism. This suggests that we can compute the average width directly from either systematics or using an optical model calculation. Either way, it varies slowly on the energy scale of interest, so we may take it as constant. While reduced neutron widths may be slowly varying with energy in accordance with the neutron strength function, the average neutron width increases with energy on average because of the additional factor of the neutron penetrability.
c. Average capture width - The gamma width of a given resonance can be written in terms of a penetrability in a way analogous to neutrons, but using a very different language:

Γ_γXLn = 2 P_XL(ε_γ) γ²_γXLn.   (6)

Here ε_γ is the energy of a specific gamma and equals the difference in energy of the resonance n (including the separation energy) and a given state in the residual nucleus, and γ²_γXLn is the reduced width amplitude squared for the particular gamma with multipolarity XL from resonance n.

Unfortunately, it is rare that transitions from a resonance to a given state in the residual are measured. More often we only measure the total radiative width of a resonance,

Γ_γXLn = Σ_γ Γ_γXLn(ε_γ).   (7)

Here the sum runs over all gamma transitions starting from resonance n and having the same multipolarity XL. Thus, while Γ_γXLn (single γ) in Eq. (6) would be distributed by a χ² distribution with ν_γ = 1, the same cannot be said for the total radiative widths Γ_γXLn. Usually the direct average of the measured widths is all that can be determined empirically and the fluctuations in the capture widths are strongly damped. In these cases, the large number of open capture channels causes ν_γ → ∞ and the capture width distribution approaches a delta function. On the other hand, for closed shell or light nuclei, one may expect ν_γ to be rather small. Nevertheless, starting from Eq. (6), one can relate the average gamma width to the gamma strength function in analogy with the neutron strength function [17]:

f_XL(ε_γ) = ⟨Γ_γXL(ε_γ)⟩ / (ε_γ^{2L+1} D̄).   (8)

Here ε_γ is the gamma energy, XL is the gamma multipolarity and f_XL is the gamma strength function. ε_γ and f_XL vary slowly on the energy scale of D̄ [17].
Unfortunately, it is rare that transitions from a resonance to a given state in the residual are measured. More often we only measure the total radiative width of a resonance Here the sum runs over all gamma transitions starting from resonance n and having the same multipolarity XL. Thus, while Γ γXLn (single γ) in Eq. (6) would be distributed by χ 2 distribution with ν γ = 1, the same cannot be said for the total radiative widths Γ γXLn . Usually the direct average of the measured widths is all that can be determined empirically and the fluctuations in the capture widths are strongly damped. In these cases, the large number of open capture channels causes ν γ → ∞ and the capture width distribution to approaches a delta function. On the other hand, for closed shell or light nuclei, one may expect ν γ to be rather small. Nevertheless, starting from Eq. (6), one can relate the average gamma width to the gamma strength function in analogy with the neutron strength function [17]: Here γ is the gamma energy, XL is the gamma multipolarity and f XL is the gamma strength function. γ and f XL vary slowly on the energy scale of D [17]. d. Average fission width -The average fission width is expected to be related to the fission barrier penetration probability and in the Hill-Wheeler approach, is estimated to be [17] Here V f is the fission barrier height and ω is related to the curvature of the barrier. For actinides, ω is typically ∼ 0.5 MeV and V f ∼ 5 − 6 MeV [22] so the average fission width is also slowly varying. As our understanding of the fission channel is still very much phenomenological, we cannot write the widths in terms of a "fission penetrability" factor.
III. RECASTING SPINGROUP ASSIGNMENT AS A MACHINE LEARNING PROBLEM
We assume we have a collection of N resonances, each one of index n with an associated energy E_n, a prior spingroup assignment (L_n^prior, S_n^prior, J_n^prior), and widths associated with each open channel Γ_el,n, Γ_γ,n, and possibly Γ_f,n. In the language of machine learning, we seek to reclassify the resonances according to labels (in our case, the L or both L and J of a sequence) using a series of quantities built from this collection of resonances that we believe capture important distinguishing characteristics of the data. These quantities are called features. In subsection III A, we describe our use of labels, and in subsection III B, our feature choices.
All classifiers require a training step in order to make reliable predictions. This step can be as simple as fitting a function or as involved as a complex statistical study of the input features. While the nature of this training is algorithm-dependent, we require labeled data that can be used to perform this training. Once the classifier is trained, we use a second set of data to validate the quality of the now-trained classifier. Subsection III C describes our training data and our training strategy in this initial study.
Each classification algorithm has its own strategy and pros and cons. In subsection III D, we discuss our classifier choice and how we optimize its operation.
A. Labels
We seek to assign the quantum numbers L and J (and by extension S) to a sequence of resonances. Collectively we refer to the full set of quantum numbers as the "spingroup" of the resonances. In general, it is much easier experimentally to assign L than J. Often the correct L can be assigned on the basis of the shape of a resonance; this is particularly true of s-wave resonances. The J quantum number is usually assigned using a shape analysis of the outgoing neutron angular distributions in a scattering experiment, a detailed study of the post capture gamma cascade in a capture experiment, or some other complex and expensive experiment or series of experiments. To complicate the situation, multiple J are possible for a given L, each with no obvious distinguishing characteristic other than the interference pattern between resonances with the same quantum numbers. As a result, in some situations, we may not have enough information to reliably classify resonances by the J quantum number.
Given this situation and the fact that we are using the classifiers in the scikit-learn package [16], we either label by L or by spingroup. We will note below that certain features only make sense when classifying by spingroup, as the features require "pure" sequences corresponding to resonances with common quantum numbers. We have considered a multistage approach where we first classify by L, then by full spingroup, but this would be outside the scope of the present work and it is thus not discussed in this paper.

B. Features

Table I lists the features used for classification in our approach. The overall principle is that we define enough relevant features to characterize the resonances within their spingroups, at the same time that we avoid overloading the classifier with redundant or nondiscriminative features. We have experimented with a much larger feature list [23], and detailed studies using the SHAP metric [24,25] demonstrated that only a handful of targeted features are needed. The features J_prior and L_prior can be thought of as labels that may be "overridden" in the classification process and should be viewed as prior estimates based on experimental inference.

Several of the features test whether a given feature value is consistent with a known distribution, otherwise known as Out-Of-Distribution (OOD) tests [26]. These tests require knowledge of the feature distributions and, therefore, we exploit the predictions of RMT as discussed above: the tendency of resonances to be relatively evenly spaced with a spacing distribution given by the Wigner surmise and the tendency of spacings to follow a "short-long-short-long" pattern. The final group of features exploits the known or expected width distributions of the resonances.
TABLE I: List of features used by our classifier. The "Labels" column denotes whether the particular feature is used when classifying by L alone ("L") or by the full set of spingroup quantum numbers ("sg"). The "Indep. Params." column lists the resonance-independent parameters needed to compute this feature. Similarly, the "Dep. Params." column lists the parameters of a given resonance (or neighboring resonance) needed to compute this feature.

L_prior [Labels: L, sg; Indep. Params.: n/a; Dep. Params.: L_n]: The orbital angular momentum of the n-th resonance. Assigned prior to classification.

J_prior [Labels: sg; Indep. Params.: n/a; Dep. Params.: J_n]: Total angular momentum of the n-th resonance. Assigned prior to classification.

pos/len [Labels: L, sg; Indep. Params.: n/a; Dep. Params.: n/N]: The position n of the resonance within the sequence divided by the length of the sequence N. Experimentally, resonances of higher energy are more likely to be misplaced or missed, so this feature is a way to predict whether or not a resonance in a given region of the sequence may be problematic. We note that for the training data described below, pos/len does not help, as the training data is not biased in this way.

d(D_left): The quadratic difference (see Eq. (14)) between the n-th spacing and the average spacing. Small values signal OOD.

d(D_right): The quadratic difference (see Eq. (14)) between the (n + 1)-th spacing and the average spacing. Small values signal OOD.

p(D_right): For the current energy E_n, this is the signed P-value (see Eq. (13)) for the spacing between the current energy and the next higher energy.

d(ρ): The quadratic difference (see Eq. (14)) between the n-th spacing-spacing correlation and the expected correlation coefficient.

d(Γ_el): The quadratic difference (see Eq. (14)) between the n-th elastic width and the average elastic width, |Γ_el,n − Γ̄_el|²/Γ̄²_el. Small values signal OOD. In the future, we will explore the use of the p-value to replace this feature.

d(Γ_γ): The quadratic difference (see Eq. (14)) between the n-th capture width and the average capture width, |Γ_γ,n − Γ̄_γ|²/Γ̄²_γ. Small values signal OOD. We use this rather than the p-value because this feature is insensitive to uncertainty or bias in ν_γ.

We note the features d(D_left) and d(D_right) (and by extension p(D_left), p(D_right)) are poor proxies for d(ρ), but can be used when classifying by L alone.
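To make the workflow concrete, below is a hedged end-to-end sketch, not the authors' implementation: it builds a toy two-class resonance ladder, computes a few local features in the spirit of Table I, and trains a scikit-learn classifier. All distribution parameters and the classifier choice are illustrative; a real study would evaluate on held-out data rather than the training set.

```python
# Hedged end-to-end sketch: toy two-class ladder, a few Table-I-like local
# features, and a scikit-learn classifier. All parameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def ladder(n, D_mean, G_mean, label):
    spacings = D_mean * np.sqrt(-4 * np.log(1 - rng.uniform(size=n)) / np.pi)
    E = np.cumsum(spacings)                 # Wigner-surmise energies
    G = G_mean * rng.chisquare(1, size=n)   # Porter-Thomas widths, nu = 1
    return E, G, np.full(n, label)

E0, G0, y0 = ladder(500, D_mean=1.0, G_mean=1.0, label=0)  # "s-wave"-like
E1, G1, y1 = ladder(500, D_mean=2.0, G_mean=0.2, label=1)  # "p-wave"-like
E = np.concatenate([E0, E1])
G = np.concatenate([G0, G1])
y = np.concatenate([y0, y1])
order = np.argsort(E)
E, G, y = E[order], G[order], y[order]

# Local features: left/right neighbor spacings and a distance-to-mean width
# test, d(x) = |x - x_mean|^2 / x_mean^2, as in Eq. (14).
D_left = np.diff(E, prepend=E[0])    # first entry has no left neighbor
D_right = np.diff(E, append=E[-1])   # last entry has no right neighbor
d_width = np.abs(G - G.mean()) ** 2 / G.mean() ** 2
X = np.column_stack([D_left, D_right, d_width])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```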
Out-Of-Distribution Tests
We need a mechanism to test whether a given value of x is consistent with a given distribution or not. In other words, an "out-of-distribution" (OOD) test [26].
In the subsequent OOD feature definitions, we adopt the following. For a given Probability Density Function (PDF) P(x) defined on the interval (x_min, x_max), we define the Cumulative Distribution Function (CDF)

C(x) = ∫_{x_min}^{x} P(x') dx'   (10)

and the Survival Function (SF)

S(x) = ∫_{x}^{x_max} P(x') dx' = 1 − C(x).   (11)

For the OOD tests considered, we use one of four classes of metrics:

1. The CDF or SF value itself, C(x) or S(x).

2. The P-value,

p(x) = C(x) for x ≤ x̄, S(x) for x > x̄,   (12)

which gives the probability that a more extreme value of x may be drawn. Here x̄ is the mean of the distribution in question.

3. The "signed" P-value,

p_s(x) = sgn(x − x̄) p(x).   (13)

For the spacing distribution, it is useful to distinguish whether a spacing is too small (indicating an extra resonance is found in the current sequence) or too large (indicating a resonance is missing from the current sequence).

4. The distance to mean, normalized by the mean value to remove the overall scale from the metric,

d(x) = |x − x̄|² / x̄².   (14)

The various OOD testing features are illustrated in Figures 2 and 3. In other Extreme Value Testing (EVT) methods, one assigns a criterion for an OOD data point (say, more than 3-sigma). Here we use the OOD metric as a feature and provide properly labeled training data so that the classifier can learn what criteria should be used for OOD detection.
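The sketch below implements these four metric classes for the Wigner-surmise spacing distribution, whose CDF is available in closed form; note that Eqs. (12)-(14) are used here as reconstructed above, so the exact conventions may differ from the authors'. Here x is the spacing in units of the mean spacing.

```python
# Sketch of the four OOD metric classes for the Wigner-surmise spacing
# distribution; Eqs. (12)-(14) are implemented as reconstructed above.
import numpy as np

X_MEAN = 1.0                     # the Wigner surmise has unit mean

def cdf(x):                      # C(x) = 1 - exp(-pi x^2 / 4)
    return 1.0 - np.exp(-np.pi * np.asarray(x) ** 2 / 4.0)

def sf(x):                       # survival function S(x) = 1 - C(x)
    return np.exp(-np.pi * np.asarray(x) ** 2 / 4.0)

def p_value(x):                  # probability of a more extreme value
    x = np.asarray(x)
    return np.where(x <= X_MEAN, cdf(x), sf(x))

def signed_p_value(x):           # sign flags too-small (-) vs too-large (+)
    x = np.asarray(x)
    return np.sign(x - X_MEAN) * p_value(x)

def dist_to_mean(x):             # quadratic distance, normalized by mean
    return np.abs(np.asarray(x) - X_MEAN) ** 2 / X_MEAN ** 2

for x in (0.05, 1.0, 3.0):
    print(x, p_value(x), signed_p_value(x), dist_to_mean(x))
```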
Spacing features
Several feature distributions require a predetermined value of the average spacing D̄_LJ for each spingroup. We can achieve this in several ways:

1. Direct averaging of spacings

2. Fit the cumulative level distribution to extract 1/D̄

3. Fit the nearest neighbor spacing distribution to Wigner's surmise distribution

4. Take values from a pre-existing compilation

FIG. 2. Spacing distribution OOD features. The use of the "signed" p-value allows us to distinguish between overly small spacings (indicating one or more extra resonances in the sequence) and overly large spacings (indicating one or more missing resonances from the sequence).
Options #1-3 can be performed as an initial training step for our classifiers or even iteratively improved as we reclassify resonances. Both #2 and #3 can be achieved by fitting empirical distributions (either the cumulative level distribution for #2 or the cumulative Wigner surmise distribution for #3). We note that the breadth of the Wigner surmise means that #3 converges slowly as the number of spacings increases. Options #1 and #2 may also be used if one does not have robust J assignments to determine D̄_L. Simple consideration of the number of energies on a given interval leads one to the following sum rules relating the mean spacing of the full sequence D̄, of the subsequence of resonances with a given orbital angular momentum D̄_L, and of the subsequence of resonances within a spingroup D̄_LJ:

1/D̄ = Σ_L 1/D̄_L   (15)

and

1/D̄_L = Σ_J 1/D̄_LJ.   (16)
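A small sketch of option #2 and the sum rule check is given below; exponential spacings are used purely for brevity (Wigner-surmise spacings give the same mean behavior), and all sequence parameters are toy values.

```python
# Sketch of option #2: estimate 1/D_mean from a straight-line fit to the
# cumulative level count N(E), then check the density sum rule (15).
import numpy as np

rng = np.random.default_rng(1)

def mean_spacing_from_cumulative(E):
    # N(E) is a staircase; its average slope is the level density 1/D_mean
    slope, _ = np.polyfit(np.sort(E), np.arange(1, len(E) + 1), 1)
    return 1.0 / slope

E_L0 = np.cumsum(rng.exponential(2.0, size=400))   # toy sequence, D_mean ~ 2
E_L1 = np.cumsum(rng.exponential(3.0, size=400))   # toy sequence, D_mean ~ 3
Emax = min(E_L0[-1], E_L1[-1])                     # common energy window
E_L0, E_L1 = E_L0[E_L0 < Emax], E_L1[E_L1 < Emax]

D0 = mean_spacing_from_cumulative(E_L0)
D1 = mean_spacing_from_cumulative(E_L1)
D_all = mean_spacing_from_cumulative(np.concatenate([E_L0, E_L1]))
print(1 / D_all, 1 / D0 + 1 / D1)   # densities add: Eq. (15), approximately
```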
Width features
Many feature distributions require knowledge of the average width Γ̄ and the number-of-degrees-of-freedom parameter ν of the appropriate Porter-Thomas distribution. We will approach each width and ν pair the same way. As a technical aside, small-width resonances tend to be missed experimentally, and we need a method for determining these widths that is robust against this bias. When determining the average widths, we fit the width survival function of the Porter-Thomas distribution. By integrating from large to small widths, the dominant part of the integral comes from the region in widths that is most accurately determined experimentally. This also can be used to yield the ν for the fission channels and the total width.
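The sketch below illustrates this survival-function fit: it recovers the average width and ν by matching the empirical width survival curve to the scaled chi-squared survival function. The truth values and fit bounds are arbitrary illustrations, not evaluated data.

```python
# Sketch of the survival-function fit: recover G_mean and nu by matching
# the empirical width survival curve to the scaled chi^2 survival function.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(7)

def pt_sf(width, G_mean, nu):
    # P(Gamma > width) for a Porter-Thomas (chi^2_nu) law with mean G_mean
    return chi2.sf(nu * width / G_mean, df=nu)

widths = 0.5 * rng.chisquare(1, size=2000)   # truth: G_mean = 0.5, nu = 1
w_sorted = np.sort(widths)
emp_sf = 1.0 - np.arange(1, len(w_sorted) + 1) / len(w_sorted)

(G_fit, nu_fit), _ = curve_fit(pt_sf, w_sorted, emp_sf, p0=[1.0, 2.0],
                               bounds=([1e-6, 0.1], [10.0, 50.0]))
print(G_fit, nu_fit)   # should recover roughly 0.5 and 1
```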
For elastic reactions, ν is assumed to be unity when classifying by spingroup or the number of allowed J values when classifying by L. Also, when fitting elastic width distributions, we can either fit the experimental width distribution or the reduced neutron width distribution. We note that the presence of doorway states may distort the neutron width distribution [27]. We may explore this effect in future work.
For neutron capture, the width distribution is often very narrow and ν_γ → ∞. In this case, it is appropriate to directly average the capture widths. We note that in many older data sets, the capture widths were assigned based on the average widths, which can introduce serious bias in the classification. To counter this bias, we implemented in our codes the option to "turn off" capture widths as an active feature. When the distribution is not so narrow, we must approach the capture distribution in the same manner as elastic or fission widths.
C. Training
Supervised machine learning algorithms, such as those used in this work, rely on having a large amount of labeled data for training purposes. With this training data, the machine learning algorithm will "learn" the solution physics, without a need for an explicit solution formulation. While experimental resonance data might be used for training, there are several problems with such an approach:

• the number of resonances available for a given nucleus is often only on the order of hundreds, on the borderline of what is needed for robust training
• experimental data is not guaranteed to have correct labeling by either L or spingroup
• experimental data may be missing smaller resonances or have "contamination" by resonances from other nuclei in the target or surrounding experimental apparatus.
Compilations, such as the Atlas of Neutron Resonances [17] and/or evaluations such as the ENDF library [1], are attractive sources of training data, but even these do not always have enough statistics and/or are not guaranteed to have correct labeling either. Thus, we are forced to consider synthetic training data.
Synthetic data can be constructed in a way nearly indistinguishable from real data and can be generated from the well-understood statistical properties of nuclear scattering physics described in Section II B. In Ref. [28], the authors describe the addition of a stochastic resonance generator to the FUDGE processing system [29]. This tool takes advantage of many known results from GOE random matrices [18][19][20]:

• Realizations are GOE consistent by construction, since a GOE Hamiltonian matrix is generated as the first step in making a resonance realization and the eigenvalues of this matrix provide the resonance energies.
• The eigenvalues of this matrix are not quite the resonance energies, since the mean level spacing D is incorrect. We rescale the eigenspectrum so that the mid-range of the spectrum's level spacing matches the required D.
• The widths are drawn from a Porter-Thomas distribution as in traditional ladder generators found in nuclear data processing codes.
• Pointwise cross sections can be reconstructed from this resonance realization using any level of approximation to the R-matrix. Although we could use the Reich-Moore approximation, as it is generally regarded as the most appropriate and accurate approximation for nuclei with Z > 10, we do not need the reconstructed cross section for this project.
In order to simulate the quantum number misassignments seen in the real world, we randomly misassigned a fraction of the resonances in these synthetic sequences. The fraction of reassigned resonances can be varied to test the reliability of our method. Because such reassignments occur independently of either resonance energy or width, they do not currently fully mimic actual experimental effects. We also do not consider other experimental effects, such as resonance energy shifts caused by moderation in the neutron source, Doppler broadening in the target, or target contamination. These and other effects impact the initial resonance quantum number assignments in an uneven way: in a shape analysis, L = 0 resonances are easy to identify, but higher-L resonances have less certain assignments at higher energies. Other methods of spingroup assignment have their own biases. We will explore these experimental impacts in future works. We have considered adding additional metadata to each resonance to help the classifier understand the quality of the spingroup assignment, and this is another topic for a future work.
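A compact sketch of this generation-plus-misassignment scheme (our own simplified stand-in for the FUDGE generator of Ref. [28]; the normalizations and the misassignment rule are assumptions):

```python
# Generate a synthetic resonance ladder: GOE eigenvalues rescaled to the target
# mean spacing D, Porter-Thomas widths, and a random misassignment fraction
# (RMF) applied to the labels.
import numpy as np

rng = np.random.default_rng(42)

def goe_ladder(n_res, D):
    """Resonance energies from the central part of a GOE-like eigenspectrum."""
    n = 4 * n_res                               # oversample; keep the mid-range
    a = rng.normal(size=(n, n))
    h = (a + a.T) / 2.0                         # real symmetric; scale is irrelevant
    eig = np.sort(np.linalg.eigvalsh(h))
    mid = eig[n // 2 - n_res // 2 : n // 2 + (n_res + 1) // 2]
    mid = mid - mid[0]
    return mid * (D * (n_res - 1) / mid[-1])    # mid-range mean spacing -> D

def porter_thomas_widths(n_res, mean_width, nu=1):
    return mean_width * rng.chisquare(nu, size=n_res) / nu

def misassign(labels, rmf, all_labels):
    """Reassign a fraction `rmf` of labels, independent of energy or width.
    (Uniform redraw may occasionally return the original label; the paper's
    exact prescription may differ.)"""
    labels = np.asarray(labels).copy()
    idx = rng.choice(len(labels), size=int(rmf * len(labels)), replace=False)
    labels[idx] = rng.choice(all_labels, size=len(idx))
    return labels
```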
In this first incarnation of a machine learning tool, we used the scikit-learn train_test_split function [16] to split input data into training data and test data. The fraction of data randomly selected for training, with the remaining input data reserved for testing, can be chosen through a parameter in the function call. In the future, we aim to improve the training regimen using a combination of expert knowledge and numerical experimentation.
D. Classifier
The approach presented in Section III A defines labels, and the approach described in Section III B converts sequences of neutron resonances into sets of features which can be fed into any machine learning classifier. As the main focus of the work is on the methodology of spin classification of neutron resonances through machine learning, we employed pre-packaged ML classifiers from scikit-learn [16]. While we performed a preliminary assessment of different classifiers and associated hyperparametrizations, we illustrate the approach with a Multi-Layer Perceptron classifier. Multi-Layer Perceptrons belong to the family of Neural Network algorithms.
This assessment with multiple classifiers and hyperparametrizations was done in a preliminary fashion, using only training and test data sets, with bias mitigated through multiple training events. Ideally, however, independent validation sets should be used in a rigorous optimization, and/or approaches such as K-fold cross-validation [30] should be employed. Nevertheless, the choice of classifier and hyperparameters should not significantly impact the conclusions presented in this work, and the results should be transferable to other choices of classifier and hyperparametrization. Now that the proof of principle is established in this work, we leave the optimization step for a future work.
Multi-Layer Perceptron
As with other supervised learning algorithms, the Multi-Layer Perceptron (MLP) "learns" a function that defines a decision boundary optimizing the separation of data points with different labels. One difference from other ML algorithms, such as logistic regression, is that an MLP can have one or more nonlinear layers, called hidden layers, between the input and the output layers [16,31]. The learning process is done by training on a dataset whose data points are characterized by a set of features and for which the labels are known. The training uses backpropagation [32][33][34], which adjusts the weights in each hidden layer to approximate the non-linear relationship between the input and the output layers.
While the MLP can learn a non-linear function approximator for either classification or regression, we use the MLPClassifier function from scikit-learn [16] solely for classification. Our MLP takes the number of non-linear hidden layers as an input hyperparameter and optimizes the log-loss function using the L-BFGS solver for weight optimization [35]. L-BFGS is a quasi-Newton optimizer that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm while requiring significantly less memory. For smaller datasets, L-BFGS is expected to converge faster and perform better [16] than alternatives such as Stochastic Gradient Descent (SGD) [36] or Adam [37]. Our MLP trains iteratively: at each step, the partial derivatives of the loss function with respect to the model parameters are computed to update the parameters, with the maximum number of iterations also being a model hyperparameter. In our calculations, we ensured convergence relative to the maximum number of iterations. The strength of the L2 regularization term, which is divided by the sample size when added to the loss, can be used to avoid overfitting by introducing a penalty term in the loss function. Apart from the aforementioned hyperparameters, we assumed scikit-learn default values for all other parameters. Performance could likely be improved by testing different classifiers and by performing a grid search to optimize the hyperparametrizations; indeed, preliminary investigations in that direction have been done by the authors. However, the scope of the current work is to define and present the method as a proof of principle. We therefore leave such optimization efforts for a future publication.
The classifiers from scikit-learn are set up to randomly split the input data into training and testing subsets. The algorithm is trained only on the training set, while the testing set serves as a somewhat independent check of the quality of the training process. Because the splitting of data points (resonances) is random, the classifier is trained in each run with a different training data set, leading to slightly different predictions. We define a training seed as the particular training set obtained through a given random split, and a training event as each pass of the input data through the training process, which includes the random split into a training seed and a complementary testing subset; each training event therefore defines a slightly different classifier. For this reason, in the application of the method shown in Section IV, we define an averaged classifier by averaging the performance and predictions of many different training events, each with different training seeds.
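A minimal sketch of this "averaged classifier" idea, using the hyperparameters later quoted in Section IV A (lbfgs solver, alpha = 1.0, max_iter = 2000). Note that hidden_layer_sizes=(20,) is a placeholder: the paper's "20 hidden layers" may correspond to a different architecture.

```python
# Each training event re-splits the data (a new training seed), trains a fresh
# MLP, and the final prediction averages the per-event class probabilities.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def averaged_predictions(X, y, X_eval, n_events=50, train_frac=0.60):
    proba_sum = None
    for event in range(n_events):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_frac, random_state=event)
        clf = MLPClassifier(solver="lbfgs", alpha=1.0, max_iter=2000,
                            hidden_layer_sizes=(20,))
        clf.fit(X_tr, y_tr)
        p = clf.predict_proba(X_eval)
        proba_sum = p if proba_sum is None else proba_sum + p
    return proba_sum / n_events   # average over training events
```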
IV. APPLICATION TO 52 Cr
To assess the efficacy of our approach, we applied our method to the analysis of the 52 Cr resonances from the most recent evaluation for chromium isotopes [38]. The average resonance parameters are presented in Table II. 52 Cr has ground state 0+ spin/parity, so it has five spingroups for 0 ≤ L ≤ 2.
The 52 Cr resonance evaluation in Ref. [38] is taken from the ENDF/B-VIII.0 evaluation published in Ref. [1] and described in Leal et al. [39]. The Leal et al. evaluation is a Reich-Moore fit using SAMMY [5] to a combination of published and unpublished data from ORELA. Below 100 keV, the fit relied on nat Cr (83.789% 52 Cr) data of Guber et al. [40]. Above 100 keV, the evaluation relied on unpublished high-resolution transmission data of Harvey et al. on a pair of enriched 52 Cr samples. No neutron capture or scattering angular distribution data were available above 600 keV; therefore, above 600 keV, the spingroup assignments in Ref. [39] are purely based on a shape analysis and evaluator judgement. Neither data set used in Ref. [39] was used in the Atlas of Neutron Resonances compilation [17].
The ENDF/B-VIII.0 evaluation extends from 10⁻⁵ eV to 1.450 MeV. Above 1.450 MeV, resonances were included mainly to provide background and interference effects to the resonances below 1.450 MeV. This is a common practice in ENDF evaluations and is done to ensure an accurate representation of the reconstructed cross section over the given energy region.
In order to illustrate the approach adopted in the current work, which will be described in detail in the following sections, and to facilitate its understanding, we present in Fig. 4 a flow chart summarizing the steps taken. The reader is encouraged to use Fig. 4 as a guiding reference while reading the text that will follow.
A. Training with synthetic data
We generated a train/test set in accordance with the methods in Section III C. The train/test simulated data consists of 4,823 resonances over an energy range of 0-20 MeV. In Table III, we list the spingroups taken from the ENDF/B-VIII.0 evaluation and the average parameters in the train/test set that correspond to the ENDF/B-VIII.0 spingroups. We note that although ν_γ is known for each ENDF/B-VIII.0 spingroup, we assume ν_γ → ∞ in our train/test data set.
To simulate the misassignments seen in real data, we randomly misassign resonances in the train/test set in accordance with the prescription in subsection III C. In Fig. 5, we show the cumulative level distributions for the L ≤ 2 spingroups for the original simulated set and three different levels of random misassignment. In the following, we refer to the fraction of resonances that receive a random misassignment as the Random Misassignment Fraction (RMF). In each case, we extract the average spacing for the simulated sets. In Fig. 6, we show the extracted average spacing for each L after combining the spacings from each spingroup in accordance with Eq. (16). We note that as the degree of misassignment increases, each spingroup's average spacing tends to the global average value of D_sg = 5 × 4.14 keV = 20.7 keV. Thus, the extracted average spacing tends to 20.7 keV for L = 0 and 10.4 keV for both L = 1 and 2.
With these sets, we trained an MLP algorithm, employing the L-BFGS solver, with the regularizer α set to 1.0, the maximum number of iterations set to 2000, and 20 hidden layers. Unless noted otherwise, the results shown consider 50 training events. Each training event corresponds to the training of the classifier using one random training seed, using the complementary testing dataset for benchmarking the training. In each synthetic set used for training, we randomly reserve 60% of the data points in each training event for the actual training, while 40% is used for testing, as explained in Sec. III D 1. This is done as a way to assess the quality of the training process, i.e., how well the algorithm can be trained to describe the training data set specifically.
The training was performed both with and without the use of features that use the capture widths, categorizing either by L or by full spingroup. Fig. 7 shows examples of the typical confusion matrices that are obtained by the classifier in the training process, taken from a single training event when training with a synthetic sequence with RMF = 50%. We see excellent training performance when capture widths are considered. However, as will be further discussed in the text, this may be due to a strong training bias that may not translate to high-quality predictions if the trained classifier is applied to real resonance data. Many of the aspects seen in Fig. 7 are discussed in more detail later in the current work, where we consider results averaged over many training events and training sequences with different RMFs.
To quantify the performance of the classifier, we calculated accuracies based on the fraction of resonances that have the correct label. We are aware that there are many other important performance metrics (precision, recall, ROC curves, etc.) [41, chapter 3] that would complement the accuracy analysis and help develop a full picture of the results and optimization pathways. However, this being a work focused on the proof of principle of the method, we leave such a more complete analysis for a future work. Fig. 8 shows the average training accuracy of the classifier as a function of the misassigned fraction of the training set for all combinations of label mode option (by L or spingroup) and usage of capture width features. Each curve represents an average obtained with a different number of maximum training events, showing that by 50 training events the accuracy of each run has converged to the corresponding average accuracy.
When using capture widths, whether classifying by L alone or by spingroup, we achieve nearly perfect reclassification. However, even though capture widths can be very discriminative, this may not be a reasonable feature option when applied to the classification of real resonance distributions that may contain biases towards average widths, as discussed in Section II C. Indeed, because we chose ν_γ → ∞, our capture width distributions are essentially delta functions, so a perfect capture width match is needed for a resonance to be considered in the distribution for a given label. When capture width features are not employed, we see a consistent pattern of highest training accuracy for low-misassignment datasets, with average accuracies decreasing as misassignment increases until they flatten or even turn upward for mostly misclassified sets. As a reference, each plot in Fig. 8 shows the "do nothing" line: a line representing the degree of accuracy of the training set (basically the line 1 − RMF), which indicates the overall accuracy of the set if no classification attempt is made. We also plot reference lines corresponding to "random guess", being the accuracy one obtains by randomly selecting among the allowed labels. The accuracies obtained, though, are much higher than the random guess, so these lines are off the scale in Fig. 8.
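The two reference baselines plotted in Fig. 8, written out explicitly (a sketch; `rmf` is the training RMF and `n_labels` the number of allowed labels):

```python
from sklearn.metrics import accuracy_score

def evaluate(y_true, y_pred, rmf, n_labels):
    acc = accuracy_score(y_true, y_pred)   # fraction with the correct label
    do_nothing = 1.0 - rmf                 # accuracy if no classification is attempted
    random_guess = 1.0 / n_labels          # 1/3 classifying by L; 1/5 by spingroup
    return acc, do_nothing, random_guess
```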
For classification by L, the training accuracy decreases from about 99% at low RMF to a minimum of ∼60% average accuracy at 70-80% RMF. It is noteworthy, however, that this is still much more accurate than the "no classification" baseline. On the other hand, when classifying by spingroup, the classifier has more difficulty assigning the correct label, with average training accuracies remaining closer to the "no classification" line at low RMFs and stabilizing at ∼40% for higher RMFs, although still much higher than simply guessing. This is somewhat expected, as there are five possible labels when classifying by spingroup instead of only three with label mode L, making it a more difficult problem to solve.
B. Validating on synthetic data
Once the performance of the classifier during the training process was better understood and benchmarked, we validated the method by applying the fully trained MLP algorithm to a second realization of synthetic data based on 52 Cr. We again implemented random misassignments to this second realization of synthetic data, with RMFs ranging from 1% to 99%. Fig. 9 shows representative results of the validation analysis.
In Fig. 9, we show the validation accuracies averaged over 50 training events for the validation sequences having RMFs of 1%, 50%, and 80%, as a function of the RMFs in the training set, represented by the solid lines. We also display, as dashed lines of corresponding colors, the starting accuracies of each validation sequence (e.g., the validation sequence with 80% RMF is 20% correct). A comparison of the dashed line with the solid line of the same color shows how much the machine-learning classification has improved (or worsened) the set relative to the original resonance sequence. In all cases, we see that the maximum validation accuracies occur when the training sequences have around the same RMFs as the sequences being validated. This is somewhat expected, as those are the cases in which the validation sequences are the most statistically similar to the training sequences. Interestingly though, for classifications both by L and by spingroup, these peaks in accuracy are much sharper when employing capture width features and much smoother when not using them. This indicates that capture widths are very discriminative. However, given the known bias in the use of the capture widths, we focus our discussion on the cases where capture widths were not used as a feature.
Firstly, we shall focus on the case of label mode L without capture widths from Fig. 9, bottom panel. In the case of the validation sequence with RMF of 1% (blue line), the original sequence was already very accurate, and for low RMFs in the training set the classifier preserves that, worsening it minimally with training sequences up to around 20% RMF. Above that, the reclassification accuracy decreases quickly. For a validation sequence of RMF = 50% (red lines), the reclassified sequence is consistently more accurate than the original one, up to a training RMF of around 90%. For the validation sequence with RMF = 80%, the machine-learning algorithm provides a substantially more accurate sequence regardless of the RMF of the training set. This shows that, with the appropriate training set (or range of training sets), the classifier is able to deliver a resonance sequence that is more accurate than the one provided as input. This suggests an iterative process in which, under the appropriate conditions, a sequence of arbitrarily low accuracy in its L assignments could be incrementally improved until fully correct. The development of such an iterative method will be pursued in a future work.
We now turn to the validation results of Fig. 9, second panel from top, corresponding to label mode by spingroup, without capture widths. In this case, similar considerations can be made when the validation sequence initial accuracy is low (meaning high RMF), as is the case of the solid green curve corresponding to RMF = 80%. We see that the reclassified accuracy is consistently better than the original accuracy (dashed green curve) for all values of RMF in the training set. For lower validation RMFs (solid red and blue curves), the accuracies as a function of training RMF are similar to the case of label mode L, although a little lower. Also, the resulting average accuracies seem closer to or lower than the initial accuracies (corresponding dashed lines) in a larger training RMF range, indicating that an iterative process may be trickier for spingroup classification than it would be for label mode L. This may be explained by the fact that classification by spingroup is much more challenging than by L: the number of possible labels is larger, as for each L ≠ 0 there are two allowed spingroups. Still, an iterative method for spingroups may still be effective if one tackles it in two steps: first classifying by L, and later by spingroup within fixed L values. Again, this is outside the scope of this work and will be investigated in the future.

FIG. 8. Training accuracies for different numbers of training events, as a function of the RMF in the training set. Each panel shows a different combination of label mode (L and spingroup) and adoption or not of features related to capture widths. We also show a "No classification" curve corresponding to the original accuracy of the training set (1 minus the training RMF), which is the accuracy if no classification effort is made on that particular resonance sequence. We also plot, although it is off-scale, the "naïve" constant accuracy that one would get if choosing randomly among the allowed labels (1/3 for classification by L; 1/5 for classification by spingroups).

FIG. 9 (partial caption): ..., which is the accuracy if no classification effort is made on that particular resonance sequence, shown in the same color as the validation accuracy for the corresponding sequence. We also plot, although it is sometimes off-scale, the "naïve" constant accuracy that one would get if choosing randomly among the allowed labels (1/3 for classification by L; 1/5 for classification by spingroups).
C. Reclassifying real resonance data
After validating the reclassification method in synthetic data with known RMF, we applied the trained algorithm to the ENDF/B-VIII.0 52 Cr resonance data from Ref. [38].
The first step is to estimate how many training events are needed to allow us to assume the reclassification process has converged. For that, we determined the average fraction of evaluated resonances that were reclassified as a function of the maximum number of training events considered, as shown in Fig. 10. Here we show the resulting fraction of reclassified ENDF resonances for different values of RMF in the training set.
We see that by 1000 training events, all values of the fraction of reclassified resonances have clearly converged to their average value. As a matter of fact, for all cases the average fraction of reclassified resonances converges after around 200-400 training events. For label mode L without capture widths, the average fraction of reclassified resonances seems to always increase as the training RMF increases; this trend is not observed in the other cases.
We turn our attention to the individual resonances from the evaluated file that are being reclassified. From the discussion above, it is clear that we cannot trust results using the capture width distribution. Further, from the discussions in Section IV B, we see that the most reliable reclassification process is obtained by classifying only by L. With these, we require a training set that has an RMF similar to that of the sequence being reclassified. However, it is challenging to define a priori the real RMF of a resonance sequence in an ENDF-evaluated file that originates from real measured data. To proceed, some realistic considerations based on expert judgement are necessary. It is very unlikely that the resonances for the major isotope of a well-known, well-measured material such as chromium would have more wrong spin assignments than correct ones. At the same time, it is unrealistic to assume that practically all assignments are correct. It is thus reasonable to assume that the RMF in real data of 52 Cr is somewhere in the range between ∼10% and 50%. From Fig. 10, for the cases without capture widths, we see that the fraction of reclassified resonances does not change much around training RMF = 20%, with RMF = 50% beginning to diverge from lower RMFs, indicating that the reclassification process for the evaluated resonance data is somewhat stable at RMF = 20%. For this reason, we show in Fig. 11 the normalized number of times each ENDF resonance was reclassified by the MLP algorithm trained on synthetic data with 20% RMF over the course of 1000 training events, as a function of the resonance energy. As a stability test, we also plot the results using training sets with RMF = 10% and 30%.
We see in Fig. 11 that there is indeed very little difference among the calculations with training data of the different RMFs listed. In general, we observe many regions in which no resonances are reclassified, or some of them only very rarely. There are, on the other hand, some resonances, and sometimes clusters of resonances, that are frequently, if not almost always, reclassified. In particular, we note two clusters of reclassified resonances: one near the beginning of the sequence and the other at the end, above ∼1.6 MeV. To rule out any intrinsic bias from this classification process, we repeated the exact same calculations, but this time, instead of applying the trained algorithm to real data, we applied it to an independent realization of synthetic data with 20% RMF. This is shown in Fig. 12. We see that the peaks of resonances reclassified most often for the synthetic sequence seen in Fig. 12 are more randomly distributed, without significant clusters. This is expected since the synthetic sequence had 20% of its resonances misassigned randomly. This lends confidence that the real resonances reclassified in multiple training events, with multiple training seeds, seen in Fig. 11 may actually correspond to incorrect assignments.
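A sketch of the tally behind Figs. 11 and 12 (our illustrative loop; `train_fn` stands in for one full training event, e.g. the averaged-classifier sketch above):

```python
# Fraction of training events in which each resonance gets reclassified.
import numpy as np

def reclassification_frequency(X_eval, y_eval, train_fn, n_events=1000):
    flips = np.zeros(len(y_eval), dtype=float)
    for event in range(n_events):
        clf = train_fn(event)                          # new training seed per event
        flips += (clf.predict(X_eval) != np.asarray(y_eval))
    return flips / n_events                            # normalized count, as in Fig. 11
```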
It is instructive to deconstruct the results shown in Fig. 11 by orbital angular momentum in order to see if there are correlations in the reclassified resonance assignments. This is shown in Fig. 13, broken into 10 equally spaced energy groups. We see that the resonances above 1450 keV are originally assigned to L = 0 and the reclassifier is attempting to reclassify them mainly to L = 2. In this evaluated set of resonances, the resonances above 1450 keV were added to provide a background and are not expected to be correctly classified. Interestingly, we see a similar behavior of the reclassifier in the lowest energy group. However, instead of reclassifying the L = 0 resonances, it is reclassifying the L = 1 resonances to L = 2. It is clear that the classifier expects more L = 2 resonances than are observed in the evaluation. What is less clear is whether we should trust the classifier's assignments any more than the original evaluator's expert judgement.
To further explore the classifier's choices, we show in Table IV the real resonances that were reclassified more than 50% of the time, together with the probability distribution of the reclassification label. Only the 44 most reclassified resonances are listed, which corresponds to ∼12% of the total number of real resonances in the evaluated file. This fraction is consistent with the asymptotic converged value for the average fraction of reclassified resonances for label mode L, without capture width features, and 20% training RMF, as seen in Fig. 10. In Table IV, we also show the L assignments from other resonance quantum number determinations in the literature. These references and the methods used to make their quantum number determinations are given in Table V. Interestingly, the 25 most commonly reclassified resonances were not observed by any of the authors in Table V, and their L determination is based solely on the shape analysis of Leal et al. [39]. If we were to adopt the reclassifier's assignments over those in Ref. [39], it would not have much measurable impact on the reconstructed cross section values, simply because the resonances in question are far enough apart, with very narrow widths, that the interference patterns between resonances cannot be seen. It would, nevertheless, change the scattering angular distributions somewhat. However, the distributions are usually very close to isotropic at low energies, so this too would have a small impact.
V. SUMMARY AND CONCLUSIONS
In this paper, we have outlined the first application of machine learning to the long-standing problem of classifying neutron resonances by their appropriate quantum numbers. We have described how we map statistical properties of resonances into OOD tests and then into features that can be used for resonance classification. We have demonstrated the efficacy of our approach both with synthetic data and with a real study of the 52 Cr ENDF/B-VIII.0 evaluation. We noted problems with the use of capture widths when confronting older datasets.
TABLE IV. 52 Cr resonances most reclassified in more than 50% of the training events. The references corresponding to the lead authors below are given in Table V. * Indicates that the given resonance's energy is above the upper limit of the resolved resonance region and that this resonance is present to provide a background contribution to the reconstructed cross section. L values in brackets indicate multiple possible assignments as per the original author.

It is clear that our approach has many avenues for improvement:

• There are many other features we wish to exploit, including a) the Dyson-Mehta Δ3 statistic and associated distribution, b) use of the full spacing-spacing correlation, c) better capture width distributions, and d) per-resonance metadata, such as how the quantum numbers were determined and how confident we are in the determination. Some methods provide quite robust quantum number assignments while others work well only for S-wave resonances.
• We would like to continue testing the method, especially against experimental data where the full spingroup assignment is believed to be correct (e.g., polarized neutron and target experiments on actinides, or data from the TRIPLE collaboration [42,43], in which partial or complete spingroup assignments are given; these do not necessarily agree with the choices of the ENDF/B-VIII.0 evaluators in Ref. [39]).

• We would like to refine our classification strategies, including a) adopting iteration, namely refitting all OODs after each round of classification, since Fig. 9 demonstrates convergence under certain conditions; b) adopting a staged approach where we first determine L, then move to full spingroup determination; c) optimizing the choice of classifier and corresponding hyperparametrization; d) training and validating on sections of real resonance sequence data that are well-constrained experimentally; e) exploring transfer learning to determine to what extent we can train on one nucleus's data and apply the classifier to another; and f) benchmarking the quality of the classifier by incorporating additional performance metrics (such as precision, recall, ROC curves, etc.) in the analysis, better determining improvement routes.
• In connection with the previous bullet, we would like to explore different measures of classification accuracy. In this work, we used total accuracy. As there are different numbers of resonances in each class (whether classifying by L or spingroup), we have imbalanced sets of data. In such a case, a balanced accuracy metric may be more appropriate [56].
• We would like to start a much broader discussion of the development of reproducible Uncertainty Quantification methods. Such methods must address the sensitivity of both our classification results and the reconstructed neutron integrated and differential cross sections (with the chosen spingroup assignments) to hyperparameters, feature weights, reclassification frequency, etc.
In addition to these improvements, there are many other issues we must consider. We have not attempted reclassification of a target nucleus with ground state I^Π ≠ 0+. Therefore, we were able to ignore the S quantum number and parity for the most part. We also have not attempted to use fission resonances. Also, there are questions about how doorway states and intermediate states might impact neutron width distributions. Finally, we would like to understand what experimental effects may impact our results, including, but not limited to, resonance sequence contamination from other isotopes and missing resonances.
Appendix A: Glossary of Machine-Learning terms
To assist the reader who may not be fully familiar with some of the common terms and expressions employed in Machine-Learning (ML) works, we briefly summarize some of the definitions as commonly adopted and/or as used in the current work:

• Features: Set of relevant quantities used to describe and characterize the data points associated with the ML problem. Features can be vectorized and define a feature space that is assumed to represent the input data well.
• Labels: Quantities associated with the output of an ML process; in other words, what the ML algorithm is attempting to predict. If labels are discrete quantities, objects, or concepts, the ML algorithm is said to be a classifier.
• Training dataset: Collection of data points of known labels that are used to train the ML algorithm. A trained algorithm is tuned to optimize the identification of labels from the training dataset.
• Testing dataset: Collection of data points of known labels, of similar origin to the training set but not used in training. Their purpose is to assess how well the ML algorithm was trained to recognize data points similar to the training data set.
• Validation dataset: Collection of data points of known labels that are compatible with, but independent of (not of the same origin as), the training dataset. Their purpose is to assess how well the trained algorithm performs on data points that it has never encountered before.
• Hyperparameters: Parameters of the ML algorithm that cannot be fully constrained by the model, and may be tuned to optimize the performance of the ML algorithm.
• Training seed: The training subset obtained when the input training data is randomly split into training and testing data sets during the classifier training process.
• Training event: The definition of a trained classifier using a particular training seed. Because each training seed is a different sample of the complete training data, each training event will lead to a different classifier, and thus a different set of predictions.
"year": 2022,
"sha1": "70f38a46e4ced7cb58067553bac052b332de3d89",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "70f38a46e4ced7cb58067553bac052b332de3d89",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Transcriptomes of antigen presenting cells in human thymus
Antigen presenting cells (APCs) in the thymus play an essential role in the establishment of central tolerance, i.e. the generation of a repertoire of functional and self-tolerant T cells to prevent autoimmunity. In this study, we have compared the transcriptomes of four primary APCs from human thymus (mTECs, CD19+ B cells, CD141+ and CD123+ DCs). We investigated a set of genes including the HLA genes, genes encoding transcriptional regulators and, finally, tissue-enriched genes, i.e., genes with a five-fold higher expression in a particular human tissue. We show that thymic CD141+ DCs express the highest levels of all classical HLA genes and 67% (14/21) of the HLA class I and II pathway genes investigated in this study. CD141+ DCs also expressed the highest levels of the transcriptional regulator DEAF1, whereas AIRE and FEZF2 expression were mainly found in primary human mTECs. We found expression of "tissue enriched genes" from the Human Protein Atlas (HPA) in all four APC types, but the mTECs were clearly dominating in the number of uniquely expressed tissue enriched genes (20% in mTECs, 7% in CD19+ B cells, 4% in CD123+ DCs and 2% in CD141+ DCs). The tissue enriched genes also overlapped with reported human autoantigens. This is, to our knowledge, the first study that performs RNA sequencing of mTECs, CD19+ B cells, CD141+ and CD123+ DCs isolated from the same individuals, and it provides insight into the transcriptomes of these human thymic APCs.
Introduction
Antigen presenting cells (APCs) in the thymus are essential for the establishment of central tolerance. By presenting self-peptides to the developing thymocytes, they contribute to the critical process of selecting thymocytes with functionally competent T-cell receptors tolerant to the body's tissues and organs. Due to comprehensive studies performed over the last decades, mainly in mice, extensive knowledge about how thymic APCs mediate central tolerance has been obtained [1][2][3][4][5][6][7][8][9]. Here, we focus on four different types of APCs: CD141+ and CD123+ dendritic cells (DCs), CD19+ B cells and medullary thymic epithelial cells (mTECs).
The most widely studied thymic APC, the mTEC, is a specialized cell type that transcribes a large number of tissue-specific genes [2]. Expression of these genes encoding tissue-restricted antigens (TRAs) contrasts with the tight spatio-temporal control of gene expression in peripheral tissues during pre- and post-natal development and has been termed "promiscuous gene expression" (PGE) [10][11][12][13][14][15]. A given TRA is expressed at low levels and only in a minority of mTECs (1-3%) at any given time [1]. Approximately 40% of the TRAs [16] are under the transcriptional control of the autoimmune regulator (Aire). This regulator protein is crucial for the establishment of central tolerance, and loss-of-function mutations in AIRE cause a recessive autoimmune syndrome termed autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED) in humans [17]. Takaba et al. recently reported a second regulator in mTECs, the Fez family zinc-finger 2 (Fezf2), which mediates the expression of Aire-independent TRAs [3]. Furthermore, the major subset of DCs found in the thymus belongs to the conventional DC (cDC) lineage, and can be classified as CD8α+ SIRPα− cDCs in mice [1] or CD141+ in humans [18,19]. CD8α+ SIRPα− cDCs originate intrathymically and can present TRAs that have been transferred by mTECs [20]. CD123+ plasmacytoid DCs are also present; however, these cells acquire antigens from the blood stream before migrating into the thymus, where they present them to the developing T-lymphocytes [4,5]. In this way, they may contribute with self-peptides that are not already included in the spectrum of TRAs promiscuously expressed by mTECs [21]. Whether thymic DCs also transcribe and express their own TRAs is not clear. Finally, thymic B cells are also capable of presenting peptides to the developing thymocytes and inducing negative selection [6][7][8]. The origin of thymic B cells is not fully understood, as both development from intrathymic progenitors [6] and migration from the peripheral circulation [8] have been suggested. In thymic B cells, a subset expressing Aire and Aire-dependent TRAs has been reported in mice [8,22] and recently, expression of AIRE and a few TRA genes was also detected in human thymic B cells [23].
Aire and Fezf2 are not the only known transcriptional regulators of TRA expression. The deformed epidermal autoregulatory factor 1 (Deaf1) controls the expression of approximately 600 genes in the pancreatic lymph nodes [24], where around half of the genes were upregulated and half were downregulated upon Deaf1 knockout. Among the downregulated genes, almost three quarters encoded potential peripheral tissue antigens. Deaf1 therefore acts as a potential transcriptional regulator of TRAs, and to date, DEAF1 expression has not been examined in human thymic APCs.
To our knowledge, RNA sequencing of human thymic APCs has only been performed in B cells [23]. In this study, we have performed high-throughput RNA sequencing to compare the transcriptomes of four different primary APCs (mTECs, CD141+ DCs, CD123+ DCs and CD19+ B cells) from human thymus, and investigated a set of genes including the HLA genes, genes encoding transcriptional regulators and tissue-enriched genes.
Purification of primary thymic APC
We isolated four different cell populations (mTECs, CD141+ DCs, CD123+ DCs and CD19+ B cells) from six human thymic samples; one biological replicate was removed from the mTECs due to contamination. After RNA sequencing and trimming of the data, the average library size was 59.5 ± 8.3 million mapped paired reads for the APC samples (S1 Table). The final APC dataset comprised 15245 Ensembl genes, where transcript levels were quantified in Fragments per Kilobase of transcript per Million mapped reads (FPKM). A multi-dimensional scaling (MDS) plot (Fig 1A) of leading log2-fold-changes showed that the samples from the same thymic APC subtype tended to group together in distinct clusters, indicating that the variation between the four cell populations is higher than the variation within each population. To validate the purity of our cell populations, we investigated the expression of indicator genes (S1 and S2 Figs) previously established in the human thymic cell types (S2 Table). Taken together, we regarded these cell populations as sufficiently pure to pursue comparative gene expression analyses.
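For reference, the conventional FPKM normalization (not restated in the paper, but the standard definition of the unit used here) is

$$\mathrm{FPKM}_i = \frac{X_i \cdot 10^9}{\ell_i \, N},$$

where X_i is the number of fragments mapped to gene i, ℓ_i is the transcript length in base pairs, and N is the total number of mapped fragments in the library.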
Genes differentially expressed and spliced between the thymic APCs
First, we investigated how genes with expression levels FPKM > 1 were distributed between the thymic APCs ( Fig 1B). We found that 46% (n = 7043) of the 15245 genes present in our dataset were expressed (FPKM >1) in all four thymic APC cell types, while 13% of the genes in the dataset had FPKM levels below 1 in all APC populations. The DCs (i.e. the CD123+ and/or the CD141+) shared 66% (n = 7043 + 1939 + 169 + 146 + 250 + 440) of the expressed genes with B cells whereas only 51% (n = 7043 + 139 + 104 + 250 + 146 + 101) were shared between the DCs and the mTECs. B cells and mTECs also shared 51% (n = 7043 + 269 + 146 + 250) of the expressed genes in the dataset. The percentage of all expressed genes that were detected uniquely was 7% in mTECs (n = 1084), 4% (n = 685) in CD19 + B cells, 2% (n = 309) in CD123 + DCs, and finally 2% (n = 268) in CD141 + DCs. A list of the unique and commonly expressed genes (FPKM > 1) between the four thymic APCs has been provided in S3 Table. We continued by exploring significantly differentially expressed (DE) and differentially spliced (DS) genes between the thymic APCs (Table 1 and S3 Fig). The largest number of significantly DE and DS genes was seen between mTECs and non-epithelial APCs (2787-3093 DE genes and 339-372 DS genes). Fewer DE and DS genes were observed when the CD19 + B cells were compared to the DC subsets (284-769 DE genes and 26-47 DS genes) and between the CD123 + and the CD141 + DCs (139 DE genes and 28 DS genes). A list of the significantly DE genes has been provided in S4 Table. Furthermore, we performed a gene ontology (GO) enrichment analysis of the significant DE genes (log 2 FC > 1, FDR < 0.05) pairwise between the APCs (S4-S9 Figs). The most significant GO term in CD141 + , CD123 + and CD19 + compared to mTECs was "immune system process". Conversely, the most significant GO term in mTECs compared to CD141 + , CD123 + and CD19 + B cells was "anatomical structure development". Interestingly, we also observed GO term enrichment branching down to more specific terms in the mTECs, such as "regulation of nervous system development" and "muscle system process".
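An illustrative sketch (pandas; the column layout and the median-FPKM criterion are our assumptions, not the authors' pipeline) of the Venn-style counts of genes expressed (FPKM > 1) uniquely or shared between the four APC populations:

```python
import itertools
import pandas as pd

def expressed_sets(fpkm: pd.DataFrame, cell_type: dict, threshold: float = 1.0):
    """fpkm: genes x samples; cell_type maps sample column -> population name."""
    sets = {}
    for ct in set(cell_type.values()):
        cols = [c for c in fpkm.columns if cell_type[c] == ct]
        sets[ct] = set(fpkm.index[fpkm[cols].median(axis=1) > threshold])
    return sets

def venn_counts(sets: dict):
    """Count genes expressed in exactly each combination of populations."""
    counts, names = {}, sorted(sets)
    for r in range(1, len(names) + 1):
        for combo in itertools.combinations(names, r):
            inside = set.intersection(*(sets[c] for c in combo))
            outside = set().union(*(sets[c] for c in names if c not in combo))
            counts[combo] = len(inside - outside)   # expressed only in `combo`
    return counts
```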
HLA and genes involved in the HLA class I and II pathways
We next analyzed the expression levels of the classical HLA genes and the genes involved in the HLA class I and class II antigen presentation pathways [25]. For all the classical HLA genes, the CD141 + DCs expressed the highest levels (Fig 2).
The class I genes HLA-B and HLA-C displayed the highest and lowest expression levels in all APCs, respectively. The class II genes HLA-DRA1 and HLA-DRB1 obtained the highest expression levels, whereas the lowest expression levels varied between HLA-DQA1 and HLA-DQB1. The classical HLA genes significantly DE (FDR < 0.05) between the thymic APCs have been listed in S5 Table. Among the genes involved in the HLA class I antigen presentation pathway (S10 Fig), we observed that the CD141+ DCs also expressed the highest levels of B2M (β2-microglobulin), CALR (calreticulin), ERAP2 (endoplasmic reticulum aminopeptidase 2), PDIA3 (protein disulfide isomerase family A member 3, also known as ERp57), PSMB8 (immunoproteasome subunit β5i), TAP1, TAP2 and TAPBP (the TAP transporters and tapasin, respectively). CANX (calnexin) and ERAP1 were highest in the CD123+ DCs, whereas PSMB5 (house-keeping proteasome subunit β5) was highest in mTECs. Among the genes involved in the HLA class II antigen presentation pathway (S11 Fig), the CD141+ DCs expressed the highest levels of CD74 (invariant chain), HLA-DMA, HLA-DMB, HLA-DOA, HLA-DOB (the heterodimeric glycoproteins DM and DO) and IFI30 (the lysosomal thiol reductase, also known as GILT). The genes CTSB and CTSS, encoding cathepsins B and S, and LGMN, encoding the asparaginyl endopeptidase (all three are involved in proteolysis of antigens in the endosomal and lysosomal compartments before HLA class II loading), showed the
Transcriptional regulator genes
We continued by investigating the expression level of three transcriptional regulator genes, AIRE, FEZF2 and DEAF1 (Fig 3), in the human thymic APCs. The mTECs were the only APC type that showed a median expression level of AIRE and FEZF2 above FPKM = 1. In contrast, DEAF1 was expressed in all APCs, but the highest levels were clearly detected in the CD141+ DCs.
Tissue enriched genes in the thymic APCs
Furthermore, we wanted to explore to what extent genes encoding TRAs were expressed in the thymic APCs. However, this turned out to be a more complicated task than anticipated, as large-scale projects such as the Human Protein Atlas (HPA) have shown that many "tissue-specific" proteins from the literature are in fact expressed in several tissues [26]. We therefore used the list of "tissue enriched" genes from the HPA (i.e. genes where mRNA levels in one tissue type are at least five times the maximum levels of all other tissues analyzed) and investigated the expression level of these genes in the thymic APCs. A total of 601 tissue enriched genes were present in our APC dataset, and the percentage of all expressed genes (FPKM > 1) that were detected uniquely was 20% in mTECs (n = 121), 7% (n = 43) in CD19+ B cells, 4% (n = 23) in CD123+ DCs and 2% (n = 13) in CD141+ DCs (Fig 4A). A total of 15% (n = 91) of the tissue enriched genes were expressed in all four APCs, while 26% (n = 157) of the genes had FPKM levels < 1. It has been reported in the literature that each individual TRA is only expressed in a minority of the mTECs (1-3%) at any given time [1]. We therefore questioned whether our strict edgeR data threshold (generally set to avoid false positives), stating that genes need to be present in five biological replicates to be included in the APC dataset, restricted the number of tissue enriched genes in our analysis. Therefore, we reanalyzed the data with a lower threshold, where genes only needed to be present in one biological replicate to be included in the dataset (Fig 4A and S7 Table). The number of unique tissue enriched genes then increased remarkably in the mTECs (from 121 to 178, Δ = 57) compared to the other APCs (Δ = 5 in CD19+ B cells, Δ = 2 in CD123+ DCs and Δ = 1 in CD141+ DCs). The percentage of tissue enriched genes shared between the four APCs was 7%; however, the number of genes with FPKM < 1 increased from 26% to 63% (n = 852), indicating that quite a lot of the tissue enriched genes are expressed at very low levels. We therefore plotted the number of tissue enriched genes in the four APCs across FPKM thresholds ranging from 0.5 to 11 (S12 Fig). We then observed that the mTECs expressed the highest number of tissue enriched genes, regardless of the FPKM threshold. Finally, we investigated whether there was any overlap between the tissue enriched genes and genes encoding human autoantigens (Fig 4B) from the Immune Epitope Database (see materials and methods). All the APCs expressed tissue enriched genes overlapping with autoantigen genes (S8 Table).
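A sketch of the threshold scan behind S12 Fig (the data layout is assumed): count HPA tissue-enriched genes expressed per APC across FPKM cutoffs from 0.5 to 11:

```python
import numpy as np

def tissue_enriched_counts(median_fpkm: dict, enriched_genes, thresholds=None):
    """median_fpkm: population name -> pandas Series of median FPKM per gene."""
    if thresholds is None:
        thresholds = np.arange(0.5, 11.5, 0.5)
    counts = {}
    for ct, med in median_fpkm.items():
        med = med.loc[med.index.intersection(list(enriched_genes))]
        counts[ct] = [int((med > t).sum()) for t in thresholds]
    return thresholds, counts
```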
Autoimmune expression quantitative trait loci genes in the APCs
We have previously reported autoimmune disease associated expression quantitative trait loci (eQTL) in whole human thymus [27,28]. As the thymic tissue is composed of 98% thymocytes and only 2% APCs [29], it is conceivable that the expression of these eQTL genes (eGenes) originated from the developing T cells. Nonetheless, we questioned whether five of the eGenes (FCRL3, ERAP2, RNASET2, SIRPG and SYS1) were expressed in the thymic APCs, and also included 10 suggested eGenes with P-values < 7.4 × 10⁻⁴ that did not reach the significance threshold in the eQTL study [27]. We observed that all the eGenes were expressed (FPKM > 1) in at least one of the APCs (S13 and S14 Figs). While IP6K1, PARK7, SYS1 and TROVE2 were expressed (FPKM > 1) in all four cell types, SLC16A14 was the only eGene with a median FPKM expression level above 1 uniquely in one of the APC subsets (the CD141+ DCs). However, we could detect variable levels of SLC16A14 among the biological replicates in the mTECs and in the CD123+ DCs, indicating that this gene is not strictly CD141+ specific.
Discussion
In this study, we compare the transcriptome data of four human thymic APC subsets obtained from children undergoing heart surgery at very young age. We show that the thymic CD141 + DC is the most active APC in terms of expressing HLA and HLA pathway genes. Our data also supports that mTECs express AIRE, FEZF2 and a high variety of tissue enriched genes. This insight contributes to the field of transcriptomics and the gene lists generated in this study provide a rich resource to the scientific community. This is, to our knowledge, the first study that performs RNA sequencing of mTECs, CD141 + DCs, CD123 + DCs and CD19 + B cells isolated from the same individuals.
The finding that nearly half (46%) of all the genes in our dataset were expressed (FPKM > 1) in all four thymic APCs suggests that these gene products are needed for common house-keeping functions, energy generation, cell growth and basic metabolism. We have also seen from the analysis (Fig 2 and S10 Fig) that the four thymic APCs share certain genes involved in antigen processing and HLA presentation. Furthermore, we observed that a larger number of genes were shared between the DCs and the B cells, whereas the mTECs expressed more unique genes. Also, the largest number of both DE and DS genes was found between mTECs and the non-epithelial APCs. This could partly reflect the fact that mTECs derive from a non-hematopoietic cell lineage, whereas both B cells and DCs derive from hematopoietic cell precursors. When we compared CD123+, CD141+ and CD19+ cells to mTECs, we found that these hematopoietic cells expressed more genes involved in immune system processes, reflecting their function as immune cells. Conversely, mTECs seemed to express more genes involved in "anatomical structure development", reflecting their function as epithelial cells. The GO terms "regulation of nervous system development" and "muscle system process" also turned up, which could be due to the expression of tissue enriched genes. However, it could also be caused by the presence of rare epithelial cells with an expression phenotype resembling that of cells from muscle and neurons [29].
One interesting finding was that the human CD141+ DCs expressed (FPKM > 1) the highest levels of all classical HLA genes and 67% of the HLA class I and II antigen presentation pathway genes investigated in this study. The high HLA expression in CD141+ DCs, and simultaneously low expression of tissue enriched genes compared to the other APCs, could indicate that this cell type focuses on the presentation of extracellular peptides derived from the thymic environment. Conversely, primary mTECs expressed the lowest levels of HLA compared to the other thymic APCs. Among the HLA pathway genes where CD141+ DCs expressed the highest levels, mTECs consequently expressed the lowest levels, whereas thymic B cells and CD123+ DCs had more similar levels. It should be noted that surface protein expression levels cannot be directly deduced from mRNA levels, due to variations in e.g. transcriptional and translational rates and/or mRNA and protein stabilities [30]. However, previous studies [30,31] have shown a moderate correlation between mRNA and protein levels, which was better than previously thought. It is therefore conceivable that the HLA protein level reflects, to a certain degree, the mRNA level in the thymic APCs.
Consistent with the literature, we found AIRE and FEZF2 expression in human mTECs. In the thymic B cells, AIRE was expressed at extremely low levels (FPKM = 0.03) compared to mTECs. However, as only 5% of human thymic B cells are AIRE-positive [23], this could explain the low expression levels. We also discovered that DEAF1 was expressed in all four APCs, with the highest expression levels in CD141+ DCs. As Deaf1 is involved in regulating certain genes encoding peripheral tissue antigens in peripheral lymphoid tissues [24], and the thymic APCs are presenters of antigens from the periphery, it would be interesting to further investigate the role of this transcription factor in the thymic APCs.
Furthermore, we found tissue enriched genes in all four thymic APCs, but when we only considered the uniquely expressed tissue enriched genes in these cell types, the mTECs clearly expressed the largest number, especially when we lowered the data threshold. Additionally, several of the expressed tissue enriched genes in all the APCs overlapped with reported human autoantigen genes from the Immune Epitope Database. However, it still remains uncertain to what extent these low-level transcripts are actually translated and presented on the APC surface [32]. More studies concerning the HLA-peptide repertoire in APCs are needed to confirm which tissue enriched peptides that are presented to the developing thymocytes in human thymus, and as stated by others [32,33], these types of experiments are currently limited by available technology.
Lastly, even though thymocytes are the most abundant cell type in whole thymic tissue, previously reported thymic autoimmune disease associated eGenes were clearly expressed in the thymic APCs. This suggests that gene expression levels might be influenced by risk variants in the thymic APCs. None of the eGenes were clearly cell type-specific. However, the number of eGenes was limited, as the study from which they were obtained [27] was underpowered due to the low number of thymi (n = 42). In the future, larger studies including more thymic tissue samples will most likely reveal more autoimmune disease associated eQTLs, which further encourages the search for autoimmune disease related eQTLs in individual thymic APCs.
In this study, we used five to six biological replicates for each APC type, as it has been reported that the number of DE genes increases with the number of biological replicates (n = 2-6) [34]. This strengthens our study by improving the accuracy of the log2FC estimates. Furthermore, Liu et al. also report that, for DE studies, sequencing more than 10 million reads per sample gives diminishing returns compared with adding replication [34]. The majority of the APC samples (22 of 23) comprised between 17 and 37 million paired reads in their libraries (see S1 Table), indicating sufficient sequencing depth for differential expression analysis. One mTEC sample only had 7,219,276 paired reads, but this sample still clustered together with the other mTECs in the MDS plot and was therefore kept in the analyses. Sequencing deeply is also advantageous when analyzing differential expression of exons [34].
To conclude, this study provides data on the transcriptomes of four human thymic APCs, as well as insight into the expression profiles of genes important for the APCs and genes associated with risk for autoimmune diseases.
Materials and methods
The project is approved by the Regional Ethics Committee (REC) South-East, the Norwegian Social Science Data Service, and the Norwegian Directorate of Health.
Sample material
Human thymus tissue was obtained from six children undergoing cardiac surgery. All six children were boys within an age range of 24 days-16 months. None of the patients had any known syndromes. This project was approved by the regional ethical committee and written informed consent was given by all parents. All tissue samples were made anonymous.
Thymus dissociation
A half thymus (~10 g) was collected each time and immediately washed twice in PBS (Gibco, Thermo Fisher #14190-094, MA, USA) and then stored for 30 min in a medium consisting of 90% RPMI (Sigma-Aldrich #R7509, MO, USA) and 10% heat-inactivated FCS (PAAlab #15-102, Pasching, Austria). The thymic tissue was then divided and treated in two C-tubes (~5 g in each) with Collagenase D (Roche Life Science #11088858001, Basel, Switzerland) three times and Liberase (Roche Life Science #05401119001) [35] twice on a gentleMACS Octo Dissociator (Miltenyi Biotec #130-096-427, Bergisch Gladbach, Germany) to completely dissolve the tissue. The first C-tube was intended for TEC isolation and was treated at 37˚C [29], while the second C-tube, intended for B cell and DC isolation, was treated at 20˚C [36]. After each dissociation, the supernatants from each tube were filtered and pooled into final, respective cell suspensions, before the cells were counted.
Isolation of thymic APCs
Four different cell types (mTECs, CD19+ B cells, CD123+ and CD141+ DCs) were isolated from each thymus. After counting the cells in the two total cell suspensions, OptiPrep Density Gradient Medium (Axis Shield, Alere, Oslo, Norway) was used to separate the light-density APCs from the T-lymphocytes. After centrifugation, the layer with the purified APCs treated at 37˚C was transferred to one tube, while the layer of APCs treated at 20˚C was divided and transferred to two new tubes. The first tube (37˚C) was first depleted of CD45+ cells with the EasySep Human CD45 Depletion Kit (STEMCELL Technologies #18259) before TECs were EpCAM-positively selected with CELLection Epithelial Enrich (Thermo Fisher #16203). cTECs were further separated from the mTECs by using anti-CDR2 [37]-biotin and employing the EasySep Biotin Selection Kit (STEMCELL Technologies #18553). The cTECs were not used in this study. The second tube (20˚C) was used for isolating CD123+ and CD141+ DCs. These cell types were separated on an autoMACS Pro Separator (MACS Miltenyi Biotec), where the fraction was first treated with the MACS Miltenyi CD303 (BDCA-2) kit to isolate CD123+ DCs, and the remaining supernatant was then treated with the MACS Miltenyi CD141 (BDCA-3) kit (Miltenyi Biotec GmbH, #130-090-509, #130-090-512, Bergisch Gladbach, Germany) to isolate CD141+ DCs. The third tube (20˚C) was used to isolate B cells with the EasySep Human CD19 Positive Selection Kit (STEMCELL Technologies #18054, Vancouver, Canada). The markers chosen to isolate the APCs in this study (EpCAM, CD123, CD141 and CD19) are well established markers for human APCs, and have also previously been found in the respective human thymic APCs (see S2 Table). The cells were stored in RNAprotect Cell Reagent (Qiagen #76526) at -80˚C. RNA was extracted from all cell types with the RNeasy Plus Micro Kit (Qiagen #74034, Hilden, Germany).
RNA-seq preparation
Because of low RNA yields, 100 pg of RNA from each cell type was used for amplifying cDNA with SMART-Seq v4 Ultra Low Input RNA Kit for Sequencing (Clontech Laboratories, CA, US). 1 ng of the cDNA was further prepared with MicroPlex Library Preparation Kit v2 (Diagenode, Seraing, Belgium). RNA sequencing (125 bp paired end) was performed at the Norwegian Sequencing center (NSC) on Illumina HiSeq 2500 (Illumina, CA, US) with 4 samples per lane.
Data processing in edgeR
The two datasets, quantified at the gene level and exon level respectively, were further processed using the Bioconductor package edgeR [42] (Version 3.3.3). Tags expressed at less than 1 count-per-million (CPM) or 0.1 CPM, and present in fewer than five biological replicates, were filtered from the datasets quantified at the gene level and exon level, respectively. Normalization was performed by using the edgeR calcNormFactors function. This function finds a set of scaling factors for the library sizes that minimizes the log-fold changes between the samples for most genes [43]. To compute these scale factors, edgeR uses a trimmed mean of M-values (TMM) between each pair of samples.
After these processing steps, our APC dataset comprised 15245 Ensembl genes, where transcript levels were quantified in Fragments per Kilobase of transcript per Million mapped reads (FPKM). For all analyses, except the differential expression, differential splicing and GO enrichment analyses, median gene expression levels have been used.
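As an illustration of the processing just described, the following is a minimal R sketch, not the authors' actual script; the objects `counts` (the FeatureCounts gene-level matrix), `gene_lengths` (transcript lengths in bp) and `cell_type` (a factor of APC labels) are assumed placeholders, and only the gene-level thresholds are shown.

```r
library(edgeR)

dge <- DGEList(counts = counts, group = cell_type)

# Gene-level filter used above: >= 1 CPM in at least five biological
# replicates (the exon-level dataset used 0.1 CPM instead)
keep <- rowSums(cpm(dge) >= 1) >= 5
dge  <- dge[keep, , keep.lib.sizes = FALSE]

# TMM normalization: scaling factors chosen so that log-fold changes
# between samples are minimized for most genes
dge <- calcNormFactors(dge, method = "TMM")

# FPKM-like values (rpkm() on fragment counts) and per-cell-type medians,
# matching the use of median expression levels described below
fpkm        <- rpkm(dge, gene.length = gene_lengths[keep])
median_fpkm <- t(apply(fpkm, 1, tapply, cell_type, median))
```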
Quality control after purification of primary thymic APCs
An MDS plot for the 23 samples (five biological replicates for the mTECs, and six replicates each for the CD19+ B cells, CD123+ and CD141+ DCs) was made in edgeR, based on the gene level quantifications from FeatureCounts. The two axes in the MDS plot correspond to the leading log2-fold change between each pair of samples, i.e. the root-mean-square average of the largest log2-fold changes between each pair of samples. A Venn diagram of the four APCs was made with the R package VennDiagram [44].
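A hypothetical sketch of these quality-control plots, reusing the `dge` and `median_fpkm` objects from the sketch above; the FPKM > 1 cut-off for calling a gene expressed follows the threshold used earlier in the text, and the output file name is invented.

```r
library(edgeR)
library(VennDiagram)

# MDS plot: the two axes show the leading log2-fold change, i.e. the
# root-mean-square of the largest log2FC between each pair of samples
plotMDS(dge, labels = colnames(dge), col = as.integer(dge$samples$group))

# Venn diagram of genes expressed (median FPKM > 1) in each APC type
expressed <- lapply(colnames(median_fpkm), function(ct)
  rownames(median_fpkm)[median_fpkm[, ct] > 1])
names(expressed) <- colnames(median_fpkm)
venn.diagram(expressed, filename = "apc_venn.tiff")
```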
Differential expression and GO enrichment analysis
Differential expression analysis was carried out in edgeR using generalized linear models (GLM) and GLM likelihood ratio tests to determine DE genes (log2FC > 1 and FDR < 0.05) between the cell types through pairwise comparisons [42]. Differential exon usage analysis was performed by applying the edgeR F-test. A GO enrichment analysis was performed with all significant DE genes (log2FC > 1 and FDR < 0.05) in the GOrilla software [45,46] (http://cbl-gorilla.cs.technion.ac.il/) using two unranked lists of genes (target and background lists). Only the pairwise comparisons involving mTECs yielded enough DE genes to return GO results. In order to reduce the figure sizes, the P-value threshold was set to 10^-9 when DE genes in either CD141+, CD123+ or CD19+ cells were used as the target list, whereas a P-value threshold of 10^-6 was sufficient when DE genes in mTECs were used as the target list. Box plots were made with the R package ggplot2 [47].
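The GLM-based testing might look roughly as follows for a single pairwise comparison; this is a sketch only, with the group level names (`mTEC`, `CD141_DC`) invented for illustration and the DE thresholds taken from the text.

```r
library(edgeR)

design <- model.matrix(~ 0 + group, data = dge$samples)
colnames(design) <- levels(dge$samples$group)   # assumed names, e.g. "CD141_DC", "mTEC"

dge <- estimateDisp(dge, design)
fit <- glmFit(dge, design)                      # one GLM per gene
lrt <- glmLRT(fit, contrast = makeContrasts(mTEC - CD141_DC, levels = design))

res <- topTags(lrt, n = Inf)$table
de  <- res[res$logFC > 1 & res$FDR < 0.05, ]    # DE genes up in mTECs

# Unranked target/background lists for the GOrilla enrichment analysis
writeLines(rownames(de),  "target_genes.txt")
writeLines(rownames(res), "background_genes.txt")
```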
Transcriptional regulator genes, tissue enriched genes and genes encoding autoantigens in the thymic APCs
A boxplot of AIRE, FEZF2 and DEAF1 was made with the R package ggplot2. TRA genes were obtained from the HPA (Version 18) [26] (www.proteinatlas.org), where we downloaded the list of 2608 tissue-enriched genes from the tissue specific proteome. Annotation as "Tissue enriched" indicates that mRNA levels for these genes have been found to be at least fivefold higher in a particular tissue compared to all other tissues in the database (data available from v16.proteinatlas.org). This list was further merged with our gene expression dataset. Our gene expression dataset only includes genes that are present in at least five biological replicates, and after merging, we found 601 of the tissue enriched genes in the APC dataset. However, as it has been reported in the literature that each individual TRA is lowly expressed in only 1-3% of mTECs at any given time, we lowered the filtering criteria in edgeR for this analysis and included genes present in at least one biological replicate in the dataset. We then found 1362 tissue enriched genes in our gene expression dataset. Furthermore, we searched for autoantigens associated with autoimmune diseases in the Immune Epitope Database [48] (www.iedb.org) by setting the parameter "Disease" to "Autoimmune Disease" and "Host" to "Humans". Among the 1733 autoantigens, 1402 were encoded by human genes. The list of the 1402 autoantigen genes was further merged with the 1362 tissue enriched genes in our dataset, where we found 117 tissue enriched genes that overlapped with autoantigen genes. The Venn diagram in Fig 4 was made with the R package VennDiagram [44]. The plot of total gene number across different FPKM thresholds was made with the R package ggplot2.
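The overlap counts reported above reduce to set intersections; a sketch in R, with `hpa_te` (the 2608 HPA tissue-enriched symbols), `apc_genes` (genes passing the relaxed one-replicate filter) and `iedb_autoantigens` (the 1402 human autoantigen genes) as assumed character vectors:

```r
te_in_apc <- intersect(hpa_te, apc_genes)             # 1362 genes reported above
te_autoag <- intersect(te_in_apc, iedb_autoantigens)  # 117 overlapping genes
length(te_in_apc)
length(te_autoag)
```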
Autoimmune disease genes
The eQTL genes were chosen from S1 Table in [27]. The microarray probes for the 10 eGenes with suggested significance were quality controlled as described in [27]. We further searched for these genes in our RNA sequencing data in the four thymic APCs.
"year": 2019,
"sha1": "77a6ac6c72e4c15268f34eaa348343fdc2d807a4",
"oa_license": "CC0",
"oa_url": "https://doi.org/10.1371/journal.pone.0218858",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77a6ac6c72e4c15268f34eaa348343fdc2d807a4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Evaluation of compressive strength, shear bond strength, and microhardness values of glass-ionomer cement Type IX and Cention N
Aim: This study aimed to compare the compressive strength, shear bond strength, and microhardness of glass-ionomer cement (GIC) Type IX and Cention N. Materials and Methods: Five samples each of GIC Type IX and Cention N were prepared for testing the shear bond strength, compressive strength, and microhardness. Cylinders of the samples measuring 1 cm diameter and 6 mm height were prepared for compressive strength and shear bond strength. For shear bond strength, these samples were embedded into acrylic blocks of dimensions 2 cm × 2 cm. Testing of shear bond strength and compressive strength was done by mounting the samples in a universal testing machine with a crosshead speed of 1 mm/min. The samples for microhardness were 1 cm diameter and 5 mm height. The samples were mounted on a Vickers microhardness testing machine to test the microhardness. Results: The values for shear bond strength of Cention N were statistically highly significant (P < 0.01) as compared to GIC Type IX, whereas the compressive strength and microhardness values of Cention N were statistically significant (P < 0.05) as compared to GIC Type IX. Conclusion: The results suggest significantly higher values for the mechanical properties of Cention N as compared to GIC Type IX.
INTRODUCTION
The initial signs of dental caries include surface softening; however, when the lesion progresses to the point of breaks in the continuity of the enamel surface, microcavitations occur. Cavitation is a critical stage in the caries process, as bacteria can then easily invade the dentin. [1] Historically, the management of dental caries was based on the belief that caries was a progressive disease that eventually destroyed the tooth unless there was surgical and restorative intervention. [2] Consequently, present-day management of dental caries includes identifying an individual's risk for caries progression and assessing disease progression, alongside management with appropriate preventive services, accompanied by restorative therapy when indicated. Conversely, some carious lesions may not progress and, therefore, may not need restoration. [3] The benefits of restorative therapy include: removing cavitations or defects to eliminate areas that are susceptible to caries, stopping the progression of tooth demineralization, restoring the integrity of tooth structure, preventing the spread of infection into the dental pulp, and preventing the shifting of teeth due to loss of tooth structure. [4] Among the dental restorative materials, silver amalgam has been used for >100 years for the restoration of posterior teeth owing to its good mechanical properties.
However, the controversy regarding amalgam, due to the safety of mercury and any causal link with a variety of diseases, is one of the oldest ongoing arguments in medicine. [5] Numerous direct filling materials are available to the modern dental practitioner for posterior load-bearing restorations, from silver amalgam through to modern-day bulk-fill composites. The prime concerns for a restorative material in pediatric patients include factors such as its ability to bear stress, durability, integrity of marginal sealing, esthetics, and the time taken for the restoration. In posterior tooth restorations, mechanical and physical properties play a vital role, as the restoration is subjected to heavy occlusal loads. [6] A leap in direct restoratives was made with the introduction of light-cured composites. Composites were introduced in the 1960s and have been available for nearly 50 years. [7] Although composite resin materials have good physical properties, the main limitations are polymerization shrinkage resulting in marginal microleakage, postoperative sensitivity, and secondary caries. [8] Glass-ionomer cements (GIC) can be viewed as basic filling materials; they are long established, economical, and simple to use. They are usually applied in bulk without an adhesive, are self-curing, and do not require complicated dental equipment. [5] Recently, a tooth-colored, basic filling material for direct restorations, Cention N, has gained importance in restorative dentistry. It is self-curing with optional additional light-curing. The alkasite Cention N thus redefines the basic filling, combining bulk placement, ion release, and durability in a dual-curing, esthetic product, satisfying the demands of both dentists and patients. Cention N has been suggested to have strength comparable to amalgam and the esthetics of GIC. [5] In the quest to further study the properties of Cention N and to compare the compressive strength, shear bond strength, and microhardness of GIC Type IX and Cention N, the following study was conducted to establish Cention N as a material for the restoration of primary teeth.
MATERIALS AND METHODS
The present study was conducted in the Department of Pedodontics and Preventive Dentistry and Centre for Advanced Research of the institute.
The materials used in the study were Fuji IX GIC (GC Gold Label) and Cention N (Ivoclar Vivadent). The powder of GIC Type IX consists of alumina, silica and calcium fluoride, and the liquid consists mainly of polyacrylic acid and tartaric acid. In contrast, the liquid of Cention N is a monomer combination of UDMA, DCP, an aromatic aliphatic-UDMA, and PEG-400 DMA, while the powder consists of ytterbium trifluoride and barium aluminum silicate glass along with the photoinitiator Ivocerin. Five samples each of GIC Type IX (Group 1) and Cention N (Group 2) were prepared for testing the shear bond strength, compressive strength, and microhardness.
Sample preparation for shear bond strength and compressive strength
Cylinders of the samples measuring 1 cm diameter and 6 mm height were prepared [Figure 1]. Initially, molds were prepared from modeling wax with the measured dimensions. The molds were then filled with the restorative material by mixing the powder and liquid according to the manufacturer's instructions. The molds were filled up to the height of the cylindrical mold, and the sample was covered with a mylar strip, followed by a glass slab. The samples were then de-molded, and finishing was done using finishing burs.
For shear bond strength, these samples were embedded into acrylic blocks of dimensions 2 cm × 2 cm. The samples were embedded to a height so that 2 mm of the sample was above the acrylic block. Following this, the samples were stored in distilled water for 24 h.
Testing of shear bond strength and compressive strength was done by mounting the samples in a universal testing machine with a crosshead speed of 1 mm/min.
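For reference (this relation is standard and not stated explicitly in the text), the compressive strength of such a cylindrical specimen is obtained from the maximum load \(F\) at fracture and the cross-sectional area; for a cylinder of diameter \(d\):

\[ \sigma_c = \frac{F}{A} = \frac{4F}{\pi d^{2}} \]

For the 1 cm diameter samples used here, the cross-section is about 78.5 mm², so a failure load in newtons divided by that area gives the strength directly in MPa.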
Sample preparation for microhardness [Figure 1]
The samples for testing the microhardness were prepared similarly to the samples for compressive strength. The dimensions for the samples were 1 cm diameter and 5 mm height. The samples were mounted on Vickers microhardness testing machine, and three indents were taken at three different points for each sample, followed by measurement of the Vickers hardness number at these points.
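As background (the standard Vickers definition, not a result of this study), the hardness number returned by the machine follows from the test load \(F\) and the mean length \(d\) of the two indent diagonals:

\[ HV = \frac{2F\sin(136^{\circ}/2)}{d^{2}} \approx 1.8544\,\frac{F}{d^{2}} \]

with \(F\) in kgf and \(d\) in mm; averaging the three indents per sample smooths out local surface variation.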
Statistical analysis
The data collected were tabulated and subjected to statistical analysis using the Statistical Package for the Social Sciences, version 20 (IBM SPSS Statistics). Means and standard deviations were calculated for each group and analyzed using Student's t-test for the equality of means and Levene's test for the equality of variances.
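A hypothetical R equivalent of this workflow (SPSS itself was used in the study); `gic` and `cention` are assumed numeric vectors holding the five measurements per group for one property:

```r
library(car)   # provides leveneTest()

df <- data.frame(value    = c(gic, cention),
                 material = factor(rep(c("GIC_IX", "Cention_N"), each = 5)))

leveneTest(value ~ material, data = df)  # Levene's test: equality of variances
t.test(gic, cention, var.equal = TRUE)   # Student's t-test: equality of means
```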
RESULTS
The shear bond strength, compressive strength, and microhardness of GIC Type IX and Cention N are shown in Table 1. The results suggest that the values for shear bond strength of Cention N are statistically highly significant (P < 0.01) as compared to GIC Type IX. Furthermore, the compressive strength and microhardness values of Cention N are statistically significantly higher (P < 0.05).
DISCUSSION
The long-used, economical basic filling materials, i.e. amalgam and glass ionomers, both remain popular under particular dental circumstances. Numerous direct filling materials are available to the modern dental practice, from amalgams through to modern bulk-fill composites. [9] Amalgam materials were first introduced to Western dentistry in the 19th century. Amalgams offer unparalleled longevity and strength but are coupled with poor esthetics and controversial ingredients. [5] However, the longevity of the restoration is no longer the primary factor in selecting a restorative material. Esthetics also plays an integral part in selecting a restorative material. Along with this, tooth preparation has now shifted from conventional to minimal intervention. Coupled with the increasing rate of avoidance of dental amalgam because of its mercury content and the excessive replacement of serviceable amalgam restorations, amalgam has lost popularity as a restorative material. [10] GIC systems have become important dental restorative materials for use in children as they are easy and practical to use, leach fluoride, adhere to tooth structure, require conservative preparation, and undoubtedly offer better esthetics compared to amalgam.
Thus, the quest has always been for a real alternative to amalgam or GIC that is cost-effective, fluoride-releasing, quick and easy to use without complicated equipment, and that offers both strength and good esthetics.
Cention N, a tooth-colored, basic filling material for direct restorations, is self-curing with optional additional light-curing. It is available in the tooth shade A2, is radiopaque, and releases fluoride, calcium, and hydroxide ions. [5] The clinical success of a restorative material depends on good adhesion to the dentinal surface so as to resist the various dislodging forces acting within the oral cavity. Shear bond strength is clinically important for a restorative material because the major dislodging forces at the tooth-restoration interface have a shearing effect. [11] Therefore, higher shear bond strength implies better bonding of the material to the tooth. The results of the present study suggest that the shear bond strength of Cention N was higher than that of GIC Type IX. Manuja et al. suggested that GIC Type IX has the lowest shear bond strength values when compared with giomer, ormocer-based composite, and nanoceramic restorative material. [11] The compressive strength is an important property of restorative materials, particularly in the process of mastication. The results of the present study indicate that Cention N has compressive strength values significantly higher than GIC Type IX. Sadananda et al. in their study reported high compressive strength and flexural strength values on comparing Cention N and GIC. [12] The higher values for Cention N could be attributed to the fact that monomers together with initiators, catalysts, and other additives form the reactive part of a resin-based restorative. The strong mechanical properties and good long-term stability can be attributed to the combination of UDMA, DCP, an aromatic aliphatic-UDMA and PEG-400 DMA, which interconnects (cross-links) during polymerization. UDMA is the main component of the monomer matrix. It exhibits moderate viscosity and yields strong mechanical properties. The highly cross-linked polymer structure is responsible for the high flexural strength. [5] Hardness is the resistance of a material to indentation or penetration. It has been used to predict the wear resistance of a material and its ability to abrade or be abraded by opposing tooth structures. In the present study, it was also seen that the Vickers hardness number of Cention N was higher than that of GIC Type IX. The increased microhardness of Cention N is probably related to the nanoparticle size of the inorganic filler. It includes a special patented filler (partially functionalized by silanes) which keeps shrinkage stress to a minimum. This isofiller acts as a shrinkage stress reliever which minimizes shrinkage force, whereas the organic/inorganic ratio, as well as the monomer composition of the material, is responsible for the low volumetric shrinkage. [13] Along with the high strength, other properties such as the dual-cure mechanism, fluoride ion release, calcium and hydroxide ion release, low polymerization shrinkage, and the capacity to remineralize make Cention N a preferred restorative material in pediatric dentistry.
CONCLUSION
The results of the present study indicate significantly higher values for the mechanical properties of Cention N as compared to GIC Type IX, thus recommending its use as a restorative material for pediatric dental patients. Further in vivo studies are, however, required to authenticate it as an ideal restorative material.
"year": 2020,
"sha1": "7000d367363dc788bcf2114b93327c1d1ed0b169",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8095699",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5e82292e94e6976e70f9c8db276ba90104674f64",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Connecting heterocycles via a catalyzed lithiation
Recent results in the field of the arene-catalyzed lithiation of different heterocycles are presented in this account. This process allows the transformation of several heterocyclic systems into a series of functionalized organolithium compounds by a regioselective ring-opening of the heterocycle. The further reaction of the mentioned organolithium intermediates with different electrophiles affords, after hydrolysis, the corresponding functionalized molecules. Some of these products, derived from carbonyl compounds, can be easily cyclized again to give a new series of heterocycles in which the electrophilic fragment has been incorporated into the corresponding starting heterocycle.
Introduction
Functionalized organolithium compounds [1] can be achieved following classical procedures (i.e., halogen-lithium exchange or tin-lithium transmetallation) [2] or, in some cases, through new methodologies, among them the reductive opening of different appropriate oxygen-, nitrogen- and sulfur-containing heterocycles. [3] The interest of functionalized organolithium compounds lies in their applicability in organic synthesis, due to the fact that, by reaction with electrophiles, polyfunctionalized molecules are obtained in a single synthetic operation. Lithium metal itself, or lithium in the presence of a stoichiometric or catalytic amount of an arene [naphthalene, 4,4'-di-tert-butylbiphenyl (DTBB), biphenyl and 1-(N,N-dimethylamino)naphthalene being the most commonly used], has been used as the lithiating reagent in the reductive opening of heterocycles. [3] Only small heterocycles (three- and four-membered rings), which are prone to release their strain energy, and heterocycles with activated bonds can undergo a reductive opening lithiation. For instance, benzylic carbon-oxygen bonds are susceptible to reductive cleavage by means of a lithiating reagent to generate benzylic organolithium compounds through a SET process. Phthalan (1, n = 1) is a special kind of cyclic benzyl ether and is opened reductively with an excess of lithium in the presence of a sub-stoichiometric amount of DTBB [4] or naphthalene [5] to give the dianionic intermediate 4. Thus, after a first electron transfer to phthalan 1, the radical anion 2 is formed, and it decomposes to give a more stable radical anion 3, which, after a second electron transfer, leads to the dilithium derivative 4 in almost quantitative yield. The reaction of 4 with different electrophiles allows the preparation of functionalized alcohols 5 (Scheme 1). The intermediate 4 has also been transformed into the corresponding functionalized organozinc derivative by a lithium-zinc transmetallation process with zinc bromide, and its reaction with allylic bromides, [6] aryl halides in the presence of palladium, [7,8] electrophilic olefins [9,10] and acylating reagents was studied. Diols 5, derived from the reaction of intermediate 4 with carbonyl compounds (E+ = R1R2CO), are easily cyclized under acidic conditions to give the corresponding six-membered benzo-condensed cyclic ethers 6 (Scheme 1). The same process starting from isochroman (1, n = 2) leads to functionalized alcohols 5 with n = 2 and seven-membered heterocycles 6. [11]
Scheme 1
With these antecedents, we considered it of interest to study the reductive opening lithiation of different heterocycles with benzylic carbon-oxygen and carbon-sulfur bonds. In addition, the regiochemistry of the reductive opening lithiation of different non-symmetrical phthalan derivatives by arene-catalyzed lithiation would also be studied, in order to determine how the aromatic moiety of these compounds affects the process.
Lithiation of 2,7-dihydrodibenzothiepin
The treatment of 2,7-dihydrodibenzothiepin (7) with an excess of lithium and a catalytic amount of DTBB at -78 ºC leads to the intermediate 8, which reacts with carbonyl compounds to give the corresponding alkoxides 9 and, after acidic hydrolysis, the sulfanyl alcohols 10. However, when alkoxides 9 are stirred at room temperature in the presence of an excess of the lithiating mixture, the remaining benzylic carbon-sulfur bond is cleaved, leading to new intermediates 11, which after reaction with a second electrophile and final hydrolysis with water lead to polyfunctionalized compounds 12 (Scheme 2). [12,13]
Lithiation of 2,7-dihydrodinaphthoxepine and -thiepine
The reaction of 2,7-dihydro-3H-dinaphtho[2,1-c:1',2'-e]oxepine (13, Y = O) and -thiepine (13, Y = S) with an excess of lithium and a catalytic amount of DTBB, under the same reaction conditions shown in Scheme 1 (THF, -78 ºC), gave, after treatment with different electrophiles at the same temperature and final hydrolysis, the corresponding compounds 17, resulting from a double condensation at both benzylic positions involving the intermediate 16 (Scheme 3). In contrast to the behavior observed for the starting material 7, in the case of compounds 13 it seems that after the first reductive ring-opening, the organolithium intermediate 14 initially formed suffers a rapid second lithiation to give the dilithium compound 16, which can survive under the tested conditions until the addition of the electrophile. In order to avoid the mentioned second lithiation we used the less active stoichiometric version of the arene-promoted lithiation. Thus, treatment of the starting materials 13 with a THF solution of lithium naphthalene (1:2.2 molar ratio) in THF at -78 ºC, followed by reaction with an electrophile at the same temperature gave, after hydrolysis under acidic conditions, the corresponding monosubstituted products 15 (Scheme 3). Chiral starting materials 13 are accessible from commercially available (R)- or (S)-binaphthol (>99% ee); thus, applying this methodology it is possible to prepare enantiomerically pure binaphthyl derivatives of general structures 15 and 17.
Scheme 3
Lithiation of 1H,3H-benzo[de]isochromene
The reaction of 1H,3H-benzo[de]isochroman (18) with an excess of lithium (1/10 molar ratio) in the presence of a catalytic amount of DTBB (5 mol%) in THF at -50 ºC led, after 6 h, to a solution of the dianion 19, which reacted with different electrophiles at the same temperature for 15 min yielding, after hydrolysis with water, the expected functionalized alcohols 20 (Scheme 4).
A second lithiation took place, leading to the dianionic intermediate 22, when 19 reacted with a carbonyl compound as the first electrophile and the resulting alcoholate was stirred at 0 ºC for 2 h in the presence of the excess of the lithiating agent. The reaction of 22 with a second electrophile followed by hydrolysis with water led to the corresponding difunctionalized products.
Lithiation of 1,3-dihydronaphtho[1,2-c]furan
The starting 1,3-dihydronaphtho[1,2-c]furan (24) was prepared from commercially available 1,2-dimethylnaphthalene in only two steps and in 49% overall yield. The reaction of compound 24 with an excess of lithium (1/10 molar ratio) in the presence of a catalytic amount of DTBB (5 mol%) in THF at temperatures ranging from -78 to -50 ºC for 3 h, followed by addition of different electrophiles [H2O, t-BuCHO, Me2CO, (EtO)2CO] at -78 ºC and final hydrolysis, led to a mixture of functionalized alcohols 27 and 28 (Scheme 5). A 6:1 mixture (based on the study of the 1H NMR spectrum of the crude product) of alcohols 27 (E = H) and 28 (E = H) was obtained when H2O was used as the electrophile. In the other cases, only products 27 were detected in the reaction crudes (NMR). Two reductive cleavages in the starting heterocycle 24 can occur under these reaction conditions: the major one leads to intermediate 25 and the minor one to intermediate 26, through the two possible benzylic carbon-oxygen bond cleavages. Diols 27 (E = R1R2COH) and a lactone derived from intermediate 25 were the only reaction products isolated and characterized when t-BuCHO, Me2CO and (EtO)2CO were respectively used as electrophiles (Scheme 5). [16]
Lithiation of 1,3-dihydrofurophthalan
The reaction of 1,3-dihydrofurophthalan 41 with an excess of lithium (1/10 molar ratio) in the presence of a catalytic amount of DTBB (2.5 mol%) in THF at -78 ºC for 30 min and then for 2 h at 0 ºC, followed by addition of H2O and benzaldehyde as electrophiles at -78 ºC and final hydrolysis, led to a mixture of functionalized alcohols 44 and diols 45 in a regioselective manner (Scheme 7). According to these results, the intermediates 42 and 43 are involved in this process. Thus, after reductive cleavage of compound 41 (the four benzylic carbon-oxygen bonds are equivalent), the dianion 42 initially formed undergoes a second and selective reductive cleavage leading to the dialkoxide 43. When the reaction is performed for a longer reaction time or at higher temperatures in order to complete the transformation of intermediate 42 into 43, yields become significantly lower and variable amounts of 1,2,4,5-tetramethylbenzene are detected by GC/MS (Scheme 7).
Lithiation of halophthalans
The treatment of halophthalans 46, 49 and 52 with a THF solution of lithium naphthalene (1:2.1 molar ratio) in THF at -78 ºC (0 ºC for the fluoro derivative 52), followed by reaction with an electrophile at low temperature gave, after hydrolysis with water, compounds 48, 51 and 54, respectively (Scheme 8). Halogen-lithium exchange took place in the case of the bromo and chloro derivatives 46 and 49, respectively, leading to the aryllithium intermediates 47 and 50; however, 4-fluorophthalan (52) undergoes selective reductive cleavage at the C(3)-O benzylic carbon-oxygen bond, leading to the dilithium derivative 53 (Scheme 8). [17] This completely different behavior can be explained taking into account halogen-carbon bond energies, because the fluorine-carbon bond is stronger than the chlorine- and bromine-carbon bonds.
Regiochemistry of the reductive opening lithiation of substituted phthalans
The reductive cleavage at the benzylic carbon-oxygen bond in phthalan derivatives 24 (Scheme 5), 29, 32, 35, 38 (Scheme 6) and 52 (Scheme 8) takes place at the position bonded to the carbon of the aromatic ring with the higher electron density in the intermediate anion radical (one-electron transfer) or dianion (two-electron transfer). [18] The semi-empirical PM3 calculations of the Mulliken charges of the dianions 55-60, resulting from a two-electron transfer to compounds 24, 29, 32, 35, 38 and 52, respectively, are shown in Chart 1. The reductive cleavage in these dianions occurs predominantly at the oxygen-benzylic carbon bond which is attached to the aromatic carbon atom with the highest electron density. This statement is true in the case of all the previously mentioned phthalan derivatives but not for the methoxy derivative 35 (dianion 58, Chart 1). Thus, taking into account the electron density both in the dianion and in the radical anion (which are in agreement), it is possible to explain the regiochemistry of the reductive opening lithiation of substituted phthalans. [16]
Scheme 9
Concerning a possible mechanistic pathway for the formation of compounds 63 and 21, it is possible that, in the first step, a benzylic cleavage takes place, giving dianionic intermediates of the types I and IV, respectively. These intermediates could afford either (a) directly the di-alcoholates II and V, which are the precursors, after hydrolysis with water, of the final diols 62 and 65, respectively, or (b) complexes of types III and VI between a benzylic dianion and a carbonyl compound generated by an elimination from intermediates I and IV, respectively (Chart 2).
"year": 2008,
"sha1": "0bd02f3789fccabf2645bf9e53455d7c5335e034",
"oa_license": "CCBY",
"oa_url": "https://www.arkat-usa.org/get-file/25976/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7351bd538a3407df33cd764bf8f8dd12886184bc",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Peripheral neuropathy in patients with multiple myeloma: molecular effects of bortezomib
Multiple myeloma (MM) is a B cell neoplasm characterized by uncontrolled growth of malignant plasma cells within the bone marrow. The introduction of new treatment regimens and medicinal substances, particularly proteasome inhibitors (e.g. bortezomib or carfilzomib) and immunomodulatory drugs (e.g. lenalidomide and pomalidomide), has radically changed MM therapy.
Introduction
Multiple myeloma (MM) is a B cell neoplasm characterized by uncontrolled growth of malignant plasma cells within the bone marrow (BM). These cells are ordinarily able to produce monoclonal proteins. MM constitutes 1% of all neoplasms and 10% of all hematological malignancies [1]. The American Cancer Society estimates that 34,920 new cases of MM and 12,410 MM-related deaths will occur in 2021. MM is one of the most intractable malignancies and is characterized by the infiltration and growth of malignant plasma cells in the BM [2].
The second mechanism underlying the malignant transformation of plasma cells is hyperdiploidy, which is observed in approximately 55% of MM patients. For unknown reasons, odd-numbered chromosomes, such as 3, 5, 7, 9, 11, 15, 19 and 21, are increased in hyperdiploidy. The most prevalent hyperdiploidy (c. 30%) is trisomy 11, which may cause cyclin D1 overexpression due to an increase in gene dosage [10]. The pathogenesis and survival time of patients is very heterogeneous. The introduction of new treatment regimens and medicinal substances, particularly proteasome inhibitors [e.g. bortezomib (BTZ) or carfilzomib] and immunomodulatory drugs (e.g. lenalidomide and pomalidomide, and monoclonal antibodies), has radically changed MM therapy by improving the response rate and progression-free survival. State-of-the-art chimeric antigen receptor (CAR) T-cell immunotherapy uses mechanisms other than basic MM therapies. The CAR-T method involves the modification of patient or donor T cells to target specific cell surface antigens. The results of the latest clinical trials with anti-BCMA CAR-T lymphocytes have shown that patients with relapsed and/or refractory MM can achieve an objective response [11].
Unfortunately, similarly to the vast majority of drugs, those used in the treatment of MM also have a specific spectrum of side effects. One of the most important clinical problems seems to be chemotherapy-induced peripheral neuropathy (CiPN), mainly owing to the symptom frequency, inconvenience for patients and dose-limiting effects [12]. CiPN occurs at varying severities during therapy, and its symptoms are observed in c. 40% of MM patients with BTZ treatment [13] and up to 70% with long-term thalidomide treatment [14]. The incidence of CiPN depends on the dose, schedule and method of administration [15,16].
The degree of neuropathy is determined according to various scales. The most commonly used is the National Cancer Institute Common Terminology Criteria for Adverse Events (NCI-CTCAE) [18]. This scale includes three types of neuropathy: a) sensory, b) autonomic-sensory, and c) sensorimotor. Moreover, in each type of neuropathy, its degree can be determined depending on the severity of symptoms (where 0 means no symptoms and 4 means permanent functional impairment) [19].
To elucidate the pathogenesis of CiPN, global research has focused on several areas, as shown in Figure 1.
This review focuses on the pathophysiology of CiPN based on the latest scientific data and our own research.
Pathophysiology of BiPN
Bortezomib is a boron-containing organic compound that specifically and reversibly inhibits the chymotrypsin-like activity of the 26S proteasome. Inhibition of proteasome activity by BTZ disrupts the processes necessary for proper functioning, which consequently leads to cell death [20]. The mechanisms of action of BTZ are disruption of the cell cycle, induction of apoptosis, disturbance of the bone marrow microenvironment, and inhibition of nuclear factor kappa B (NFκB) [21].
One of the first studies on the mechanisms of bortezomib-induced neurotoxicity was conducted by Cavaletti et al. in 2007 [22] using a rat model. Studies have shown that BTZ causes disturbances in satellite cells and Schwann cells of the sensory nerves. Meregalli et al. [23] proved that the drug also affects synapses and causes unmyelinated C-fiber axonopathy. BTZ cytotoxicity is also attributed to disturbances in cellular calcium homeostasis as a consequence of abnormal mitochondrial function [24]. The accumulation of Ca2+ ions in mitochondria causes rupture of the outer membrane and then the release of mitochondrial proapoptotic factors into the cytosol [25,26]. In addition, the downregulation of genes responsible for calcium metabolism, such as ITPR1 and Car8, may have a significant impact on the functioning of the nervous system, including the excitability of neurons, the growth of neurites and the release of neurotransmitters [27]. Protein neuroprotective factors, especially neurotrophins (NTs), play a special role in the context of nerve cell homeostasis. The family of classic neurotrophins includes nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin 3 (NT-3) and neurotrophin 4/neurotrophin 5 (NT-4/5). These proteins are synthesized and released mainly by nerve cells [28,29] but also by muscle [30], endothelium [31,32], spleen, adipose tissue, liver, lung and hematopoietic cells [33-35]. Neurotrophins influence the proliferation, differentiation, viability and death of neuronal and non-neuronal cells. Due to the significant influence of NTs on the nervous system, lowering their concentration in tissue may contribute to the development of neuropathy [36,37]. This hypothesis seems to be confirmed by Azoulay et al. [38], who described decreased BDNF concentrations in the plasma of patients with MM and symptoms of BiPN relative to patients treated with the same regimen but without symptoms of BiPN.
Peripheral neuropathy is associated with an increase in reactive oxygen species and a decrease in endogenous antioxidants [39]. BTZ inhibits the actions of the proteasome, which causes the accumulation of misfolded proteins that would be degraded under physiological conditions. Consequently, subsequent protein folding attempts generate high levels of reactive oxygen species (ROS) [40]. Thus, the development of BiPN may be related to mitotoxicity in primary axons (PNSAs) resulting from reduced mitochondrial bioenergetics. This association is confirmed by the fact that the development of mechano-hypersensitivity induced by BTZ is prevented by the administration of MnTE-2-PyP(5+), which belongs to the group of peroxynitrite decomposition catalysts (PNDCs, i.e. compounds with redox activity that detoxify peroxynitrite by catalyzing its isomerization or reduction to nitrates or nitrites). In addition, the action of BTZ is related to the nitration and inactivation of superoxide dismutase in the mitochondria and a meaningful decrease in adenosine triphosphate (ATP) production [41].
BTZ also causes higher proteotoxic stress, associated with increased expression of heat shock proteins, reduced mitochondrial membrane potential, and K48-linked protein ubiquitination. Furthermore, BTZ downregulates the content of mitochondrial oxidative phosphorylation complexes, uncoupling protein 2 (UCP2) and voltage-dependent anion channel 1 (VDAC1) [42].
Proinflammatory cytokines are another area of research that may bring us closer to solving the problem of the pathomechanism of BiPN [43]. One of the most extensively studied proinflammatory factors is tumor necrosis factor alpha (TNFα). Zhao et al. [44] showed that during the administration of BTZ to rats, the expression of TNFα was significantly increased. Another study confirmed that the expression of TNFα was upregulated in the dorsal root ganglia after treatment with BTZ in a mouse model [45]. Furthermore, the same study showed increased expression of other proinflammatory cytokines, such as interleukin (IL) 6, transforming growth factor β1 (TGF-β1) and IL-1β, in the dorsal root ganglia, which was directly related to the administration of BTZ [45].
An increasing number of reports have focused on the influence of BTZ on gene expression and epigenetic mechanisms. Although BTZ contributes to the inhibition of tumor progression, it also causes disturbances in cells that lead to the development of BiPN and other side effects such as thrombocytopenia, neutropenia or anemia. The activity profile of BTZ includes damage to DNA strands and inhibition of repair and replication processes and the cell cycle [46].
Epigenetics describes inherited gene expression mechanisms that are not dependent on changes in DNA sequences and provide diversity in the functioning of cells based on identical genetic material. Epigenetic mechanisms include histone modification, DNA methylation, miRNA-based gene regulation, and monoallelic gene expression (parental imprinting, inactivation of the X chromosome) [47]. Fernández de Larrea et al. [48] demonstrated a relationship between the degree of total DNA methylation and the survival time of patients with relapsed MM who received treatment regimens based on BTZ. Patients with total DNA methylation >3.95% achieved longer overall survival (OS). In addition, patients with a relatively low percentage of methylation (<1.07%) of the NFKB1 gene also showed longer overall survival after BTZ therapy [48]. Epigenetic mechanisms also include the regulation of gene expression by small single-stranded noncoding microRNAs (miRNAs). During BTZ therapy, a decreased level of Let-7f has been observed, which promotes vascular neoplastic processes by lowering the expression of genes responsible for antiangiogenic effects [49]. Administering anti-Let-7f enhances apoptosis and reduces the proliferation rate of established MM cell lines [50].
Moreover, BTZ induces changes in the expression of miRNA molecules whose target genes are involved in inhibiting the development of cancer cells or in the functioning of the nervous system. For example, miRNA-181, miRNA-20a, miR-342-3p, miR-128, miR-17-92 and miR-29b regulate genes involved in the processes of neurogenesis and neuronal differentiation, and their plasma concentrations are significantly lowered during BTZ therapy, while the level of miRNA-34a is elevated, which results in inhibition of BDNF expression and activation of neuronal apoptosis [51].
Currently, our research group is focused on gene expression and epigenetic changes that may influence the development of BiPN, an area which has not been well explored. We have shown changes at the molecular level that may contribute to inhibiting the development of both cancer [52] and neuropathy [53]. Two representative established cell lines, a) SH-SY5Y neuroblastoma cells and b) a PC12-derived nerve cell line, were used in these studies. Cells were treated with BTZ (50 nM/L) for 24 h, and global gene expression and miRNA expression were analyzed using genome-wide RNA and miRNA microarray technologies. Studies have shown that BTZ might exert toxic effects on both neuroblastoma cancer and PC12 nerve cells and regulate miRNA/mRNA interactions that affect important cellular functions. BTZ has been shown to exert a meaningful inhibitory effect on the proliferation (TFAP2B, PEG10) and apoptosis (HSPA1B, CLU, HMOX1) of human neuroblastoma cells. These mechanisms could be responsible for the advantages of using BTZ for cancer treatment. In contrast, in nerve cells, BTZ primarily inhibits the cell cycle (Bex2, Cdk1b, Lin9), DNA repair processes (Top2a, TopBP1, Lig1, Ercc6), neuronal morphogenesis (Egfr, Bmp7, Ilk), and neurotransmitter secretion (Syt1, Cacna1b, Lin7a). The obtained outcomes show differences in the major metabolic pathways and biological processes that are disturbed as a result of the action of BTZ in cancer and nerve cells.
In subsequent studies, we revealed a significant effect of the immune response in myeloma patients on the development of CiPN. We observed increases in the levels of proinflammatory cytokines (CCL2, IL-1β, IFN gamma, properdin) and complement proteins (complement 9, factor D) at both the transcript and protein levels. In addition to understanding the pathogenesis of BiPN, an important goal is identifying biomarkers for faster diagnosis of neuropathy. Our recent studies have identified miR-22-5p as a potential marker of CiPN in patients with MM.
Resistance to bortezomib in multiple myeloma
The development of resistance to BTZ in MM patients is a serious therapeutic problem. Current scientific reports show the involvement of PSMB5 mutations and proteasome subunit upregulation, as well as changes in protein and gene expression in response to cell survival, stress, and antiapoptotic pathways, in the development of resistance to BTZ [54,55]. The epigenetic changes triggered by BTZ may contribute to the development of resistance. Class I histone deacetylases (HDACs) determine the sensitivity to proteasome inhibitors, and the histone methyltransferase EZH2 alters the transcription of antiapoptotic genes during the acquisition of cell adhesion-mediated drug resistance (CAM-DR) by myeloma cells. In addition, the histone methyltransferase MMSET has been shown to confer drug resistance to myeloma cells, thereby facilitating DNA repair [56].
Additional research by our group in this area focused on analyzing the methylation profile following exposure of neuroblastoma cells to BTZ. The study consisted of treating neuroblastoma cells with BTZ for 24 hours and then leaving them for 12 days (in medium without BTZ) to examine the methylation profile in the daughter cells and assess the extent of proliferation after subsequent doses of BTZ. The obtained results showed that BTZ induced marked genome-wide methylation changes, manifested by hypermethylation of genes that were hypomethylated in control cells and a decrease in the degree of methylation of hypermethylated genes. The observed changes mainly concerned cancer pathology pathways.
The consequence of these changes may be bypassing the primary antitumor activity of BTZ and developing a treatment-resistant phenotype. To investigate the acquisition of a proliferative phenotype, cells that had recovered after the first round of BTZ treatment were treated three times. Repeated treatment led to the induction of an unusual cell proliferation potential that increased with subsequent treatments (Figure 2) [57].
Conclusion
The pathogenesis of BiPN is still extremely unclear, and its development involves many molecular mechanisms. A relatively new area of research in this field is focused on the epigenetic mechanisms that may constitute the basis for the development of PN due to the global regulation of gene expression in many processes. Thorough elucidation of the mechanisms responsible for the development of BiPN will allow us to reduce or eliminate this side effect and improve the quality of life of patients.
Figure 2A, B. MTT test results showing induction of an unusual cell proliferation potential that increased with subsequent treatments.
"year": 2021,
"sha1": "b678aa92899338c12a0421f7a6f785449a3910ce",
"oa_license": null,
"oa_url": "https://journals.viamedica.pl/acta_haematologica_polonica/article/download/AHP.2021.0071/63947",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c572eeae6ccd5e6beb90314051e7e582741b8f2c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Use of the Purse-String Suture to Conservatively Manage a Cornual Ectopic Pregnancy
We report the successful management of a 31-year-old female, treated by cornual wedge resection. The patient suffered from vaginal spotting and lower abdominal pain. Transvaginal ultrasonography revealed a 4-5 cm right cornual pregnancy and beta-human chorionic gonadotropin was measured to be 614.7 IU/L. This ectopic pregnancy was removed via a laparotomy with cornual wedge resection and right salpingectomy.
Introduction
Cornual (interstitial) pregnancy is a rare form of ectopic pregnancy and accounts for only 2%-4% of all tubal pregnancies, yet has a maternal mortality of 2%-2.5% [1]. A cornual pregnancy is a pregnancy that is implanted in the proximal part of the fallopian tube, lying with the muscular wall of the uterus [2]. This section of the fallopian tube is thick and highly vascularized; thus, rupture presents later and with more severe bleeding compared to those of other ectopic pregnancies, leading to catastrophic hemorrhage [3]. The typical rupture of cornual pregnancies usually occurs later than nine weeks and can occur as late as 20 weeks [4]. The mortality rate for a cornual pregnancy is seven times greater than that of other forms of ectopic pregnancy [5]. Risk factors for cornual pregnancy include history of ectopic pregnancy, rudimentary horn, in vitro fertilization, and ipsilateral salpingectomy [1,6]. Clinical findings specific for ectopic pregnancy include the absence of an intrauterine gestational sac and beta-human chorionic gonadotropin (hCG) levels higher than 1,500 mIU per mL [7]. Transvaginal ultrasound (TVUS) has been the mainstay tool used to diagnose interstitial pregnancies (IPs), while MRI may be used in patients who are clinically stable and whose diagnosis remains unclear despite having a TVUS. Historically, IPs have been managed with the use of wedge resection by either laparoscopic or open surgery. Hysterectomies have also been used to manage IPs [8]. Current management of cornual pregnancies is less invasive, and centered on limiting hemorrhage and improving long-term fertility and obstetric outcomes. We report a case of a 31-year-old woman, who was diagnosed preoperatively with a cornual pregnancy via TVUS and a positive beta-hCG. This ectopic pregnancy was removed via a laparotomy with cornual wedge resection and right salpingectomy. The encircling suture method was used to remove the ectopic pregnancy, which has been shown to be simple, safe, effective, and nearly bloodless [9].
Case Presentation
This is a case of a 31-year-old Hispanic female, gravida 4 para 0 aborta 3, at five weeks gestation with no significant past medical history, who presented to the emergency department (ED) after her initial prenatal visit revealed the possible presence of a cornual pregnancy on ultrasound (Figure 1). Her physician sent her to the ED for emergent removal of the ectopic pregnancy. Her chief complaint was vaginal spotting and lower abdominal pain for the duration of one day. Clinically, the patient appeared hemodynamically stable, with a heart rate of 81 bpm and blood pressure of 130/81. Quantitative beta-hCG was 614.7 IU/L, and TVUS revealed an anteverted uterus that appeared grossly homogeneous. The endometrial stripe measured up to 7 mm in thickness, with no gestational sac noted within the midline uterine fundus or body. In the right adnexa there was a 4.9 x 4.2 x 2.2 cm mass with a thick rind, separate from the ovary. The location and appearance were concerning for a cornual pregnancy/IP given the positive beta-hCG and empty uterine cavity. The right ovary measured approximately 3.5 x 2.1 x 2.8 cm and contained a simple follicle. The left ovary measured 2.9 x 1.5 x 1.4 cm and appeared grossly normal. No pelvic free fluid was identified. The patient underwent emergency removal of the ectopic pregnancy.
Treatment and follow-up
We identified a 4-5 cm cornual ectopic pregnancy (Figure 2). The fimbria and round ligament were cut on the right side, completing a total right salpingectomy to avoid recurrence, and a Vicryl 0 suture was placed in the upper part of the uterine artery three times for hemostasis. Another suture was placed medially, followed by further sutures anteriorly and posteriorly in the uterine wall, around the base of the ectopic pregnancy. The cornu was incised and the conceptus was extracted using an encircling suture around the base of the cornual pregnancy (Figure 2A). The encircling suture was tied to produce a tourniquet effect. While tension was kept on the knot, electric cauterization was used to incise the cornua and remove the conceptus. This procedure of using encircling sutures to produce a tourniquet around the ectopic pregnancy leads to secure hemostasis. The patient's estimated blood loss was less than 75 mL. It is routine that, after surgery, hCG should be measured multiple times at different time points to ensure the efficacy of the therapy. On postoperative day 2, her quantitative beta-hCG was measured to be 65.9 IU/L. The patient followed up with her provider for post-operative visits and quantitative beta-hCG measurements.
Discussion
Cornual pregnancies represent 2%-4% of all ectopic pregnancies [1]. The three diagnostic criteria for cornual pregnancy described by Timor-Tritsch et al. include (1) an empty uterine cavity, (2) a chorionic sac seen separately and >1 cm from the most lateral edge of the uterine cavity, and (3) a thin myometrial layer surrounding the gestational sac [10]. The findings of our patient were consistent with all of these criteria. Rupture of a cornual pregnancy may result in intra-abdominal bleeding, hence the urgency of treatment. However, treatment of this clinical presentation still raises the concern of severe hemorrhage due to the highly vascularized region of cornual pregnancies and later time of diagnosis [3]. Management of a cornual pregnancy is dependent on the size of the ectopic pregnancy. A cornual pregnancy of medium size (<5 cm) can be managed conservatively with methotrexate if there are no contraindications, such as intra-abdominal bleeding and concomitant intrauterine pregnancy. However, treatment with methotrexate has been associated with a failure rate as high as 65% [1,8,11,12,13]. Large cornual pregnancies of 5 cm or larger should be managed surgically due to increased risk of rupture. This patient's cornual pregnancy of 4-5 cm was concerning for rupture and thus was managed surgically through the use of a tourniquet purse-string suture. This technique not only aids in excision and minimizes blood loss, but also preserves fertility [14].
Conclusions
Early diagnosis and treatment are of particular importance in managing cornual pregnancies because of the risk of rupture. Compared with other ectopic pregnancies, rupture of a cornual pregnancy presents later and with more severe bleeding, potentially leading to catastrophic hemorrhage. Management of cornual pregnancies is therefore centered on limiting hemorrhage. The tourniquet purse-string suture technique aids in removal of the pregnancy while minimizing blood loss and preserving the patient's fertility.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-05-04T22:06:06.688Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "90a3535689f4e638a88f4ec4302d18732f0f60d5",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/52036-use-of-the-purse-string-suture-to-conservatively-manage-a-cornual-ectopic-pregnancy.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e3b3d90ac817e99283577d2ce76ac369fe4a293",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248253331 | pes2o/s2orc | v3-fos-license | Distance learning in a pandemic: the experience of sociological monitoring of students in Russia and Kazakhstan
The article analyses the experience of using distance learning technologies in higher educational institutions of Russia and Kazakhstan in the context of the coronavirus pandemic. Before the start of the pandemic, the practice of using distance learning was limited in both the Russian and the Kazakhstani systems of higher education, despite the trend towards digitalization. The challenge of maintaining the continuity of the educational process meant that, in record time, teachers and students mastered a number of platforms enabling their online interaction. Several months of full-scale distance learning allowed students to form a stable opinion on the new form of education. Today we have empirical material covering the period of the coronavirus pandemic and can assess the possibilities of distance learning. The authors draw on specific case studies in which students from Ufa (Russian Federation) and Karaganda (Republic of Kazakhstan) took part. In general, it can be stated that at the beginning of the pandemic the majority of students in both Ufa and Karaganda were not satisfied with distance learning. However, the data from 2021 revealed the opposite trend and indicated an increase in respondents' satisfaction with distance learning. A complete transition to distance education is not a pertinent question, since the potential of the traditional form of education will remain in demand. The use of proven teaching technologies will guarantee the quality and efficiency of the educational process as a whole, whether the classes are digital or traditional.
In the 2000s, trends emerged all over the world in the transformation of public institutions and spheres of human activity under the influence of information and communication technologies. Progress became noticeable in the production and implementation of modern technologies, and an advanced information environment was formed, corresponding to the tasks of socio-economic development. States sought to provide all citizens with access to information resources. Preparing and ensuring the effective implementation of the transition to the digital economy also presupposed significant changes at all levels of the education system: it required expanding information skills, increasing computer literacy, and forming critical thinking. New educational standards emerged that established the formation of modern competencies, including in the field of working with information, as expected learning outcomes.
In connection with the development of information and communication technologies, distance learning has become an object of increased attention among researchers. In particular, an article by Gagarina and Koldaeva [1] is devoted to innovative educational technologies of distance learning, while the use of distance e-learning courses in higher education is studied by Leontyeva and Rebrina [2]. Over the years, the universities of the Russian Federation and the Republic of Kazakhstan were set the goal of creating an electronic information educational environment, and steps were taken in this direction by both teachers and students. However, the transition to distance learning only became the agenda in 2020 due to the global spread of the coronavirus pandemic. Internet resources made it possible not only to keep the educational process running, but also to transfer it to a more intensive and effective level. All those advantages of digital technologies that researchers had long been writing about were actualized. Thus, according to Shurygin and Krasnova, e-education enables students to develop the skills of independent work and, consequently, to increase its effectiveness [3]. Developing this position, Shabanov asserts that e-education stimulates the student to form an individual system of knowledge, skills and abilities [4]. Several researchers have pointed to the great potential of distance learning in specific academic disciplines. For instance, Seregina argued for the high efficiency of distance learning in mastering foreign languages [5]. A broader, comprehensive approach to the analysis of distance learning was proposed by Orlova and Koshkina [6]. Having reviewed the practice of distance education in European countries, the United States and Canada, the authors analysed in detail the Russian experience before the start of the coronavirus pandemic, in which the strengths and weaknesses of this technology were tied to the financial, technical, personnel and psychological characteristics of its application. Among the peculiarities of the Russian system of distance education the researchers noted students' habit of using printed publications, teachers relying only on their own courses, and the lack of equipment for creating high-quality distance learning systems. They stated that digital technologies were used as a "mediocre surrogate" for traditional education.
Indeed, the pandemic made a significant contribution to both the Russian and Kazakhstani higher education systems, as distance learning had not been properly set up before. The task of keeping the educational process operational led teachers and students to master a number of platforms: Zoom, BigBlueButton, Discord. The preparation of distance courses and PowerPoint presentations was also carried out intensively. There were other technical problems as well. In March 2020, when secondary school, college and university students in most countries of the world, including Russia and Kazakhstan, switched to distance learning due to the COVID-19 pandemic, problems with the population's access to high-quality Internet arose almost immediately. It was especially difficult for those living in rural areas and remote villages to adapt to the new reality. Emergency measures were necessary, as was reported, in particular, by Bagdat Musin, the Minister of Digital Development, Innovation and Aerospace Industry of the Republic of Kazakhstan. From March to September 2020, Kazakhstani telecom operators invested about 60 billion tenge in a fiber-optic network covering 1200 settlements of the republic. In total, according to the Ministry's Telecommunications Committee, there are 6,459 rural settlements in Kazakhstan, 4,646 of which already had Internet access in September 2020. No less ambitious processes for providing students with Internet resources took place in Russia.
After several months, students and teachers already had a clear opinion about the new form of education. Today we have empirical material covering the period of the coronavirus pandemic and can assess the possibilities of distance learning. To record the prevailing opinion on the issue under consideration, questionnaire surveys of students were conducted (May 2020, June 2021) using a single methodology and set of tools. In total, 290 (2020) and 215 (2021) university students in the Republic of Bashkortostan were interviewed. The data obtained make it possible to establish the main trends in students' understanding of distance learning in terms of its strengths and weaknesses.
The qualitative self-assessment of students in terms of academic success is as follows: 14.8% of the respondents consider themselves to be "A-level students", 66.8% define themselves as "straight-B students", 16% as "low-performing", and the remaining 2.4% as "weak students". In terms of computer proficiency, 17% of the respondents rated themselves as "excellent", 58.9% as "good", 21.7% as "satisfactory", and 2.4% as "poor".
The question of the advantages of distance learning is, in our opinion, the key one. The answers to this question made it possible to build a ranked list of "advantages" of distance learning, presented in Table 1. (The recoverable rows of Table 1, with 2020 and 2021 values, include: the efficiency of training sessions is higher than with face-to-face classes, 10% and 19%; the educational material is more informative than with the traditional form of education, 8% and 26%; there are no "advantages", 4% and 0%.) The data show that students see the benefits of distance learning mainly in extracurricular, infrastructural components. More than two thirds (67% and 80%) of the respondents note as the main advantage "saving time and money for travel and for meals during lunchtime"; around half of the respondents (42% and 54%) found that they had more time to communicate with their families. Only a quarter of respondents in 2020, but already 48% in 2021, admitted that their self-organization had improved. One in five in 2020 and half of respondents (49%) in 2021 found it easier to study. However, only 10% in 2020 and 27% of respondents in 2021 noted that their interest in learning had increased, and that the efficiency of training sessions became higher than with traditional education (19%). It is indicative that only 8% of respondents in 2020 and 26% in 2021 admitted that the educational material in distance learning had become richer in content. It is also noteworthy that 4% of respondents (2020) did not see any "advantages" of distance learning at all. Thus, the following conclusion suggests itself: the positive potential of distance learning as perceived by students in 2020 was very modest. This can be explained, in particular, by the fact that students (and teachers), due to the rapid spread of the coronavirus pandemic and the hasty transition to distance learning, did not have the opportunity to thoroughly prepare for the specifics of this educational format. However, the 2021 survey showed a sharp increase in the positive potential of distance learning in the eyes of students: for almost every question, the percentage of students who answered positively in 2021 exceeded the answers to the same question in 2020.
Nevertheless, the potential of distance learning has so far been revealed only to an insignificant extent, and its importance, in our opinion, will grow rapidly in the future, meeting the needs of a dynamic transformation of the entire education system in terms of digitalization.
It should be noted that, in comparison with the traditional form of education, distance learning has its drawbacks. They are ranked in descending order in Table 2. As follows from these data, in 2020 more than half of the students surveyed expressed a negative attitude towards distance learning for a very important reason: a perceived decrease in the quality of education. Two-thirds (67%) of those surveyed in 2020 and 43% in 2021 recognized the lack of "live" communication and feedback in the educational process as a disadvantage.
It is important to emphasize that 59% (32% in 2021) of respondents feel the need for face-to-face discussion of educational material with a teacher and fellow students. This is not accidental, because university education is, first of all, live communication between teacher and student, as well as among the students themselves. Live dialogue is the main component of learning, through which knowledge is acquired, rethought and renewed. According to a third of the students surveyed in 2021, the online format does not reproduce the social experience that is acquired only within the walls of the university.
In 2020, almost two-thirds of respondents (64%) noted the negative health impact of distance learning. They believed that the negative impact on vision and hearing had increased (since they use headphones in both online lectures and seminars), as well as on the spine from constant sitting in front of the computer. The psychological stress from the new learning technologies was also significantly high. However, adaptation to the new realities led to the fact that in 2021 the percentage of those who complained of deteriorating health dropped to 23%.
More than half of the respondents in 2020 (57%) suffered from low-quality Internet connections and obsolete computer equipment, and some experienced discomfort from the household noise of neighbors (41%). Poor sound and image quality also negatively affected the students' overall psychological well-being. For the above reasons, more than half of the students surveyed believed that learning material was more difficult to assimilate (54% in 2020) than with the traditional form of education. By 2021, the share of those dissatisfied dropped to 21%, largely due to improvements in the technical provision of both students and teachers. According to the respondents (39% and 36%, respectively), some teachers have a poor command of computer technologies, which additionally complicates the assimilation of educational material. In 2020, more than a third of students (39%) admitted that online education demobilized and discouraged them, and every fifth (21%) noted that teachers lowered their level of exactingness, which affected the quality of control over the material. In 2021, these indicators turned out to be significantly lower, which also testifies to the effectiveness of the work done in this direction.
The respondents' answers about the problems that arose in the implementation of distance learning only complement the main question. Thus, 74% of respondents in 2020 and 41.7% in 2021 noted an increase in the volume of unsupervised activities. In our opinion, this is due both to the very format of educational work and to the attempt of the teaching staff to insure against possible gaps in students' competencies. According to students, in the traditional form of education the volume of unsupervised tasks was noticeably lower. Nevertheless, students learned to cope with the growing volume of independent work, and the percentage of those concerned about this issue decreased. Among other problems, the following were named: health problems (55% in 2020 and 24% in 2021), mastering the material (49% and 13%), assessment of knowledge and accumulation of points (42% and 15%), and technical issues (38.7% and 37%). All this also speaks of a high level of adaptation to digital educational technologies. At the same time, 8% of students in 2020 and 33% in 2021 noted that "there were no special problems".
Comparing the numerical indicators of answers to the questions in 2020 and 2021, one can observe a sharp decrease in the number of students pointing to the negative effects of distance learning. In this regard, it can be argued that during the year of study in distance mode, students came to a deeper understanding of its specifics and advantages and showed growing loyalty to the new format.
The above conclusion correlates with the students' answers to the question of their choice between online and traditional forms of education. In 2020, more than half (55.6%) of the surveyed students chose the traditional form of education, and almost every tenth (9.1%) respondent picked distance education. In 2021, only 21.3% of respondents chose the traditional form, while 42.6% wanted to study via a virtual teaching environment, although a third of the respondents (35.3%) would prefer half of the educational process to take place in the traditional form and half in the distance form.
Further, the students expressed their opinion on the effectiveness of distance learning. Almost half of the students surveyed (49% in 2020 and 44% in 2021) rated it as average, 25% of students (2020) and 8% (2021) rated it as low, 15.2% of respondents (2020) and 41% (2021) as high, and 9.7% (2020) and 5.3% (2021) found it difficult to answer this question. Thus, in 2021, students of universities in the Republic of Bashkortostan began to rate distance learning more highly.
As for the quality of education received in the distance format, only 9.4% in 2020 and 15% in 2021 considered it higher than with traditional education; 35.1% and 55% (respectively) of students believed that it was commensurate with traditional education; and more than half of the respondents (55.6%) in 2020, but only 19.4% in 2021, believed that the quality of distance learning was lower than that of traditional training.
Fully satisfied with distance learning in 2020 were 25% of the respondents, and in 2021 already 56% of them; partially satisfied were 49% in 2020 and 36% in 2021; 20% of respondents were not satisfied in 2020, compared to 6.3% in 2021; and about 6% and 2% (respectively) found it difficult to answer.
Of interest are the answers to the question of what determines the quality of education. According to students, it depends more on the professionalism of teachers than on the form of education. For example, in 2020, 31.4% of respondents fully agreed and 43.4% partially agreed with the statement that the effectiveness of the educational process depends mainly on the professionalism of teachers. At the same time, 17.2% of respondents did not agree with this statement, and the rest found it difficult to answer (8%). The students were quite critical of themselves when answering the question about the connection between the effectiveness of the educational process and their personal motivation and attitude to learning. 41% of the respondents fully agreed and 40% partially agreed with the statement that everything depends on the student's attitude to study, his or her motivation and diligence, rather than on the form of education; 12.4% of the respondents disagreed, and the rest found it difficult to answer (6.6%).
In general, it can be stated that the majority of students in 2020 were not satisfied with distance learning. However, a similar survey in 2021 revealed the opposite trend and indicated an increase in students' satisfaction with distance learning. This phenomenon, in our opinion, can be explained by the fact that in 2020 students did not receive sufficient preparation for distance learning and experienced stress from the sharp and forced transition to a new learning technology. In 2021, this format turned out to be the only acceptable one in a situation of total isolation. The forced transition to online learning made it possible to test new learning technologies not as an addition to traditional learning, but as the main tool, capable of fully supporting the educational process.
Nevertheless, the experience of the widespread introduction of distance learning into the educational process is still limited. The existing technical, methodological, communication and psychological barriers, which currently prevent teachers and students from improving the effectiveness of distance learning, are not of a fundamental nature. In our opinion, once the identified shortcomings are eliminated, this form of education could become not only an important addition to traditional education, but could also act as an independent form of education in some disciplines.
As for the students themselves, they identified the following as priority areas for improving distance learning:
• facilitation of teacher-student and student-student communication (47%);
• provision of learning options (lecture recordings; basic, additional and reference literature; assignments for unsupervised work, etc.) (46%);
• elimination of technical interference (39%);
• adaptation of the teaching methodology for distance classes (37%);
• improvement in the qualifications and competence of teachers in the use of distance technologies (31%);
• the ability to attend lectures by prominent lecturers from other universities in the country and worldwide (28%) [2].
We now turn to the assessment of distance learning by students of Karaganda universities, considering data from a study in which the transition to distance learning was analysed. As part of the project, implemented in March-May 2021, interviews and focus groups were conducted in which more than 110 students from Karaganda universities took part. Here is an example from an interview with one of them.
Edil is 19 years old and is currently a third-year sociology student at Karaganda University. His tuition is free, as he is studying on a state educational grant. In order to attend lectures, participate in seminars and keep up with the educational program, Edil bought a laptop on credit and returned home to the village of Zhairem, Karaganda region, since living in Karaganda is expensive. He was told that the Internet had been installed in his home village. In fact, it turned out that the Internet speed does not allow him to fully engage in online activities: The connection freezes, sometimes I can't connect, and because of this I lag behind and teachers complain about me. Even when you watch a recording, the speed is still not enough [. . .] Since the beginning of the online [classes], I have already connected mobile internet several times, as [the local Internet] does not really work. It takes a lot of money.
The introduction of information technology into the educational process makes little sense if electricity is cut off in rural areas. This is exactly what happens in Edil's village, where the electricity is often cut off. How one can listen to lectures over a slow mobile connection is also a big question.
Last year I was unable to pass the entire session.Nowadays the light is not turned off so often, but before that it was off three or four times a week.Therefore, I warn teachers in advance.And when I send out my home assignments, I also worry that the light might be turned off.
Edil is worried that he has lately been considered a "low scorer". Teachers are unhappy with his knowledge, although they understand that in rural areas there are problems with access to a high-quality Internet connection: I may need to sit in Zoom classes for five hours. This program requires a good Internet connection, but it comes and goes. If the Internet is bad, then Zoom does not load; every time I wait for 10-15 minutes and then come late to classes. Loading tasks, it is generally better [to do that] after 12 at night, when the Internet works normally. Only after midnight can I download all the tasks. Edil wants to return to the old format of education, and not only because of the poor Internet connection. He also complains about health problems from having to sit in front of the computer for hours, which causes problems with his back and vision.
I have vision problems. Now, due to the fact that I sit in front of the computer much longer than before, I've noticed that my vision is deteriorating, although little time has passed. Studying is from 8 am to 5 pm, and I study almost without getting up; I don't even have time to eat. I want to study offline. When you sit in front of the teacher, you understand everything. So far I cannot say that I am fully acquiring knowledge.
The situation described is typical for students living in rural areas. Those students who have relatives in the city who can host them temporarily so that they can study properly are considered lucky. For example, Dilyara has been living in Karaganda with relatives since September. She comes from one of the small towns of the Karaganda region, located 40 kilometers from the city: This is a temporary measure . . . I won't stay with [relatives] forever . . . If there was Internet at home, I would of course stay at home and study online. I get out of the situation by living with my aunt . . . We've been promised Internet in our house in the fall. Last year the money was collected and the Internet was supposed to be installed, but it didn't happen. Now we look forward to it this year.
As a positive side of the transition to online education, students of universities in Kazakhstan noted the possibility of combining studies with work. Working in the service economy, in positions such as waiters, salespeople, consultants and managers, students have the opportunity to stay employed and progress in their studies instead of taking academic leave or spoiling their reputation as "good students" by skipping classes and constantly asking for time off in order to go to work. In addition, they retain the ability to independently pay for their education or contribute to the family budget. Here is a typical opinion of a working student: It is convenient when you need to combine work with personal affairs. For example, I work from morning to evening. During online classes I just go to the break room and do the tasks. If the classes were offline, then I would have to leave work, spend time on the road, ask my superiors for time off and explain where I am going.
The interviewed students said that over the past year they had gotten used to the distance learning format and adapted to its shortcomings: I feel good about it, since we have been sitting at home for 1.5 years already, the lectures are in Zoom, and everyone is already used to it. We have already begun to understand how everything needs to be done.
When assessing the quality of teaching in distance learning, students said that most of the teachers quickly mastered the new software, although there were those who could not adapt to the online format: The teachers got ready very quickly and got used to it . . . From the first classes we tried various platforms where it was possible to work in pairs. According to the results of the research carried out in Karaganda, it can be stated that students from Kazakhstan assess distance education ambiguously. On the one hand, students complain about the instability and high cost of the Internet; on the other hand, they like the fact that it is possible to save money on travel and to combine study with work. Their comments on the future of distance education are also ambivalent: some believe that it is necessary to preserve the traditional format of education, others that it should be blended learning, and still others that distance learning should prevail.
Thus, analysing the data from the sociological monitoring of university students in the cities of Ufa (Russian Federation) and Karaganda (Republic of Kazakhstan), we can conclude that distance learning has great potential, the realization of which would significantly increase the efficiency of the educational process in general, especially in the system of higher education. However, the question of a complete transition to distance education is premature, if not inappropriate. The potential of the traditional form of education will remain in demand due to the unique opportunities that distance education cannot provide. The use of proven teaching technologies will guarantee the quality and efficiency of the educational process as a whole, whether the classes are digital or traditional.
Table 1. "Advantages" of distance learning from the perspective of students, in %
Table 2. "Disadvantages" of distance learning from the perspective of students, in % | 2022-04-20T15:18:15.887Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "5d460f6b49d2231919914fba2e0f9aa15c826612",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2022/07/shsconf_aeshe2021_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5c23f71594efea59a77188af76bcfe2ff698a908",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"extfieldsofstudy": []
} |
207863387 | pes2o/s2orc | v3-fos-license | The Vanishing&Appearing Sources during a Century of Observations project: I. USNO objects missing in modern sky surveys and follow-up observations of a"missing star"
In this paper we report the current status of a new research program. The primary goal of the "Vanishing & Appearing Sources during a Century of Observations" (VASCO) project is to search for vanishing and appearing sources using existing survey data, in order to find examples of exceptional astrophysical transients. The implications of finding such objects extend from traditional astrophysics fields to the more exotic searches for evidence of technologically advanced civilizations. In this first paper we present new, deeper observations of the tentative candidate discovered by Villarroel et al. (2016). We then perform the first searches for vanishing objects throughout the sky by comparing 600 million objects from the US Naval Observatory Catalogue (USNO) B1.0, down to a limiting magnitude of $\sim 20 - 21$, with the recent Pan-STARRS Data Release-1 (DR1), with a limiting magnitude of $\sim$ 23.4. We find about 150,000 preliminary candidates that do not have any Pan-STARRS counterpart within a 30 arcsec radius. We show that these objects are redder and have larger proper motions than typical USNO objects. We visually examine the images for a subset of about 24,000 candidates, superseding the 2016 study with a sample ten times larger. We find about $\sim$ 100 point sources visible in only one epoch in the red band of the USNO, which may be of interest in searches for strong M dwarf flares, high-redshift supernovae or other categories of unidentified red transients.
INTRODUCTION
Many of the hottest topics in current astronomical research concern the physics of extreme transient phenomena, such as gravitational wave events, gamma-ray bursts, Fast Radio Bursts (FRBs) or Active Galactic Nuclei (AGN) outbursts. Although we are gaining a better understanding of the physical processes governing them, our understanding of transient phenomena in general is inevitably limited by the a priori assumptions that go into the data collection when we design our observations. With the advent of the Virtual Observatory in the early 2000s, astronomers suggested that very large surveys, together with state-of-the-art developments in information technology, could efficiently be used to probe rare or unusual astrophysical phenomena by expanding the parameter space beyond our current knowledge (Djorgovski 2000; Djorgovski et al. 2001). An example of such a rare class of object that would not have been discovered unless specifically looked for is Hippke's star. This emerged from a search for artificially modified pulsations in Cepheid variables and led to the discovery of rare objects with two regimes of both long and short duration double pulsation periods (Hippke et al. 2015).
Another example of objects that may be missed in transient surveys, unless specifically looked for, are the rare failed supernovae (Kochanek et al. 2008), which occur when a star collapses almost directly to form a black hole. Recently, the possible detection of a failed supernova in a nearby galaxy has been reported (Adams et al. 2017a,b). The more exotic the phenomenon, the more likely we are to miss it in the observational data, due to our preconceptions and the duration and frequency of the sampling.
In this paper we describe the "Vanishing & Appearing Sources during a Century of Observations" (VASCO) project, a multitask effort aimed at finding some of the most unusual variable phenomena and other astrophysical anomalies in existing sky surveys. We also aim to develop a citizen-science branch of VASCO; indeed, the basic philosophy behind the project was first described for a wider audience by Mattsson & Villarroel (2017).
VASCO is primarily centered around searches for objects that have vanished from the sky, beyond the Earth's local environment. Unless a star directly collapses into a black hole, there is no known physical process by which it could physically vanish. Any such examples would therefore be interesting candidates in searches for new exotic phenomena, or even signs of technologically advanced civilisations (Villarroel et al. 2016). Vanishing stellar events are currently missed, and hence go undetected, in most ongoing all-sky surveys. Villarroel et al. (2016) found only one such tentative candidate after a cross-match between 10 million USNO sources and the SDSS. Even if we discover a star that appears to vanish, it is an observational challenge to determine whether the object really vanished or just faded below the detection limit.
The VASCO project aims to find both vanishing and appearing sources, as well as objects that show extreme variability on extended time scales (many decades), by comparing the nearly century-old (∼ 70 years) sky scans with modern-day astronomical surveys. Compared to recent transient facilities such as the Zwicky Transient Facility (ZTF), which commenced operations in 2018, we are probing a significantly longer time window (about 70 years) by investigating events that occurred between the epoch of the US Naval Observatory Catalogue (Monet et al. 2003) and the recent Pan-STARRS survey, which has multiple detections for each astronomical source (Kaiser et al. 2002). Prior efforts to probe these large timescales have been led by the "Digital Access to a Sky Century @ Harvard" (DASCH) project, which has digitized more than 450,000 plates with full-sky coverage. The plates used were taken between 1890 and 1990 and had a limiting magnitude of B ∼ 14 (or V ∼ 15). Among modern CCD surveys, the Catalina Real-Time Transient Survey (CRTS) has the longest time span (∼ 14 years), with a total sky coverage of about 30,000 square degrees and about 500 million light curves (Mahabal et al. 2011; Djorgovski et al. 2012). The data, which are public, extend to V ∼ 19 − 21 mag per exposure and are based on CCD photometry taken at a large number of epochs. The CRTS has so far discovered about ∼ 17,000 optical transients, among them many superluminous and peculiar supernovae, about 1500 cataclysmic variables, and about 4000 variable AGN.
Using a relatively large time window of ∼ 70 years, in combination with a large sample size, increases the probability of finding extremely rare events. This is still a minute duration from a cosmological perspective, but it nevertheless sets an upper limit on the incidence of vanishing- or appearing-star events. In addition, a number of recently discovered astronomical transients occur over significantly longer time scales than are typical for common variable stars, which vary on periods from weeks to a few years. For example, hypervariable AGN (Lawrence et al. 2016; Kankare et al. 2017) were discovered by comparing two astronomical surveys separated by a ten-year time gap. More than 95 percent of extragalactic objects exhibiting this long-term variability show the presence of an AGN (Drake et al. 2019). Hypervariable AGN exhibit still poorly understood long-term variability that could have various causes, e.g. microlensing events, superluminous supernovae in the accretion disk (Graham et al. 2017) or changes in the Eddington ratio of the AGN. These hypervariable AGN have been extensively studied with the CRTS.
The DASCH project has reported other interesting findings while probing these timescales. For example, it revealed long-term dimming of K giants (Tang et al. 2010), and resulted in the discovery of an unusual nova with an outburst (or flare) in 1942 that was followed by a 10-year decline. Peculiar transients have also been found in the CRTS, e.g. the very long-lasting Type IIn SN 2008iy, which took over 400 days to reach its peak brightness.
Our limiting magnitude is much deeper (Pan-STARRS: r ∼ 23.4) than that of DASCH (V ∼ 15), and we focus specifically on the most extreme events that appeared above, or disappeared below, the detection limit, in searches for the most extreme astronomical events and objects. Our timespan is also significantly longer than that of the CRTS survey. One may expect to find R Coronae Borealis (R CrB) stars. These are carbon-rich supergiant stars that can dim by up to 9 magnitudes at irregular time intervals, with the fading happening on timescales ranging from a few months to years. These eruptive objects have a poorly understood origin; the most prominent hypotheses are that they form from mergers of two white dwarfs, or result from He flashes in the central stars of planetary nebulae (Clayton 2012). Today we know of ∼ 150 R CrB stars in our Galaxy (Tisserand et al. 2018) and expect about 5000 to exist.
Highly variable objects such as eclipsing binaries, Cepheids, RR Lyrae stars, R Coronae Borealis stars, dwarf novae and highly variable AGN are expected to be detected by VASCO as their luminosity falls below, or rises above, the Pan-STARRS detection limit of r ∼ 23.4. As the limiting magnitude of USNO is around ∼ 20 − 21, this corresponds to a change of at least 2 magnitudes over the time period of 70 years. Mira variables may vary by up to 10 magnitudes on time scales of a few years. Objects similar to these variables may eventually be rediscovered during follow-up observations with larger telescopes, or by patiently waiting for the object to reappear a few months or many years later.
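To make the amplitude requirement concrete, a worked illustration (our addition, using the standard Pogson magnitude definition) follows from

    $\Delta m = -2.5 \, \log_{10}\!\left( \frac{f_2}{f_1} \right) \quad \Longrightarrow \quad \frac{f_1}{f_2} = 10^{\,\Delta m / 2.5}$

so a change of ∆m = 2 magnitudes corresponds to a flux ratio of 10^0.8 ≈ 6.3, while fading from the USNO limit (r ∼ 20 − 21) to below the Pan-STARRS limit (r ∼ 23.4) implies a flux decrease by roughly a factor of 9 to 23.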
Moreover, VASCO may also discover objects that are visible in only one epoch and then disappear in later surveys. Nearby stars with high proper motion will fall into this category. Outbursts in active galactic nuclei caused by relativistic jet activity, or by major increases in accretion, will also give short-term signatures in the optical that fade away in a few months or years, e.g. Prieto (1997) or Mack et al. (2009). Transients such as supernovae and tidal disruption events can also be detected this way. But natural astrophysical sources are not the only possible sources to discover. Modern Searches for Extraterrestrial Intelligence (SETI) programs are nowadays preparing and executing searches for interstellar optical laser communication, especially in the red and infrared. Therefore, it is of great interest to identify any transient that is visible only once, provided that we can later exclude those events that result from plate defects, cosmic rays and other detection flaws. Figure 1 shows what kind of objects we may collect in our first candidate selection. In order to pinpoint the nature of each candidate, one must reconstruct its light curve, which can be done with the help of old and modern archives, and by making deeper observations.
In this paper, we start by examining the tentative candidate reported by Villarroel et al. (2016). We present the results of in-depth archival searches, as well as new observations of this object. After examining the candidate, we cross-match the USNO and Pan-STARRS surveys. The current USNO sample is increased by a factor of 60 in comparison to the sample used by Villarroel et al. (2016), as we use about 60 percent of the USNO catalogue for the cross-matching. In contrast to the previous work, we also include objects with non-zero proper motion. In Section 3 we discuss the properties of the "Mismatch Sample". We conduct a preliminary analysis of the images in the "SDSS subsample", which includes about 15 percent of the "Mismatch Sample". The preliminary list of candidates that resulted from visual examination has been studied at seven epochs (five POSS surveys, SDSS and Pan-STARRS). While this endeavor may include many objects similar to those that time-dependent surveys like the Catalina Real-time Transient Survey (CRTS) and the Zwicky Transient Facility (ZTF) already detect, we particularly emphasize single-epoch transients with large amplitudes (∆m > 5 magnitudes) and objects that were observed in more than one image prior to "disappearance", in order to collect the most exotic and extreme phenomena. Finally, we detail the general design and methodology of the VASCO project, as it is currently planned to be carried out over the coming years, including a citizen science project.
In a separate paper (Pelckmans et al. in prep.) we propose a machine-learning based tool aimed at facilitating the planned citizen science project.
THE "VANISHING" STAR IN VILLARROEL+ 2016
Villarroel et al. (2016) identified a candidate, but it was not robust enough to make a convincing case for a vanishing star. In the USNO catalogue this object was listed as having two detections: one was clearly visible and point-like in the POSS-1 red band image, while the other was less clearly visible in the POSS-2 red band. We decided to reexamine it, both by reassessing the old observations and by following up with new imaging obtained with larger telescopes.
Observations with CAMELOT at IAC80
We observed with the IAC80 telescope, which is part of the Teide Observatory and belongs to the Instituto de Astrofisica de Canarias (IAC), located on the island of Tenerife (Spain). We used the CAMELOT ("CAmara MEjorada LIgera del Observatorio del Teide") instrument in service mode and obtained 9 exposures of 30 minutes each in the red filter. The pixel size is 0.304" and the limiting magnitude about ∼ 24.7 in the Sloan r-band.
Observations with ALFOSC at NOT
We made even deeper observations (down to r ∼ 25.5 − 26) with the Alhambra Faint Object Spectrograph and Camera (ALFOSC) instrument at the Nordic Optical Telescope (NOT, La Palma, Spain), in service mode and as fast-track observations. The goal was to carry out observations deep enough to detect a point source at the 25th magnitude level with a signal-to-noise ratio of at least 9 or 10, using deep Gunn r'-band imaging. Assuming an airmass of 1.5, seeing of 1 arcsec and a grey night, we estimated that about four hours of observation time were needed. Six exposures of 900 seconds were taken. For the resulting images, the pixel size was 0.214" and the limiting magnitude about ∼ 25.5 − 26.0 in the r-filter.

We first examine the old POSS images. As can be seen from Table 1, the minimum requirement of two detections (on which USNO is based) is not clearly met for this particular object. Only one strong confirmation (the POSS-I E plate) exists. Unlike an artifact, the object appears point-source-like in the POSS-I E plate (see Figure 2). One possibility is that this object is a star with significant proper motion that moved entirely out of the image.
We compare the POSS-1 E image with the new images taken with the NOT (see Figure 3).

[Figure 1 caption: Once instrumental flaws and errors are removed, we expect different types of objects to be included in the VASCO "mismatch" sample. A particular focus is given to USNO objects that either have several detections before vanishing, or that are brighter than 18.4 magnitudes in USNO and thus have dimmed by at least 5 magnitudes. Rare, long-term variable objects may seem to appear or disappear in the USNO and Pan-STARRS catalogues as they rise above or fall below the detection limit. Among the daily but extreme astrophysical phenomena, we may detect some fast transients seen at only one epoch. Fast transients seen only in the red image could be the result of strongly redshifted transients, less well-known physical phenomena, or interstellar communication with red, monochromatic lasers. The VASCO time baseline, which probes variability over several decades, provides opportunities to study multiple phenomena.]

In the NOT imagery we find two objects very close to the original USNO location. One of the objects is located 2.4 arcsec southwest of the USNO object, and the second 1.4 arcsec northwest of it. However, the resolution of the POSS-I E band is about 1.7 arcsec per pixel, and the displacement of the two reported objects is therefore within the error, in particular for the closer object only 1.4 arcsec away.
The colors may give a clue. The original USNO object was seen only in the red band. While the nearby WISE counterpart is seen both in the blue band with the Magellan telescope and in the red band with the NOT, the two NOT objects can be seen only in the red. This may support the hypothesis that the object from the 1950s and one of the objects seen with the NOT are the same. But if so, the brighter (southwest) object has dropped about 4.2 to 4.3 magnitudes in the r-band and also moved slightly.
One may wonder what the probability is of observing a new, unrelated object with the NOT within 2.5 arcseconds of the stated USNO position when going 4.2 magnitudes deeper. However, probability estimates of this sort are of little help when we deliberately search for outliers in big datasets covering billions of objects.
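For completeness, such a chance-alignment estimate is straightforward to write down. The sketch below is our own illustration with an assumed, purely illustrative source density; the actual density at r ∼ 25.5 depends on Galactic latitude and is not given in the text:

    import numpy as np

    def chance_alignment_prob(density_per_sq_arcsec, radius_arcsec):
        """Probability of at least one unrelated source falling within
        `radius_arcsec` of a given position, assuming Poisson-distributed
        sources of the given surface density."""
        expected = density_per_sq_arcsec * np.pi * radius_arcsec**2
        return 1.0 - np.exp(-expected)

    # Illustrative only: an assumed density of 0.002 sources per square
    # arcsec down to r ~ 25.5 gives a ~4% chance within 2.5 arcsec.
    print(chance_alignment_prob(0.002, 2.5))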
Could it have moved?
One possibility is that our target is a star with a fairly high proper motion (despite being catalogued in USNO as having no proper motion), and that it has moved substantially from its original position. If so, by comparing the two red POSS plates within a reasonable angular distance, we should be able to find the missing object as an appearing object in the later POSS image from 1993.
Assuming a maximum proper motion of 6 arcseconds per year (a slightly larger offset than the positional error of 5.5 arcsec), we know that between 1950 and 1993 the star cannot have moved more than 4.3 arcminutes in any direction. We therefore extract red-filter images from POSS-1 (1950) and POSS-2 (1993) with a field of view of 9 x 9 arcminutes and inspect them visually by "blinking" them. The few objects that "appear" in the later epoch turn out to also exist in the SDSS images, which means they simply were not resolved in the earlier epoch. No other "appearing" objects could be seen in the later epoch, which means we can quite safely reject the hypothesis of a fast-moving star. Solar system objects are typically bluer (as they shine by reflected sunlight), although exceptions of course exist. From the DSS Plate Finder we see that the POSS-1 E red and POSS-1 O blue images were taken about half an hour apart, but nothing is visible in the blue image at the position of the star. Given the 45-minute exposure time of the POSS-1 E red image, if our object were an asteroid that quickly moved out of the field, it would have left a stripe (and not be point-like).

[Table 1 caption: Summary of the observations. Possible detections, detection limits and dates of observations are reported. We use the DSS Plate Finder to retrieve the images used in the USNO database. For duplicate images we report the longest exposure time. As can be seen from the images, the object in the POSS-I E red image is point-source-like. For the same object we find that the positional error is 5.5 arcsec, which means the WISE counterpart is a possible counterpart. However, in the significantly deeper NOT images we find both the WISE counterpart and two possible candidates very close to the position of the original USNO object. Limiting magnitudes for POSS are taken from Djorgovski et al. (1998).]
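The search-radius arithmetic used above, and again in Section 3.2, is simple enough to collect in a few lines. The following Python helper is our own illustration (not part of the VASCO pipeline) and reproduces the numbers quoted in the text:

    def max_displacement_arcmin(pm_arcsec_per_yr, baseline_yr):
        """Maximum angular displacement, in arcminutes, of a source with
        a given total proper motion over a given time baseline."""
        return pm_arcsec_per_yr * baseline_yr / 60.0

    # 6 arcsec/yr between 1950 and 1993 gives 4.3 arcmin, so a 9 x 9
    # arcmin cutout safely contains any plausible displacement.
    print(max_displacement_arcmin(6.0, 43))   # 4.3

    # Conversely, the 30 arcsec cross-match radius of Section 3.2 over a
    # ~70 yr baseline corresponds to a proper-motion threshold of
    # ~0.43 arcsec/yr.
    print(30.0 / 70.0)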
Was it possibly an image defect?
The final hypothesis that could rule out the idea of a transient or variable event in the 1950s plate is the simplest explanation of all: a plate defect in the old photographic plates from POSS-1. While USNO-B1.0 should be cleaned of a fair number of these artifacts, and a separate list by Barron et al. (2008) could have included our target but did not, our target has survived thanks to the two detections listed in USNO.
We reanalyze the POSS images based on high-resolution data from the STScI Digitized Sky Survey (https://archive.stsci.edu/cgi-bin/dss_plate_finder). We note several things: only the detection from the POSS-1 E red plate taken on 16 March 1950 is a secure detection. The second detection (which we believe is based on the POSS-II F image from 22 March 1993) is slightly offset and possibly not the same object (even if listed as the same; here the low resolution may have played a role). Of all the other images available on that server covering that particular sky region, namely Quick-V Northern (1982), POSS-I O (blue, 1950), POSS-II Blue (1986), POSS-II N (1993) and POSS-II N (1996), none convincingly shows the object. Some hints of an object may be seen at the given position in the Quick-V Northern image from 1982, but not in a way that would allow us to confirm the detection quantitatively, as the signal-to-noise ratio is very low.
While plate defects in USNO are very seldom star-like (Madsen & Gaensler 2013), some of the star-like sources could in principle be photographic plate defects.
These defects can be created when a small dust particle sticks to the plate during the exposure, or when microspots form after years of storage. Greiner et al. (1990) proposed examining the original plates under a microscope in reflected light to help sort out which events are real astronomical events and which are pure plate defects. When dealing with old photographic plate material, the best way to be sure is to investigate the photographic plates themselves under a microscope.
Unfortunately, we do not have access to the original plates. However, by comparing the point spread function (PSF) of the object to the PSF of typical stars in the same field, one can assess whether the object is likely to be a plate flaw or a real star (see Section 4.2). If an object has a considerably smaller PSF than a real star as measured on the given photographic plate, it may be discarded as a plate flaw. Our object looks like many astronomical point sources and has a PSF comparable to that of the real stars on the plate, which suggests that it is not a plate defect.
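The PSF comparison can be sketched numerically. The snippet below is a minimal illustration of the idea using astropy; the function names and the 0.5 width threshold are our assumptions, not the procedure actually applied in Section 4.2:

    import numpy as np
    from astropy.modeling import models, fitting

    def fitted_sigma(cutout):
        """Fit a 2D Gaussian to a small image cutout and return the mean
        of the fitted x/y widths (in pixels)."""
        y, x = np.mgrid[:cutout.shape[0], :cutout.shape[1]]
        init = models.Gaussian2D(amplitude=cutout.max(),
                                 x_mean=cutout.shape[1] / 2,
                                 y_mean=cutout.shape[0] / 2,
                                 x_stddev=2.0, y_stddev=2.0)
        fit = fitting.LevMarLSQFitter()(init, x, y, cutout)
        return 0.5 * (fit.x_stddev.value + fit.y_stddev.value)

    def looks_like_plate_defect(candidate, field_stars, threshold=0.5):
        """Flag the candidate if its PSF width is much narrower than the
        median width of field stars measured on the same plate."""
        star_sigma = np.median([fitted_sigma(s) for s in field_stars])
        return fitted_sigma(candidate) < threshold * star_sigma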
The USNO-B1.0 catalogue (Monet et al. 2003) is the best old optical sky survey we can use, as it goes deep enough (r ∼ 20) and contains 1 billion astronomical objects with all-sky coverage. This allows us to find astronomical transients that occurred before the birth of the all-sky transient surveys. Each object is supposedly detected at least twice, at two widely separated epochs, in the Palomar Sky Survey (POSS), whose plates were taken from 1949 to 1999. The data were obtained in one blue band and one red band and, for some objects, also in the infrared. The Pan-STARRS catalogue has about 2 to 3 billion objects and is at present the largest digital sky survey, with observations started in 2010. It covers the entire sky down to declinations dec ∼ −30 deg. Information about the Pan-STARRS data products is described in a series of articles (Magnier et al. 2016a,b,c; Waters et al. 2016; Flewelling et al. 2016). Our PS1 dataset is an offline version kindly provided by the Pan-STARRS collaboration.
One may wonder whether it would not be better to search for vanishing or appearing objects directly, using only internally consistent datasets like Gaia or Pan-STARRS, containing about 2 to 3 billion objects each, where each object has photometry taken multiple times during 5 years of observations. The CRTS has homogeneous CCD data and time baselines up to 14 years. However, when one compares the digitized sky from the USNO plates with the sky from Pan-STARRS, the significantly longer time span (∼ 70 years) changes the effective volume of the dataset over which any event could have been observed. Extremely rare events are much more likely to be found in surveys that combine a long time baseline with deep photometry. The DASCH survey may have 100 years of photometry, but it has a limiting magnitude of around V ∼ 15. The longer time span allows us to discover extreme variables with characteristic timescales of several decades, longer than the typical five years of iPTF. An example of a vanishing-star event that is not expected to happen in the Milky Way more often than once every few hundred years is the hypothetical failed-supernova event (see Appendix). The VASCO time baseline and photometric depth make it possible to discover such events. Of course, we also expect to detect many objects that vary on shorter time scales.
3.2. Cross-matching the USNO and Pan-STARRS catalogues

The goal of the cross-matching algorithm used in this paper is to produce a list of USNO objects that do not have a Pan-STARRS counterpart within a certain distance threshold (e.g. 30 arcseconds). In Villarroel et al. (2016) the starting samples were, on average, about ∼ 10 million USNO objects. Using the full USNO-B1.0 and Pan-STARRS DR1 catalogues, however, is more of a practical challenge, as our databases amount to about 1 TB in size (roughly 300 GB and 700 GB, respectively). This creates an efficiency problem in the cross-matching process, which could take unacceptably long if not done smartly.
We use a 3 TB cloud environment provided by the Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX), which is part of the Swedish National Infrastructure for Computing (SNIC). The cross-matching is done in the SQLite3 environment and is parallelized by breaking the USNO and Pan-STARRS databases down into many smaller ones with the help of smart indexing methods. This enables the cross-matching to be done efficiently on smaller subsets rather than on the whole databases. All the technical details of the cross-matching are described by Soodla (2019).
The cross-matching procedure in VASCO differs from a traditional cross-match between two catalogues, as we are searching for missing objects rather than corresponding objects.
In a traditional cross-match, one takes an object from catalogue A and tries to identify the same object in catalogue B using the coordinates (and additional properties like fluxes, surface densities of sources, etc.). Due to proper motion, variability and many other factors, it can be quite challenging to verify whether the object within a certain radius in catalogue B is the same object. A typical cross-match radius in traditional projects is 3 to 5 arcseconds. If one instead used a large cross-matching radius (e.g. 30 arcsec), there would often be several possible matches, which means a number of false positives among the cross-matches and spurious objects in the resulting catalogue.
In our particular case, the cross-matching is not a traditional cross-match. When we take an object from catalogue A and look for a "vanished" object in catalogue B, we only care to establish that no object at all resides at the given position in catalogue B. Using a small "cross-match" radius like 5 arcseconds leads to a large number of mismatches, as various astrometric issues enter, including the proper motion of the objects. By extending the "cross-match" radius to 30 arcseconds, however, one implicitly takes care of proper-motion-related issues, except possibly for nearby red dwarfs or white dwarfs. This ensures that USNO objects with proper motions of less than 0.4 arcsec/yr over a 70 yr baseline are directly excluded from the resulting "mismatch" sample. The downside of this method is that one misses potential mismatches, as false negatives enter the picture. With a large cross-match radius our mismatches are thus very likely to be real mismatches, but we underestimate the number of mismatches (and hence miss candidates). For objects with proper motions larger than ∼ 0.4 arcsec/yr, the displacement in coordinates is visible and easy to identify by blinking images. See Section 3.2.1.
From the USNO and Pan-STARRS J2000 coordinates we determine whether a counterpart exists within a certain angular distance. USNO objects without a counterpart are listed as "mismatches", together with their closest Pan-STARRS neighbor. Only the positional proximity is used. In this early study we covered only 60 percent of the sky (about 600 million USNO objects) due to limitations in computing time, and some regions are left out of the cross-match, as seen in Figure 4.
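For readers who want to reproduce the basic idea on smaller samples, the selection can be expressed compactly with astropy's coordinate matching. This is a minimal sketch under our own assumptions; the actual VASCO run used SQLite3 with custom indexing, as described above:

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    def find_mismatches(usno_ra, usno_dec, ps1_ra, ps1_dec,
                        radius_arcsec=30.0):
        """Return indices of USNO objects with no Pan-STARRS source within
        `radius_arcsec`. Coordinates are J2000 degrees (NumPy arrays)."""
        usno = SkyCoord(ra=usno_ra * u.deg, dec=usno_dec * u.deg)
        ps1 = SkyCoord(ra=ps1_ra * u.deg, dec=ps1_dec * u.deg)
        # For each USNO object, find the single nearest Pan-STARRS neighbor.
        idx, d2d, _ = usno.match_to_catalog_sky(ps1)
        # A "mismatch" has no neighbor at all within the chosen radius.
        return np.where(d2d > radius_arcsec * u.arcsec)[0], idx, d2d

On full-catalogue scales this must be chunked, since match_to_catalog_sky builds a search tree over the whole comparison catalogue in memory.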
Using a 30 arcsecond threshold (a limit set by the available computing time in the cloud environment), we find 426,975 mismatches (corresponding to a mismatch rate of 0.074 percent). Correcting for differences in sky coverage between USNO and Pan-STARRS by removing all objects with declinations < −30 degrees, 151,193 of the mismatches can be considered for further investigation. The mismatch rate is within the range of various data processing artifacts existing in sky surveys, and among these artifacts we must search for real candidates.
3.2.1. Treatment of high proper motion objects
For the 151,193 mismatches we first must ask: how many of these are simply the result of a star moving away over the last 70 years? We approach the problem by estimating the number of objects that would escape our 30 arcsec cross-match radius (see Section 3.2). A 30 arcsec cross-match radius over a 70 year baseline translates to proper motions larger than 0.4 arcsec per year. We therefore use the Gaia Data Release 2 (DR2) catalogue (Gaia Collaboration et al. 2016) to obtain all catalogue objects with µ tot > 0.4 arcsec per year. The catalogue is complete down to g ∼ 19. We plot a histogram of their magnitudes in Figure 5. As we later find that 95 percent of the objects in our mismatch sample are fainter than mag 16 (see Figure 6), we estimate the number of such objects with g > 16 in Gaia DR2. Between 16 < g < 19 there are 2,482 objects, and we extrapolate that the last bin, 19 < g < 20, contains ∼500 objects, so the number of objects in Gaia DR2 with g > 16 and proper motions larger than 0.4 arcsec per year is roughly ∼3,000. Correcting for the sky coverage used in our cross-match decreases this number by a factor of two, meaning we may expect around ∼1,500 high proper motion objects to contaminate our 150,000 mismatches. These objects can, however, be spotted when comparing the images and their surrounding fields.
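For transparency, the arithmetic behind this contamination estimate (using the counts and coverage factor quoted above) is:

```python
baseline_yr = 70
radius_arcsec = 30
pm_threshold = radius_arcsec / baseline_yr   # ~0.43 arcsec/yr

n_16_19 = 2482      # Gaia DR2 objects with 16 < g < 19, mu > 0.4"/yr
n_19_20 = 500       # extrapolated count in the 19 < g < 20 bin
n_total = n_16_19 + n_19_20                  # ~3,000 objects

coverage_factor = 0.5                        # factor-of-two sky-coverage cut
n_contaminants = n_total * coverage_factor   # ~1,500 expected interlopers
print(pm_threshold, n_total, n_contaminants)
```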
We filter the mismatches by one additional proper motion limit, set by the image field we are prepared to investigate visually later on. Any USNO star that moves away covers a limited angular distance per unit time, and will be seen as an "appearing" case at a different location in the corresponding Pan-STARRS survey, likely with the same colors and magnitude (unless it is also variable).
The Gaia survey has shown that only 9 stars are known with proper motions larger than 5 arcsec per year. Over 70 years this corresponds to a movement of about 7 arc minutes between the POSS-1 and Pan-STARRS images. For a radius of 7 arc minutes it would therefore be wise to use image fields of 15 arc minutes when we compare the images, if we want to keep all objects with proper motions up to 5 arcsec per year. We note that USNO's proper motions carry much larger uncertainties than those of Gaia; removing all objects with USNO proper motions larger than 5 arcsec per year (about 130 objects in the mismatch sample) would leave us 151,063 objects.
For practical purposes, we use 5 x 5 arc minute images (a search radius of 2.5 arcmin). We therefore restrict the listed USNO proper motions to less than 4.3 arcseconds per year, leaving 151,038 objects in our mismatch sample.
3.3. Visually inspecting a subset with the SDSS

One way to investigate the 151,038 mismatches is to look at those also missing in the Sloan Digital Sky Survey (SDSS) Data Release 12 (DR12). The SDSS only covers the Northern Hemisphere, so approximately half of the objects have not been observed in both surveys. Also, the SDSS started at an earlier epoch than Pan-STARRS, which reduces the time window for a potential disappearance by about 10 years; consequently, vanishing events that happened in the last decade may remain undetected.
In order to cross-match with the SDSS DR12, we use the CasJobs interface 5, upload our coordinates to the server and use the Footprint function to check whether a coordinate lies within the SDSS scanned field. We find that 64,475 of the 151,038 objects fall within the scanned field of the SDSS. These objects we re-upload to CasJobs and perform a closest-neighbor search with a radius of 0.08 arcmin (5 arcsec). About 23,667 objects have no detectable counterpart in this search zone 6. This means that roughly one third of our candidates remain when using 5 arcsec as a cross-match radius.
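The geometry behind the 5 arcsec criterion is an ordinary great-circle separation; a self-contained sketch of the test applied to one candidate (the coordinates here are made up):

```python
import numpy as np

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcsec between two J2000 positions
    given in degrees, computed with the haversine formula."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    a = (np.sin((dec2 - dec1) / 2) ** 2
         + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2) ** 2)
    return np.degrees(2 * np.arcsin(np.sqrt(a))) * 3600.0

# A candidate survives this stage only if no SDSS source lies within 5".
has_counterpart = angular_sep_arcsec(210.1, 5.2, 210.10001, 5.2) < 5.0
```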
Here, we carry out a similar analysis as in Section 3.2.1, taking into account that the SDSS only covers about one-fourth of the sky, and find that the estimated number of expected mismatches caused by proper motions large enough to exceed a 5 arcsec cross-match radius (µ tot > 0.080 arcsec per year) is large, about 125,000 objects. However, most of these objects will not be part of our visual subset, which previously used a 30 arcsec cross-match radius with Pan-STARRS.
At this point it would have been useful to employ image differencing software to identify obvious differences in pairs of very similar images. However, as we compare images made with widely different telescopes, instrumentation and methods (photographic vs. CCD), we do not gain much advantage from this step. Moreover, the hard drive space required to download the many FITS files is prohibitive: one thousand images occupy ∼1 TB. Therefore, we inspected each of the 23,667 candidates individually by visually comparing the images found in DSS1 7, the STScI archive 8 and the SDSS Explorer 9. First, we used the SDSS Explorer list-view to remove all objects with an obvious flaw such as a bright star or dead stripe in the SDSS image (see Villarroel et al. (2016) for details). This left 6359 objects where no obvious flaw was causing the mismatch. In the next stage we individually examined the 6359 images in the DSS1 and kept only those with an object in the center of the image, in order to remove false positives among the original USNO objects. This left 1691 candidates with something clearly visible in the center of the DSS1 image. The SDSS subset effectively covers about 90 million stars from the USNO starting sample.
One possibility is that the mismatches we have found represent objects with some typical problems. For instance, our objects could have larger average proper motion than reported in USNO, or fewer detections associated with them than the "average" USNO object, which would lead to a number of false positives. We therefore investigate some basic properties of the Mismatch Sample and compare them to 49,999 typical USNO objects, randomly selected from the entire USNO catalogue. Figure 6 shows histograms of the apparent magnitudes (blue and red band) for the Mismatch Sample and the ∼50,000 randomly selected USNO objects. The mean value in the blue band is b ∼ 18.85 ± 0.01 (Mismatch Sample) vs. b ∼ 19.01 ± 0.01 (USNO) in the first epoch (POSS-1: years 1949 to 1966). For the red band the averages are r ∼ 17.86 ± 0.004 (Mismatch Sample) and r ∼ 17.72 ± 0.01 (USNO). A two-sample Kolmogorov-Smirnov test reveals a small but statistically significant difference between the samples, with the mismatch objects slightly fainter in the red but brighter in the blue band.
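The two-sample Kolmogorov-Smirnov comparisons used here and below are standard; a minimal sketch with simulated magnitude samples (the means and sample sizes roughly mirror those quoted above):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
b_mismatch = rng.normal(18.85, 1.0, 150_000)  # simulated blue magnitudes
b_usno     = rng.normal(19.01, 1.0, 50_000)

stat, p = ks_2samp(b_mismatch, b_usno)
# With samples this large, even a ~0.16 mag shift in the mean yields
# p far below 0.05: statistically significant, but not necessarily big.
print(f"D = {stat:.4f}, p = {p:.3g}")
```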
As a next step, we consider the colors (see Figure 7). The filters used, O (POSS-I blue), E (POSS-I red), J (POSS-II blue) and F (POSS-II red), have effective wavelengths of 4100 Å, 6500 Å, 4700 Å and 6600 Å. The samples could possibly come from two different color distributions; indeed, a two-sample Kolmogorov-Smirnov test shows a statistically significant difference at the nominal level α < 0.05. The average colors in the first epoch are b − r ∼ 1.49 ± 0.003 (mismatch) versus b − r ∼ 0.94 ± 0.01 (USNO sample). In the second epoch, the corresponding mean values are b − r ∼ 1.37 ± 0.003 (mismatch) and b − r ∼ 0.99 ± 0.01 (USNO sample). As the colors at the faint end may be uncertain, we also considered the corresponding color indices restricted to magnitudes brighter than 18 mag. The average color differences in the first epoch are more pronounced for mags < 18, with b − r ∼ 1.22 ± 0.005 (mismatch) versus b − r ∼ 0.40 ± 0.01 (USNO sample).
We also compare variability separately in the two bands, showing the difference between the brightness in the first and second epochs in Figure 8. In the blue band, the average change in magnitude is 0.19 ± 0.004 mag (mismatch) vs. 0.27 ± 0.007 mag (USNO). While the difference is significant in a Kolmogorov-Smirnov test, it is not particularly large and lies well within the instrumental and calibration errors of USNO (Madsen & Gaensler 2013). In the red band the differences are 0.03 ± 0.003 mag (mismatch) and −0.17 ± 0.005 mag (USNO); this difference is also statistically significant in a two-sample Kolmogorov-Smirnov test. However, these differences are very likely the result of photometric calibration issues in the USNO survey, where the standard deviation of magnitudes in any band is about 0.3 mag but systematic errors can reach several magnitudes (Monet et al. 2003; Madsen & Gaensler 2013). Such errors can, for instance, occur when measuring the magnitudes of objects in the neighborhood of very bright stars.
We have also considered the mean total proper motions (absolute values) in our samples, computed as the square root of the sum of squares of µ ra and µ dec as listed in USNO. Figure 10 shows the proper motion distributions. The mean µ total is 76.7 ± 0.55 mas yr⁻¹ (mismatch sample) vs. 33.0 ± 0.51 mas yr⁻¹ (USNO sample): µ total differs significantly, by a factor of ∼2, with the mismatch objects having higher proper motions than typical USNO objects.
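The total proper motion is simply the quadrature sum of the listed components; a short sketch with toy values (this assumes the catalogued µ ra already includes the cos δ factor):

```python
import numpy as np

mu_ra  = np.array([12.0, -55.0, 3.1])   # mas/yr, toy values
mu_dec = np.array([-8.0, 40.0, 1.7])    # mas/yr

mu_total = np.hypot(mu_ra, mu_dec)      # sqrt(mu_ra**2 + mu_dec**2)
print(mu_total.mean())
```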
Interestingly, the objects in the Mismatch Sample show a larger number of detections, ∼3.8 per object, compared to ∼3.5 detections per object for the average object in the USNO sample (see Figure 9). Summing up, the objects we find as mismatches are in general redder and have higher proper motions. This means nearby (< 100 pc) red stars could be significant contributors to the ∼150,000-object Mismatch Sample; for instance, M dwarfs with magnetic flares could be among them. Alternatively, the different detections for "one" USNO object may correspond to different objects that happen to lie close to each other by chance.
The visually inspected sample
We examine the final 1691 candidates and compare the old and new images between the DSS1 and the SDSS, complementing the study with images in several bands from the STScI archive when the DSS1 images were not clear enough. At this stage, most of the candidates turn out to be the result of slightly offset coordinates: the images reveal that the objects are present in both old and new images, with tiny offsets of the central point. About 200 of the 1691 candidates are caused by dead stripes in the SDSS. Finally, about 100 candidates remain, most with a point-like appearance. Nearly all of these candidates are single-epoch observations in the POSS-1 red band. This is likely due to the way the USNO catalogue was constructed, but possibly also due to the order of visual inspection: we started by examining the POSS-1 red images, then the POSS-1 blue, etc., which could introduce a bias in favour of detecting these one-time events in the POSS-1 red band.
One possible way of weeding out plate flaws using only the digital scans is to examine the PSFs of stars in a similar magnitude range on each plate and compare them to the PSF measured for each candidate. Using DS9 we measured the radial profile of the light distribution near a typical star, estimating the width of the PSF as the full width at half maximum (FWHM) of a Gaussian. The sharpness of the stars varies somewhat between the plates, but when we compare the FWHM of a given transient with the FWHM of a well-known star on the same plate, we find that the FWHMs are similar in most cases. However, about 20 objects need to be removed because they either appear asymmetric or have widths significantly smaller than those of real stars. We also find that some typical artifacts have a FWHM significantly larger than normal stars, and we therefore remove candidates with significantly larger PSFs as well. We do, however, keep candidates that look like binary stars or multiple star systems, even though these could well be artifacts.
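For a Gaussian PSF the FWHM follows from the fitted width as FWHM = 2√(2 ln 2) σ ≈ 2.355 σ; a sketch of such a fit on a synthetic radial profile (scipy assumed available):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, amp, sigma, bg):
    return amp * np.exp(-r**2 / (2 * sigma**2)) + bg

# Synthetic radial profile: counts vs. radius (pixels) around a star.
r = np.linspace(0, 10, 50)
counts = gaussian(r, 1000.0, 1.8, 50.0)
counts += np.random.default_rng(1).normal(0, 5, r.size)

(amp, sigma, bg), _ = curve_fit(gaussian, r, counts, p0=(500, 2, 0))
fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma   # ~2.355 * sigma
# Candidates much narrower, broader, or more asymmetric than stars of
# similar magnitude on the same plate are flagged as likely flaws.
```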
We list the ∼100 surviving objects (preliminary candidates) in Table 2. No candidate has a cross-match within 30 arcsec of a source in the General Catalogue of Variable Stars (Samus et al. 2017), which means none of them are already known variable objects.
In a separate article we examine each surviving object in depth, with the aim of identifying the true nature of our sources and selecting the top candidates. As individual examples, we include images of typical objects seen in only one epoch (see Figures 11 and 12). The latter candidate stands out among the others: something is visible in the POSS-1 and POSS-2 red filters, but with a slight shift, while nothing is visible in the more recent images from SDSS and Pan-STARRS. However, one must take into account the exact location, the signal-to-noise ratio of the detections, and the elongated fibre-like structure next to the two stars in POSS-1 (possibly an artifact?); this needs further investigation. While the blue POSS filters are not shown here, there is possibly an extremely faint detection in the POSS-1 blue filter, but nothing at all visible in the POSS-2 blue.
5. DISCUSSION
The VASCO project aims to look for vanishing and appearing objects using old and new sky surveys. In 2016 we performed a pilot study (Villarroel et al. 2016) and searched for vanishing stars in a cross-match of 10 million no-proper-motion objects in USNO and SDSS (since 2000). We found one point source and established that the probability of discovering vanishing events was about one in 10 million (or less) within this time frame of roughly a decade.
We have now performed a follow-up analysis of the old images, further archival searches, and new observations of this object from the Teide Observatory and the Nordic Optical Telescope. Near the original location, within 2.4 arcsec and 1.4 arcsec, respectively, two objects approximately 4 to 4.5 mag fainter can be found in the red band.
We conclude that there are four possibilities: 1. The detection is a variable object that has dropped approximately 4 to 4.5 mag between the 1950s and 2018.
2. The detection is a very red (or redshifted) transient event that happened in March 1950. It could have been an M dwarf that flared during the POSS-1 exposure.
3. The object is a plate scratch. This appears unlikely due to the point-like nature of the detection itself.
4. The object is a nearby, red, faint low-mass star or brown dwarf with a very high proper motion that has allowed the object to move 4.5 arcmin over a time span of 70 years. As there are few stars with such high proper motions in USNO, this appears not too likely either.
Since only one secure detection of these objects exists, and although that detection seems to be of a point source, it is difficult to establish its nature.
We have performed a new, deeper cross-match of 600 million objects from USNO against the entire Pan-STARRS DR1 (starting in 2013) to search for more convincing vanished candidates; this supersedes the previous USNO sample by a factor of 60. As Pan-STARRS goes deeper than SDSS, the new cross-match allows us to exclude a large number of variable objects near the detection limit. We obtain a final sample of about 150,000 mismatches (the "Mismatch Sample") characterized by the lack of a counterpart in Pan-STARRS. We have investigated the properties of the Mismatch Sample and found that the mismatches are generally redder, more variable in the red band, and have higher average proper motion. Many of these could be M dwarfs closer than 100 pc; if an M dwarf was flaring during the POSS-1 exposure, it could be invisible in the Pan-STARRS and SDSS surveys.
Cross-matching the Mismatch Sample with the SDSS, we find that 23,667 objects cannot be found in the SDSS; the number of USNO objects surveyed by this cross-match is about 91 million. We examined each candidate in this subset visually. Most are artifacts of various sorts. However, about 100 candidates are point sources visible only in the photographic POSS-1 plates taken from the 1950s to the 1970s. Scaling up, the complete mismatch sample should contain at least ∼700 detections of this class. This is a significantly larger number than the eight known objects in our Galaxy with proper motion larger than 5 arcsec yr⁻¹. A mismatch sample utilizing a 5 arcsec cross-match radius instead of the 30 arcsec radius currently used is expected to provide even more potential detections.

[Figure caption (object from Table 2): We show the images from (upper left) POSS-1 E red, (upper right) POSS-2 red, (lower left) a combination of SDSS filters, (lower right) Pan-STARRS r. The object is seen in the POSS-1 red band and has the coordinates (ra, dec) = 277.7422, 40.90004. Afterwards, it seems to have "vanished". Fine centering the coordinates of this object gives (ra, dec) = 277.734042, 40.9054433.]
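The "∼700 detections" figure is the naive scaling of the visually inspected subset to the full cross-matched sample, using the counts quoted above:

```python
n_found  = 100     # one-epoch POSS-1 detections in the visual subset
n_subset = 91e6    # USNO objects effectively covered by the subset
n_total  = 600e6   # USNO objects in the full cross-match

expected = n_found * n_total / n_subset   # ~660, i.e. "at least ~700"
print(round(expected))
```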
With the visual inspection performed on a subset of the Mismatch Sample (see Section 3.3) we have considered the most interesting candidates from a sample of roughly 90 million USNO objects. Among these, no truly vanishing star was convincingly detected, which means that the chance of finding vanishing-star events in our Galaxy over 70 years is less than 1 in 90 million. In the Appendix we demonstrate with theoretical calculations that encountering a failed supernova in the VASCO searches is unlikely.
5.1. One-hundred red events?

What do we actually know about the transients? For the Villarroel et al. (2016) object, we can use the USNO limiting magnitude b ∼ 21 to set a lower limit on the color, b − r ≥ 1.3. Many other events in Table 2 appear to have much redder colors, up to b − r ≥ 7.4, which may mean that our objects are a mixture of apparently red events. For some objects the red and blue observations may have happened simultaneously, while for others there may have been a significant offset in time between the red and blue observations. We note the similarity of the Villarroel et al. (2016) object to the nuclear transient reported in Figure 6 of Djorgovski et al. (2001), where an event with r ∼ 18.5 is observed in its bright phase in one red image and only seems "extremely faint" in two other filters, while revealing a background galaxy at z ∼ 1 with r ∼ 24.5. In our case, the NOT image reveals two background objects within 1.4 and 2.4 arcsec angular distance 10, close to the original spot; they too could be galaxies at high redshift. However, the offset in position between the USNO and NOT detections, caused either by the low resolution of POSS-1 or by proper motion, puts this explanation into doubt.
Most of the 100 events were detected in one image and never detected again. If one assumes that most of these events were detected in two filters at the same time, they are unusually red for Solar System objects, with half of the objects having colors b − r ≥ 2; Solar System objects are typically much bluer (due to the color of reflected sunlight), even if rare exceptions exist. Taking Figure 11 as a typical example, the POSS-1 red band and blue band images were obtained about a quarter of an hour apart according to the listed epochs for each image in the DSS plate finder 11. The exposure time for the red image is about 50 minutes. If the object were an asteroid moving quickly through the field of the red image in a few minutes, it would appear elongated on the plate. This object is, however, point-like. In addition, the candidate is anomalously red and not seen in the blue band, which further decreases the likelihood that it is an asteroid.
For nearby high proper motion stars to comprise most of our sample, we have far too many candidates. From the Gaia survey we know that there exist only eight stars with proper motions larger than 5 arcsec per year, which is the minimum proper motion needed to explain the "vanishing" events. Therefore, it is unlikely that many of the 100 events are high proper motion objects.
Other events we may have observed are novae, supernovae at high redshift, and microlensing events or flares from M dwarfs. Some of the red transients might be intermediate-luminosity red transients (ILRTs) (Bond et al. 2009) or tidal disruption events. Some of our candidates might be M dwarf flares, as many of our objects are faint (r ∼ 18 − 19), red and appear to have non-zero proper motion. M dwarfs tend to brighten several magnitudes during a flare, and recently a flare of 10 magnitudes was reported (Rodriguez et al. 2018).
We examined the digitised photographic plates for typical plate flaws in Section 4.2, and the objects listed in Table 2 have satisfied the selection criteria. We propose that these objects may be worth following up with transient sky surveys to see whether they can be recovered. We will analyze each of these 100 sources in a separate paper and attempt to carry out deep imaging of them.
Following the VASCO criteria introduced at the end of Section 1, we define the most interesting candidates as either single-epoch transients with large amplitudes (∆m > 5 mag) or objects that were detected in more than one image prior to "disappearance". These objects are listed in Table 3; the candidates displayed in Figures 11 and 12 belong to this table.
5.2. Implications for SETI research
The Search for Extraterrestrial Intelligence (SETI) nowadays includes a broad set of activities; the two large domains of searches are carried out in the radio and in the optical, in hopes of finding so-called "technosignatures" like those we ourselves are already capable of producing, such as interstellar communication with lasers (Schwartz & Townes 1961). Optical SETI searches looking for lasers are particularly interesting, as these signatures often have a low temporal dispersion (as opposed to radio searches), and earthlings already have the technology to produce short, nanosecond laser pulses that could outshine our own Sun by a factor of ∼5000. SETI programs such as the "Panoramic optical and near-infrared SETI instrument" (PANOSETI) are presently preparing instrumentation to search for short light pulses on timescales of nanoseconds to microseconds that may arise from interstellar communication (Wright et al. 2018).
A number of ongoing optical search programs have already produced upper limits on the incidence of both pulsed and continuous laser signals. For example, for 800 nm lasers and 100 second long pulses, the fraction of transmitting civilisations is estimated to be around f ∼ 10⁻⁷ (NASA Technosignature Report 2018). A study that instead looked for signs of a continuous laser in the optical spectra of 5600 FGKM stars (Tellis & Marcy 2015) could exclude the presence of lasers in all of these spectra. Similar searches have been done in the infrared, where extinction is much less of a problem and a wavelength window largely devoid of background noise opens up. The VASCO project may be a "conventional" astrophysics project, but it originated in the context of SETI, as described by Villarroel et al. (2016), who proposed to search surveys for vanished stars in our Galaxy as probes of "impossible effects" that could only be ascribed to an extraterrestrial technology, due to the high likelihood of this as an observational signature. While VASCO attempts to search for transients also of natural astrophysical origins, the project bears implications for SETI research. A general review describing the possibilities of technosignature searches in time-domain astronomy is given by Davenport (2019).

[Table caption: We remeasured the coordinates of the interesting candidates (cf. Table 2). The list contains all events showing a single point source with r < 18.4, either as measured by the listed USNO magnitudes or as remeasured directly from the digitized plates. One object that appears to be seen in more than one image is also included.]
In the VASCO searches we may expect to find transients on three different time scales: (1) a hypothetical vanished star may have existed for billions of years before it vanishes; we have determined that this probability is less than p < 10⁻⁷.
(2) We may find extreme, variable astrophysical objects that vary over timescales of decades. (3) We may find astrophysical transients as short as the exposure time of a typical POSS image (∼1 hour). The short transient detections we see in the red POSS-1 plates may have many different explanations, ranging from instrumental causes to bona fide astrophysical ones. An attractive feature of the list we have produced is that a monochromatic interstellar laser at 600 to 680 nm shining for about one hour may well present itself as a point source detected only once in one image, due to the short time during which the laser operated. Simply put, the single events presented in Section 3.3 have many degenerate solutions; it will be the work of a future publication to disentangle them.
In SETI, frequent technosignature searches also include searches for giant structures that harness the energy of stars and produce waste heat with temperatures T ∼ 100-300 K. The most extreme form is referred to as a Dyson sphere (Dyson 1960), which entirely encloses a star and produces the largest fractional change in the brightness of the object. Carrigan (2009) sought Dyson spheres around 11,000 stars using IRAS photometry and spectroscopy. Zackrisson et al. (2015) surveyed 1359 galaxies with the help of the Tully-Fisher relation, found no convincing candidates, and estimated that the fraction of Kardashev II-III civilisations (Kardashev 1964) capable of transforming their entire galaxy is less than 0.3 percent. Griffith et al. (2015) used WISE and 2MASS to search for IR excesses amongst 100,000 targets that appear to be dust-rich, star-forming galaxies. As waste heat shows the same signatures as dust, for extragalactic objects these searches may be too ambiguous to give confirmable candidates.
One may wonder why a highly advanced Kardashev II-III civilisation, capable of putting Dyson spheres around every star in a galaxy, would spread the effort of harnessing stellar energy over such a giant volume as an entire galaxy. Indeed, an AGN occupies a much smaller space (as small as our Solar System) and offers much more concentrated energy; the quasar 3C 273, for example, has about 4 trillion times the luminosity of our Sun. An AGN may thus be a significantly more effective target to build a Dyson sphere around. Many AGN (in particular obscured ones) naturally have a thick layer of dust dimming the central power source and giving off infrared emission; this dust is located at the sublimation radius. When an AGN is so obscured that hardly any photons leak through to excite the surrounding gas, we may not even detect the typical narrow emission lines that are the signature of an AGN.
When the accretion disk varies in intensity, we expect the corresponding hot dust emission (arising typically ∼0.1 parsec from the supermassive black hole) to respond, but with a time delay. This time delay is often used to infer the physical size of the dust torus. Together with the angular size of the torus, obtainable from interferometry, one can estimate the distance to the AGN, using it as a standard candle (Hoenig et al. 2014).
However, in a dynamic, Dysonian AGN one may expect the time delay of the infrared emission not to follow the typical behavior of a dust torus. The AGN might not even be usable as a standard candle, as the artificial structure will not respond naturally to changes in the power source. Therefore, as an extension of the VASCO project, we suggest searches for extragalactic objects with variability in the infrared; these variable AGN can be followed up with IR reverberation mapping experiments. This research will mainly be aimed at understanding the mysterious nature of the central few parsecs of an AGN. Undoubtedly, VASCO will generate large lists of candidate objects in searches for vanishing stars. Individually, these serve no purpose unless verified: a wide-field search that results in a list of candidates is of no great interest if each candidate sooner or later gets dismissed for lack of verification as a potential SETI candidate.
However, if a region of the sky tends to produce an unexpectedly large fraction of candidates relative to the background, this region or "hot spot" may deserve extra attention. As part of VASCO's research program, we plan to combine the unverified initial results from many different search programs, such as the optical all-sky surveys NIROSETI and PANOSETI, and from other wide-field surveys in general (see Section 5.2), and to visualize the background of unverified candidates in a two-dimensional projection of the sky. Altogether, this noisy background of neglected candidates could reveal "hot spots" of transient activity where, for some reason, many candidates are concentrated. By doing this iteratively with reliable clustering methods and zooming in on the most active regions in our SETI (or technosignature) searches, we can identify the most probable locations to host extraterrestrial intelligence. VASCO will therefore never dismiss any candidate forever: rejection and acceptance are only transient states in the process. The information on potential "hot spots" can further be used to select the most interesting candidates.
5.3. Expanding the set of candidates
What we have presented so far is a cross-match between USNO and Pan-STARRS in search of vanishing objects, using a 30 arcsec cross-match radius. The plan of VASCO, however, is to do the following: 1. Finalize the current search for vanishing objects with a 30 arcsec cross-match radius by examining the entire Mismatch Sample visually and completing the cross-match over the sky regions not yet covered.
2. Search for appearing objects within a 30 arcsec cross-match radius (with Pan-STARRS objects having r < 19).
3. Search for vanishing objects within a 5 arcsec cross-match radius (setting limits in magnitude), including proper motion corrections. Given the larger number of spurious mismatches with this search radius, we will need to develop better automatic methods for identifying candidates in images.
This is a lengthy process requiring considerable time on powerful computing clusters, but it may generate a large list of interesting transients of all sorts.
The large number of images involved in the complete VASCO project and the increased complexity of our searches require a better approach than that of the pilot study. Clearly, we must explore ways to avail ourselves of automated procedures as much as possible, but without relying on algorithms for all candidate selection and quality control. At this moment, such algorithms are still being developed. Current problems include the inefficiency of comparing two images manually, the difficulty of comparing CCD images with images from old photographic plates, and the need to tune the algorithms to identify the most meaningful candidates. In a separate paper, Pelckmans et al. (in prep.) propose a new tool for handling large numbers of images using machine learning methods.
Summary
VASCO is a project that provides an opportunity to discover many past transient events, both objects that vanish and objects that appear. The large time span between the surveys allows phenomena to be discovered beyond what can be expected in ongoing transient surveys such as ZTF. Using a large cross-match radius of 30 arcseconds, we obtained a sample of 150,000 USNO objects that cannot be found in Pan-STARRS, which represents an interesting starting sample in searches for vanishing objects. Because we used a large cross-match radius of 30 arcsec (instead of the more typical 3 to 5 arcsec), we underestimate the real number of potential mismatches that could be found. We have investigated the statistical properties of this sample and found that many of these "mismatches" occur in the red band. Visual checks confirm that the most interesting cases, about 100, are mostly one-time detections in the red band. At present, we do not know what these detections represent; we believe they may be a mixed bag of transient phenomena. The object found by Villarroel et al. (2016) is of the same class, and might possibly be a variable object that dropped 4.5 mag since it was imaged long ago. It could also have been some type of transient event, such as a background high-redshift supernova or a flaring M dwarf.
In good agreement with theoretical predictions for the number of failed supernovae in our Galaxy (see Appendix A), we also set an upper limit on the probability of detecting a vanishing star at less than 1 in 90 million during our time window of 70 years.
Meanwhile, we will keep developing methods to analyze the remaining images in the Mismatch Sample in searches for reliable examples of vanishing stars.
failed SN within a 50 yr window is somewhat better. Assuming that no stars in the initial-mass range [18 M⊙, 25 M⊙] will explode is perhaps too extreme.
In summary, "failed SNe", massive stars that collapse to black holes without any detectable transient event (SN explosion) are not likely to explain vanishing stars in the Galaxy on timescales less than ∼ 1000 yr. Our current understanding of the progenitors of failed SNe (such as the models of LC06 and WH07) indicates that such events should be caught by VASCO only if the present day IMF is extremely top heavy. On a more speculative note, failed SNe are not well understood. Therefore, we cannot rule out the possibility that less massive (and therefore more abundant) stars fail to explode, possibly due to some other mechanism, thus leading to a much higher rate of "vanishing stars". | 2019-11-12T18:51:38.000Z | 2019-11-12T00:00:00.000 | {
"year": 2019,
"sha1": "100e73fab8bd13074c0f879ee1561452e8f2230f",
"oa_license": null,
"oa_url": "http://jultika.oulu.fi/files/nbnfi-fe2019122049110.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "caa9c7ae80d96907ca8b8c3eda6ea5fc681075d5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
sTWEAK as Predictor of Stroke Recurrence in Ischemic Stroke Patients Treated With Reperfusion Therapies
Aim: The purpose of this study was to investigate clinical and neuroimaging factors associated with stroke recurrence in reperfused ischemic stroke patients, as well as the influence of specific biomarkers of inflammation and endothelial dysfunction. Methods: We conducted a retrospective analysis of a prospectively registered database. Of the 875 patients eligible for this study (53.9% males, mean age 69.6 ± 11.8 years; 46.1% females, mean age 74.9 ± 12.6 years), 710 underwent systemic thrombolysis, 87 thrombectomy, and in 78, systemic or intra-arterial thrombolysis together with thrombectomy was applied. Plasma levels of interleukin 6 (IL-6) and tumor necrosis factor alpha (TNFα) were analyzed as markers of inflammation, and soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) as an endothelial dysfunction marker. The main outcome variables of the study were the presence and severity of leukoaraiosis (LA) and stroke recurrence. Results: The average follow-up time of the study was 25 ± 13 months, during which 127 patients (14.5%) showed stroke recurrence. The presence and severity of LA was greater in the second stroke episode (Grade III of the Fazekas scale, 28.3 vs. 52.8%; p < 0.0001). IL-6 levels at the first admission and before reperfusion treatment were similar in patients with and without subsequent recurrence (9.9 ± 10.4 vs. 9.1 ± 7.0 pg/mL, p = 0.439), but differed for TNFα (14.7 ± 5.6 vs. 15.9 ± 5.7 pg/mL, p = 0.031) and sTWEAK (5,970.8 ± 4,330.4 vs. 8,660.7 ± 5,119.0 pg/mL, p < 0.0001). sTWEAK values ≥7,000 pg/mL determined in the first stroke were independently associated with recurrence (OR 2.79; CI 95%: 1.87-4.16, p < 0.0001). Conclusions: The severity and progression of LA are the main neuroimaging factors associated with stroke recurrence. Likewise, sTWEAK levels were independently associated with stroke recurrence, so further studies are necessary to investigate sTWEAK as a therapeutic target.
INTRODUCTION
Since the implementation of the new approach to stroke as a neurological emergency, which has led to the progressive creation of Stroke Units and the development of new reperfusion therapies, short-term outcome has improved in developed countries (1-4), both in terms of mortality and functional outcome. The new guidelines, however, have focused mainly on patient care in the acute phase; there have been fewer developments in post-hospital care and secondary prevention, and some data suggest an increase in late disability in stroke patients (2).
A large part of early, medium-term and late morbidity and mortality is associated with stroke recurrence, which affects 40% of patients at 5 years and 50% at 10 years after the first cerebrovascular episode, both ischemic and hemorrhagic (5-9). Therapeutic strategy, based on the control of vascular risk factors, antiplatelet agents and statins, has not changed significantly, although direct oral anticoagulant drugs have mainly demonstrated fewer hemorrhagic complications (10,11).
The influence of acute-phase reperfusion therapies on stroke recurrence has not been well established. It seems, however, that early recurrence is lower in patients undergoing mechanical thrombectomy, whereas medium- and long-term recurrence is similar in patients who receive intravenous thrombolysis. In both cases, reperfused patients seem to have a better long-term course compared to non-reperfused patients (1,12-14).
On the other hand, there is clinical evidence that the presence of moderate to severe leukoaraiosis (LA, or white matter lesions) may be related to endothelial dysfunction and blood-brain barrier (BBB) damage (15-18). The presence of LA is known to contribute to long-term functional decline, morbidity, and death both in independent outpatients and in stroke patients (19). We have recently identified an endothelial dysfunction marker, the soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK), as a possible biomarker independently associated with hemorrhagic transformation and poor functional outcome in patients with IS undergoing reperfusion therapies, through the presence of LA (20). sTWEAK is constitutively expressed by monocytes, tumor cell lines, and endothelial cells. Via binding to fibroblast growth factor-inducible 14 (Fn14), sTWEAK can function as an inflammatory cytokine. In this line, previous studies have shown that patients with IS have high sTWEAK levels; however, no correlation was found between sTWEAK and ischemic lesion volume during acute stroke (21,22).
At present, the primary goal of secondary prevention strategies after IS is to reduce the risk of recurrent stroke, and information on stroke recurrence and survival is useful to assess the effect of secondary prevention and risk factors for recurrence and death. In this scenario, it would be useful to identify biomarkers that could become therapeutic targets for future treatments or diagnostic indicators for stroke recurrence prevention; this would allow more accurate post-hospital follow-up and care, leading to lower disability and mortality in the medium and long term.
We hypothesized that elevated serum levels of sTWEAK might be involved in a higher frequency of stroke recurrence through the presence of LA. In the present study, we investigate the possible relationship among sTWEAK, LA, and stroke recurrence in reperfused IS patients; compare the results with other inflammation biomarkers; and evaluate functional outcome at 3 months.
Patient Screening
For this study, we enrolled stroke patients admitted to the Stroke Unit of the Hospital Clínico Universitario of Santiago de Compostela (Spain), who were prospectively registered in an approved data bank (BICHUS) and received reperfusion therapies (both intravenous and endovascular) during the acute phase. All patients were treated by expert neurologists according to national and international guidelines. Exclusion criteria for this analysis were: (1) latency time (from symptom onset to hospital care) >4.5 h; (2) previous modified Rankin scale (mRS) score >1; (3) history of chronic inflammatory diseases; (4) lack of at least two neuroimaging studies in the first week; (5) loss to follow-up (personal interview or telephone) at 3 months. The analysis for this study was retrospective, covering the period between September 2007 and September 2017.
For the estimation of stroke recurrence (ischemic stroke (IS) or intracerebral hemorrhage (ICH)) after the first ischemic stroke, the same database (BICHUS) was used for patients re-admitted to the same Stroke Unit. All patients under care in Galicia (a Spanish region in the northwest of the Iberian Peninsula) by the Servizo Galego de Saúde (SERGAS) are registered in a computerized medical record (IANUS), which was used to identify patients who presented recurrence and were seen by primary care doctors or at other hospitals in the public network. Patients treated in private centers or outside Galicia are not registered and were consequently excluded.
Clinical Variables and Neuroimaging Studies
The registry includes demographic variables, vascular risk factors, time from stroke onset to reperfusion therapy, comorbidities and associated treatments, axillary temperature and blood pressure, blood count and coagulation tests, and biochemical variables. The clinical picture was evaluated by certified neurologists using the National Institutes of Health Stroke Scale (NIHSS) at admission, every 6 h during the first day, and every 24 h during hospitalization; the modified Rankin Scale (mRS) was used to evaluate functional outcome at discharge and at 3 months. Effective reperfusion was defined as ≤8 points on the NIHSS during the first 24 h. Poor outcome was defined as mRS > 2 at 3 months. Stroke subtype was classified using the TOAST classification (23).
In the first episode, Computed Tomography (CT) was performed in all patients at admission, and Magnetic Resonance Imaging (MRI) in selected patients. After fibrinolysis or thrombectomy, a follow-up CT scan was performed in all patients at 24 h, at 48 h or at any time if neurological deterioration (increase ≥4 points on the NIHSS) was detected, and between the 4th and 7th day. The presence and severity of LA was assessed on MRI/CT using the Fazekas scale (24), with a total score of 0 to 6 (Fazekas I or Grade I, 1-2; Fazekas II or Grade II, 3-4; Fazekas III or Grade III, 5-6). Hemorrhagic transformation was defined according to ECASS II criteria (25). All neuroimaging tests were analyzed by a neuroradiologist supervised by the same researcher (JMP). The neuroimaging study was completed in 786 (89.8%) patients. In the recurrence episode, in 94 (74.0%) patients only one CT was performed at admission, and in 68 (53.5%) patients a further study was performed between the 4th and 7th day.
Biomarkers
We used plasma levels of interleukin 6 (IL-6) and tumor necrosis factor alpha (TNFα) as markers of inflammation, and soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) as a marker of endothelial dysfunction (26,27). The blood sample to measure biomarkers was collected before the administration of reperfusion treatment in the first stroke and, in the case of a recurrent stroke, in the first hour following admission to the Stroke Unit of the Hospital Clínico Universitario of Santiago de Compostela. In the first episode, IL-6 was measured in 843 patients (96.3%), TNFα in 828 (94.6%) and sTWEAK in 869 (99.3%). In the recurrences, the percentage of patients with a sample available was lower (IL-6, 71.6%; TNFα, 56.7%; and sTWEAK, 67.7%).
Biochemistry, hematology, and coagulation tests were performed in the central laboratory of the Hospital Clínico Universitario of Santiago de Compostela, blinded to clinical and neuroimaging data. IL-6, TNFα and sTWEAK measurements were performed in the Clinical Neurosciences Research Laboratory by researchers blinded to clinical and neuroimaging data. Serum levels of IL-6 and sTWEAK were measured by enzyme-linked immunosorbent assay (ELISA) following the manufacturers' instructions. The IL-6 ELISA kit (BioLegend, San Diego, USA) had a minimum assay sensitivity of 1.6 pg/mL, with intra- and inter-assay coefficients of variation (CV) of 5.0 and 6.8%, respectively. The sTWEAK kit (Human TWEAK ELISA Kit; Elabscience, Texas, USA) had a minimum assay sensitivity of 4.69 pg/mL, with intra- and inter-assay CVs of 5.06 and 5.21%, respectively. TNFα was measured using an IMMULITE 1000 immunoassay system (Siemens Healthcare Global, Los Angeles, USA), with a minimum assay sensitivity of 1.7 pg/mL, an inter-assay CV of 6.5% and an intra-assay CV of 3.5%. Biomarkers were evaluated within the first 3 months after blood sample collection.
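The assay coefficients of variation quoted here are the usual ratio of the standard deviation to the mean of replicate measurements; for reference, with made-up replicates:

```python
import numpy as np

def cv_percent(replicates):
    """Coefficient of variation (%): 100 * SD / mean of replicates."""
    x = np.asarray(replicates, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Made-up intra-assay replicates of one control sample (pg/mL).
print(cv_percent([101.0, 96.5, 104.2, 99.1]))
```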
Endpoints
The main outcome variables were stroke recurrence and the presence and severity of LA evaluated by neuroimaging within the first 48 h after an episode. Secondary endpoints were the associations between stroke recurrence and plasma levels of IL-6, TNFα, and sTWEAK.
Statistical Analysis
For the descriptive study of quantitative variables we used the mean ± one standard deviation or the median [range], according to the type of distribution determined by the one-sample Kolmogorov-Smirnov test with the Lilliefors significance correction. The significance of differences was estimated using Student's t-test or the Mann-Whitney U test. One-way analysis of variance (ANOVA) was used to compare differences between more than two groups. Qualitative variables were expressed as percentages, and differences were tested with the chi-square test and, if applicable, the uncertainty coefficient. The independent variables associated with stroke recurrence were identified using multiple regression models, with continuous or categorical variables determined in the first stroke. First, we fitted logistic regression models including all variables with significant differences in the univariate analyses, grouped according to demographic and background data, clinical and progression data, and neuroimaging data. With the selected variables, a new logistic regression model was developed, which finally included the results of the biomarker analysis. To evaluate the ability of biomarkers to classify values associated with stroke recurrence, ROC (Receiver Operating Characteristic) curves were constructed, converting continuous variables into categorical ones at the value offering maximum sensitivity and specificity. Results were expressed as odds ratios (OR) with 95% confidence intervals (95% CI). Values of p < 0.05 were considered significant. Analyses were performed with IBM SPSS v. 25 for Mac.
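The cut-off "offering maximum sensitivity and specificity" corresponds to maximizing the Youden index along the ROC curve. A minimal sketch with simulated data (the group means and SDs mimic the sTWEAK values reported below; scikit-learn assumed available):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
recur = rng.binomial(1, 0.15, 875)              # 1 = stroke recurrence
stweak = np.where(recur == 1,
                  rng.normal(8660, 5119, 875),  # recurrent group
                  rng.normal(5971, 4330, 875))  # non-recurrent group

fpr, tpr, thresholds = roc_curve(recur, stweak)
youden = tpr - fpr                              # sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]
print(roc_auc_score(recur, stweak), cutoff)
```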
RESULTS
The first patient was enrolled in January 2008, and by the end of the enrollment period (December 2017) 986 reperfused IS patients had been registered. Figure 1 shows the flowchart of patient groups. We excluded 27 patients who died during the first 24 h and 84 patients for whom no follow-up through either personal interview or IANUS was available. Of the 875 patients eligible for this study (53.9% males, mean age 69.6 ± 11.8 years; 46.1% females, mean age 74.9 ± 12.6 years), 710 underwent intravenous thrombolysis, 87 endovascular therapy (intra-arterial thrombolysis or mechanical thrombectomy) and 78 both intravenous and endovascular therapy. According to the TOAST classification, 206 patients were classified as atherothrombotic (23.5%), 381 as cardioembolic (43.5%), 11 as lacunar (1.3%) and 277 as undetermined (31.7%). Symptomatic hemorrhagic transformation (HT) was noted in 280 (32%) patients during the first admission. During follow-up, 127 patients suffered stroke recurrence.
In recurrent strokes, biomarker measurements were similar in the samples collected in the first and in the second episode (IL-6, 9.1 ± 6.9 pg/mL vs. 9 ...). We demonstrated a correlation between sTWEAK levels and the severity of LA at the first admission (Spearman's coefficient, p < 0.0001) (Figure 4A) that does not exist for the other biomarkers, and that the sTWEAK levels measured at admission in the second episode increased in those patients in whom the severity of LA progressed between episodes, as shown in Figure 4B (Spearman's coefficient, p < 0.0001).
The ROC curve analysis of sTWEAK for stroke recurrence shows an area under the curve of 0.651; CI 95%: 0.596-0.705; p < 0.0001. For a cut-off point of 7,000 pg/mL, sensitivity is 63% and specificity 64%. In a logistic regression model adjusted for all biomarkers, only the sTWEAK values ≥7,000 pg/mL measured in the first stroke were independently associated with stroke recurrence (OR: 2.79; CI 95%: 1.87-4.16; p < 0.0001).
When the categorized sTWEAK variable was introduced into the logistic regression model without LA, sTWEAK multiplied the risk of recurrence by 2.48 (Table 4, Model A). If we include LA as a simple categorical variable, sTWEAK levels ≥7,000 pg/mL measured at the onset of the first stroke multiply the risk of a recurrent stroke by 1.62 (Table 4, Model B). Importantly, however, if we include the different grades of LA severity, sTWEAK ≥7,000 pg/mL is no longer an independent predictor of recurrence risk, its effect being subsumed by LA severity (Table 4, Model C).
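The odds ratios in these models are the exponentiated logistic regression coefficients; a sketch with simulated data for a binary sTWEAK ≥ 7,000 pg/mL indicator (statsmodels assumed available; the effect size here is made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
high_stweak = rng.binomial(1, 0.4, 875)         # 1 if sTWEAK >= 7,000 pg/mL
logit_p = -2.2 + 1.0 * high_stweak              # made-up effect size
recur = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(high_stweak.astype(float))
fit = sm.Logit(recur, X).fit(disp=0)
odds_ratio = np.exp(fit.params[1])              # OR for the indicator
ci_low, ci_high = np.exp(fit.conf_int()[1])     # 95% CI on the OR
```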
DISCUSSION
Stroke recurrence is the leading cause of increased mortality and non-motor sequelae, and this complication persists in patients undergoing reperfusion treatments (1, 5-9, 12-14, 28). In our series of patients with acute IS, who received the best possible treatment according to management guidelines, recurrence was 14.5% over an average follow-up time of 2 years. The outcome of patients with recurrence was poor in 86% of cases, with a mortality of 28%. Recurrence in our study is similar to that obtained by several authors (29), but higher than that reported in other studies; this may be explained by our longer follow-up time and the higher age of our patients. Mortality, however, was similar in all the studies reviewed (6,12,30,31).
The type of stroke did not influence the frequency of recurrence, although the second episode led to the reclassification of almost 50% of the undetermined strokes as cardioembolic, and three patients with cardioembolic strokes recurred as intracerebral hemorrhage. Recurrence was significantly lower in patients undergoing thrombectomy than in those receiving systemic thrombolysis, and much lower than when the procedures were combined; previously published data are inconsistent (1, 12-14, 32, 33). In our cases, these results were not influenced by the time between symptom onset and treatment (p = 0.108) or by follow-up time (p = 0.424, data not shown). However, patients in whom effective reperfusion was achieved presented lower recurrence rates. In previous work, we found that treatment with tPA without reperfusion is associated with worse patient outcome, possibly due to a toxic effect of the drug in these cases (34).
In our study, oral anticoagulation and white blood cell count in the first stroke were independent factors associated with stroke recurrence. Although in the first episode the frequency of the cardioembolic subtype was similar in both groups, in the second episode half of the undetermined strokes were reclassified as cardioembolic, which implies an underestimation of the initial diagnosis of cardioembolism. The platelet count was similar in both groups, and the functional situation before stroke was worse in patients who recurred, although this variable did not reach independence in the multivariate model (13). It is interesting that LA was the strongest factor associated with stroke recurrence, and this association is directly related to the severity and extent of the white matter lesion. Despite differences in the neuroimaging studies and in the methods used to quantify LA, this association is widely reported in the literature (35-41). There are, however, some differential findings: (1) the association with lacunar infarctions (35) (in our series of reperfused patients, none of the 11 lacunar infarctions recurred), and (2) the relationship with cardioembolic infarctions, which is not found in other studies (38,39). In our case, the association between LA and recurrence was similar in atherothrombotic, cardioembolic and undetermined strokes (p = 0.383). A possible explanation for this discrepancy might be that in our series the patients with cardioembolic strokes were older (atherothrombotic 69.6 ± 12.6 years, cardioembolic 73.8 ± 11.8 years, lacunar 67.3 ± 11.8 years and undetermined 71.8 ± 12.9 years). Aside from these discrepancies, LA is currently an important factor in poor outcome after stroke. Of the inflammatory markers determined (white blood cells, fibrinogen, C-reactive protein, sedimentation rate, IL-6 and TNFα), only white blood cells and TNFα maintained statistical significance in the first regression models, but they lost it when all clinical and neuroimaging factors were included. However, sTWEAK (levels ≥7,000 pg/mL) was independently associated with an increased risk of stroke recurrence. The strong relationship between sTWEAK levels in the first stroke and the severity of LA suggests that sTWEAK is a surrogate marker for LA; thus, when we included the severity of LA in the regression model, sTWEAK disappeared as an independent recurrence factor (Table 4, Model C).
sTWEAK is a type II transmembrane glycoprotein of the TNF (tumor necrosis factor) superfamily that acts by binding to Fn14, a small type I transmembrane protein. TWEAK-Fn14 is expressed in all the cell types of the neurovascular unit and is overexpressed within a few hours of the onset of cerebral ischemia (42-45). TWEAK-Fn14 overexpression induces an inflammatory profile in brain endothelial cells, with increased secretion of proinflammatory cytokines, production and activation of matrix metalloproteinases that participate in the disruption of the blood-brain barrier, and expression of the intercellular adhesion molecules involved in the binding of white blood cells to the endothelium (46,47). This sustained expression could condition the development and progression of LA and could be the molecular marker of the white matter disease associated with chronic cerebral ischemia. This hypothesis, however, remains to be demonstrated.
From a clinical point of view, sTWEAK does not seem preferable as a predictor of the LA progression associated with increased stroke recurrence, since neuroimaging is more sensitive and specific, at least with the method used (we determined sTWEAK exclusively, not sTWEAK-Fn14). However, the possibility of blocking the activation of the sTWEAK-Fn14 system (with anti-sTWEAK or anti-Fn14 monoclonal antibodies, or through blockade of sTWEAK-Fn14 binding) makes this marker a promising therapeutic target that could decrease the progression of LA and stroke recurrence (48,49).
This study has some limitations. First, it presents the weaknesses of any retrospective study, even if its origin is prospective; bias in the enrollment of patients was reduced because we enrolled all those registered in our hospital and followed up in any hospital of the public system (in Galicia, the network of private hospitals is small). Second, sTWEAK measurements were not simultaneous and were made by different researchers, although measurements were always blind to the clinical and neuroradiological data and supervised by the same senior researchers; the same is true of the clinical and neuroradiological data. Third, it is important to note that LA is a gradual disease affected by different risk factors and not associated with a unique pathological process (16,50); there is the possibility that LA may be associated with factors in the study population other than stroke. It is known that in regions corresponding to LA on neuroimaging, the walls of penetrating arteries are thickened and hyalinized, and there is often narrowing, elongation, and tortuosity of small vessels, potentially leading to reduced cerebral blood flow and permanent BBB damage. Furthermore, after the first stroke, Wallerian degeneration (WD) could develop and cause new white matter hyperintensities related to LA progression (51). Fourth, serum levels of sTWEAK are not a specific marker of a particular process; patients with multiple sclerosis, heart failure, or atherosclerosis also show variations in sTWEAK levels (52). However, we investigated the possible relationship among sTWEAK, LA, and stroke recurrence specifically in reperfused IS patients. The strong points of this work are the unbiased screening of individuals, the high number of enrolled patients, and the large number of biomarkers assessed.
CONCLUSION
Stroke recurrence is associated with increased mortality and non-motor sequelae, and the efficacy of current preventive measures is limited. The presence of an advanced degree of LA, as well as its progression, is the main neuroimaging factor associated with stroke recurrence. sTWEAK (≥ 7,000 pg/mL) is a biomarker correlated with the progression of LA and with stroke recurrence. sTWEAK could become a diagnostic biomarker and a potential therapeutic target for reducing stroke recurrence, but further studies will be necessary.
DATA AVAILABILITY STATEMENT
The original contributions generated for this study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Santiago de Compostela (2019/616). This research was carried out in accordance with the Declaration of Helsinki of the World Medical Association (2008). The patients/participants provided their written informed consent to participate in this study. The funders did not participate in the study design, collection, analysis, or interpretation of the data, in writing the report, or in the decision to submit the paper for publication. | 2021-05-11T13:23:05.983Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "659e2691bec07cb2358470ecd7c2d09bc236c2ee",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.652867/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "659e2691bec07cb2358470ecd7c2d09bc236c2ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270690989 | pes2o/s2orc | v3-fos-license | Untangling the genetics of beta cell dysfunction and death in type 1 diabetes
Background
Type 1 diabetes (T1D) is a complex multi-system disease which arises from both environmental and genetic factors, resulting in the destruction of insulin-producing pancreatic beta cells. Over the past two decades, human genetic studies have provided new insight into the etiology of T1D, including an appreciation for the role of beta cells in their own demise.
Scope of Review
Here, we outline models supported by human genetic data for the role of beta cell dysfunction and death in T1D. We highlight the importance of strong evidence linking T1D genetic associations to bona fide candidate genes for mechanistic and therapeutic consideration. To guide rigorous interpretation of genetic associations, we describe molecular profiling approaches, genomic resources, and disease models that may be used to construct variant-to-gene links and to investigate candidate genes and their role in T1D.
Major Conclusions
We profile advances in understanding the genetic causes of beta cell dysfunction and death at individual T1D risk loci. We discuss how genetic risk prediction models can be used to address disease heterogeneity. Further, we present areas where investment will be critical for the future use of genetics to address open questions in the development of new treatment and prevention strategies for T1D.
INTRODUCTION
Type 1 diabetes (T1D) is a complex autoimmune disease defined by progressive loss of insulin production due to beta cell death and loss of functional beta cell mass, and disease is triggered by environmental factors in genetically susceptible individuals. In European ancestry populations, at least half of the risk of developing T1D is driven by inherited factors [1,2]. Large-scale genetic association studies have identified over 140 variant associations across 93 genomic regions that affect T1D risk and provide insight into different disease mechanisms [3–8]. While islet autoimmunity is a defining feature of T1D, loss of immune tolerance to islet autoantigens cannot fully account for disease onset. Emerging genetic support for non-immune factors contributing to T1D can complement our understanding of immune pathways influencing the disease. A subset of T1D-associated genetic variants has been shown to alter beta cell function, survival, or crosstalk with the immune system, demonstrating mechanisms through which beta cells can affect risk of developing T1D. Here, we review genetic risk factors for T1D, with a particular focus on those influencing beta cells, and discuss how human genetics can advance our understanding of beta cells in T1D pathogenesis and improve clinical care.
PATHOPHYSIOLOGY OF BETA CELLS IN T1D
In the 1980s, a model conceptualized T1D pathophysiology in six stages, beginning with inherited genetic susceptibility (Stage I), a triggering event (Stage II), and a period of active autoimmunity but normal glucose control (Stage III) [9]. In this model, progressive loss of beta cell mass due to autoimmune attack is the critical process leading to impaired insulin secretion (Stage IV), overt diabetes (Stage V), and eventually complete loss of detectable insulin secretion (Stage VI) [9]. Subsequent research has revealed that T1D pathogenesis is more heterogeneous and complex than initially appreciated. For example, no single causal agent has been identified as a trigger for islet autoimmunity, and different individuals may have distinct environmental triggers. The clinical presentation of T1D is also heterogeneous, including differences in age of onset [10] and residual insulin secretion [11]. In addition, the relationships between the emergence of islet autoantibodies (seroconversion), immune infiltration of islets (insulitis), beta cell destruction, and progression to diabetes are more nuanced than originally imagined. For example, only a subset of children with islet autoantibodies progress to beta cell destruction and T1D within ten years [12], and autoantibody type may influence disease progression [13]. As evidence supporting different endotypes of T1D accumulates, new models of T1D consisting of multiple distinct routes of progression and pathogenesis are increasingly being considered. Multiple models have been proposed for how beta cells contribute to T1D pathogenesis. In the canonical model, beta cell death during T1D progression occurs primarily in the context of insulitis, where immune cells infiltrating the islets induce apoptosis in beta cells through cytotoxic T cell-mediated death or proinflammatory cytokines [14–18]. Whether beta cells are simply bystanders of immune attack or have a causal role in the pathogenesis of T1D, for example by triggering an immune attack or responding to immunological or environmental stress, has been widely debated [19–22]. Here, we describe evidence supporting a causal role for beta cells in T1D.
Beta cells triggering autoimmunity
Islet autoantigen-specific T cells are equally frequent in the peripheral blood of unaffected and affected individuals, and the appearance of autoantibodies against islet antigens does not guarantee progression to T1D, suggesting that autoreactive T cell escape from central immune tolerance is not sufficient to initiate T1D [23–25]. The specific events that precede and precipitate immune infiltration of islets are an open question and may be heterogeneous across individuals and T1D subgroups. Beta cell damage, dysfunction, and stress have all been proposed as autoimmune triggers. Enhanced or aberrant antigen presentation by beta cells can initiate or exacerbate the autoimmune response [26,27]. Endoplasmic reticulum (ER) stress and the unfolded protein response (UPR) in beta cells can also contribute to autoimmune initiation [28]. However, a study of infants with monogenic diabetes triggering beta cell dysfunction and stress showed no evidence of enhanced islet autoantibody production [29]. While these results may not be representative of beta cell stress at older ages, they suggest additional factors, likely environmental, are required to initiate islet autoimmunity in T1D.
Beta cell vulnerability
Heterogeneity in beta cell resilience to immune attack or environmental stressors may be equally important in shaping disease progression. The beta cell fragility model suggests that certain individuals have beta cells that are less tolerant of immunological or metabolic stress, leading to increased cell death and risk of diabetes [30,31]. Both T1D and type 2 diabetes (T2D) are associated with variants near GLIS3 (9p24.2) [32,33], which encodes a transcription factor that regulates beta cell development [34]. Mice heterozygous for Glis3 display changes in genes regulating the UPR, consistent with a model of beta cell fragility in both forms of diabetes [30]. Other genetic associations shared between T1D and T2D may have similar functions. Identifying environmental exposures that contribute to T1D incidence in a large fraction of patients has been challenging. Nonetheless, longitudinal monitoring of environmental exposures and islet autoimmunity biomarkers in high-risk individuals has offered clues about environmental contributors, with multiple studies linking chronic enterovirus infection in early childhood to T1D [35,36]. Beta cell response to environmental stressors or stimuli, such as viral infection or cytokines, may ultimately dictate whether an exposure will lead to the development of T1D. After cytokine exposure, beta cells increase expression of MHC class I along with a series of other peptides, including known beta cell autoantigens, misfolded insulin, different splice products, and even fusion peptides, with an enrichment in peptides originating from secretory granules [37]. The magnitude of these responses may determine whether an initial insult leads to chronic inflammation in the islet, which can exacerbate an immune response, leading to progression to later stages of T1D. Beta cell crosstalk with the immune system has also been hypothesized to play a role in T1D progression [38]. Cytotoxic T cells are the most common immune cell in islets from recent onset T1D donors [39]. Meanwhile, HLA class I is hyper-expressed in islets from recent onset donors, and islet-specific antigens, including neo-antigens, are presented for recognition by T cells. Cellular stress and remodeling of the beta cell microenvironment can alter the fidelity of processes such as mRNA and protein synthesis and processing, contributing to the generation of neo-antigens. How changes in the abundance and constitution of beta cell antigens, together with altered HLA class I expression, may help cause T1D pathogenesis is an area of active investigation. On the other hand, beta cells from individuals affected by T1D express molecules such as PD-L1, which inhibits invading cytotoxic T cells [40]. Supporting a role for PD-L1 in preventing T1D, nearly 3% of patients treated with PD-1/PD-L1 blockade in the context of metastatic cancers develop T1D [41], and individuals with inherited PD-L1 deficiency develop early onset T1D [42].
LINKING T1D GENETIC ASSOCIATIONS TO CANDIDATE GENES
The complexity and heterogeneity of T1D pathogenesis is mirrored by the genetic basis of T1D. T1D is a highly polygenic disease where association studies have collectively identified over 90 distinct genomic regions affecting T1D risk. Genetic studies have implicated a range of cell types in the pancreas and other tissues as causal in disease, including T cells, antigen-presenting cells, beta cells, exocrine cells, and others. Human genetic studies can provide further insight into causal processes within these cell types driving T1D. However, causal genes and contexts have not yet been established at most T1D loci. While maps of T1D-associated regions are a valuable first step, GWAS signals must be interpreted in the context of local genetic architecture, genomic function, cellular processes, and overall physiology for their mechanistic and therapeutic value to be realized. In this section, we review the basic principles of interpreting genetic associations, highlighting important considerations to avoid false conclusions, and outline resources that can help to contextualize T1D associations. Rigorous follow-up of T1D regions using these approaches can guide investigation of proposed models of T1D pathogenesis, including the role of beta cells in T1D etiology and progression. We note that strategies for following up on genetic association signals have been reviewed elsewhere [43], and refer readers to these sources for more detailed discussion of statistical methods and experimental approaches.
Prioritizing causal variants with genetic fine mapping
Genome-wide association studies (GWAS) provide robust and reproducible maps of genomic regions influencing a trait. However, genomic regions nominated by GWAS are broad (approximately 500 kilobases to 1 megabase), containing hundreds to thousands of common genetic variants and harboring up to several dozen candidate genes. The low resolution of GWAS signals is due to genetic linkage, where genetic variants in close proximity are more often inherited together and thus alleles are correlated in the population (referred to as "linkage disequilibrium" (LD)). Specialized analyses, accounting for local LD patterns within each region, are required to distill broad association signals into sets of variants which may be causal for disease risk, called "credible sets." The process of defining credible sets in GWAS regions is referred to as genetic fine mapping and is typically implemented using dedicated statistical algorithms for variable selection (Figure 1A). Many GWAS regions contain multiple causal variants, including over 30% of T1D risk loci, each potentially mediated by distinct molecular effects [6,7]. Modern genetic fine mapping algorithms attempt to define a credible set for each independent causal signal at a locus [44,45], with the underlying assumption that each independent signal, and thus credible set, contains a single causal variant. Two recent T1D fine mapping analyses, using different fine mapping algorithms, defined highly concordant credible sets in many T1D regions [6,7], providing a starting point for mechanistic investigation. In some regions, T1D credible sets only partially overlapped between the two studies, reflecting some of the challenges inherent to genetic fine mapping and limitations of available T1D genetic data sets. The success of genetic fine mapping depends on several factors, including the LD structure in the region, the number of independent causal variants in the region, the effect sizes and allele frequencies of causal variants, the size and ancestral background of the study cohort, and the accuracy with which causal variants were assayed in the study (whether the variant was included on the genotyping array or imputed with high accuracy) [46]. The LD structure of a region is a primary determinant of how effective genetic fine mapping can be. If a causal variant is in high LD with many nearby variants, statistical fine mapping methods may not be able to prioritize causal variants over others based on genotype association patterns alone, resulting in large credible sets, sometimes containing thousands of candidate causal variants (e.g., T1D credible sets in the region encoding MEG3 and DLK1) [6,7]. In these regions, incorporating multiple ancestry groups into genetic studies, which have differing patterns of LD, can help improve fine mapping resolution [47]. To date, T1D studies of non-European ancestry groups are small and lack statistical power, limiting their utility for fine mapping [7,48,49]. However, future investment in diverse T1D cohorts could go a long way towards delineating causal variants in many T1D regions. Along with LD structure, the number of independent causal variants in a region is critical to the robustness of credible sets. Generally, credible sets are most robust in regions where only a single signal is identified (e.g., T1D credible sets in the region encoding GLIS3) [6,7]. Fine mapping algorithms can struggle to confidently define credible sets in especially complex regions with several independent causal variants, particularly when causal variants are in partial LD with each other (e.g., T1D credible sets in the regions encoding IL2RA, CTLA4, and UBASH3A) [6,7]. Credible sets in these regions may be more sensitive to technical artifacts or modeling assumptions and are more likely to change as additional data become available [47]. In summary, genetic fine mapping is a useful tool for refining broad GWAS signals into tractable credible sets for experimental follow-up. Existing T1D credible sets can be integrated with molecular data to nominate causal cell types, regulatory elements, and genes using approaches we discuss in the following sections. At the same time, it is important to keep locus-specific factors in mind when interpreting credible sets. In particular, delineating credible sets is challenging in regions with extended LD or multiple partially correlated signals. Credible sets should be interpreted as sets of candidate causal variants prioritized using available genetic data, with the understanding that they may change as additional data become available, particularly from diverse cohorts.
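To make the credible set construction step concrete, the minimal sketch below shows how a 95% credible set is typically assembled once a fine mapping algorithm (e.g., FINEMAP or SuSiE) has already produced per-variant posterior inclusion probabilities (PIPs) for a single causal signal. The variant IDs and PIP values are hypothetical and for illustration only.

```python
# Minimal sketch: build a 95% credible set from per-variant posterior
# inclusion probabilities (PIPs) for a single association signal.
# Assumes the fine mapping model has already accounted for LD; the
# variant IDs and PIPs below are hypothetical.

def credible_set(pips, coverage=0.95):
    """Return the smallest set of variants whose cumulative
    normalized posterior probability reaches the coverage level."""
    total = sum(pips.values())
    ranked = sorted(pips.items(), key=lambda kv: kv[1], reverse=True)
    cumulative, selected = 0.0, []
    for variant, pip in ranked:
        selected.append(variant)
        cumulative += pip / total
        if cumulative >= coverage:
            break
    return selected

example_pips = {"rs111": 0.55, "rs222": 0.30, "rs333": 0.12, "rs444": 0.03}
print(credible_set(example_pips))  # ['rs111', 'rs222', 'rs333']
```

In regions with multiple independent signals, this selection is repeated per signal after the algorithm partitions the association, which is why a single locus can yield several credible sets.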
Prioritizing causal cell types using regulatory annotations
Most common disease-associated variants are in non-coding regions of the genome, likely affecting regulatory elements that govern gene expression across diverse cellular contexts (e.g., enhancers and promoters, stimulatory and basal conditions) [50,51]. Targeted efforts to annotate regulatory elements in pancreas cell types have provided more refined maps of islet cell type-specific regulatory activity (Table 1). More recently, single cell epigenomics has been useful for profiling regulatory elements active in specific cell types within a heterogeneous tissue, such as the pancreas, and has enhanced the definition of regulatory elements in each islet cell type. Integrating regulatory maps with genetic association data can help indicate which tissues and cell types are broadly involved in T1D risk. Functional enrichment analyses are used to determine whether trait-associated variants preferentially overlap regulatory elements active in a given tissue or cell type [52–60]. T1D-associated variants are enriched in immune cell regulatory elements [5], most prominently in T cells, as well as regulatory elements active in islets, particularly those specific to beta cells [6,7,61]. One study identified enrichment specifically in cytokine-induced regulatory elements in islets [62], suggesting that T1D risk in beta cells and other islet cell types may act in response to cytokine signaling. Collectively, functional enrichment analyses support that T1D risk is affected by genetic effects on immune cells and beta cells, as well as non-endocrine pancreatic cell types such as exocrine acinar and ductal cells [6,7,62,63].
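The core logic of a functional enrichment test can be illustrated with a simple contingency-table comparison: do risk variants fall within a cell type's regulatory elements more often than matched background variants? Production tools (e.g., GREGOR, GoShifter, or stratified LD score regression) additionally account for LD and careful variant matching; the sketch below only shows the underlying 2x2 logic, and all counts are invented for illustration.

```python
# Toy enrichment test: overlap of risk variants with beta cell
# regulatory elements versus matched background variants, compared
# with a one-sided Fisher's exact test. Counts are hypothetical.
from scipy.stats import fisher_exact

def overlap_enrichment(risk_in, risk_total, bg_in, bg_total):
    table = [
        [risk_in, risk_total - risk_in],  # risk variants: in / out of peaks
        [bg_in, bg_total - bg_in],        # background variants
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# e.g., 30 of 140 risk variants vs 500 of 10,000 background variants
# overlap beta cell ATAC-seq peaks.
odds, p = overlap_enrichment(30, 140, 500, 10_000)
print(f"odds ratio = {odds:.2f}, p = {p:.2e}")
```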
Prioritizing candidate genes and regulatory mechanisms
Given tissues and cell types broadly enriched for T1D-associated variants, the next challenge is to determine mechanisms of action at specific T1D associations, including the affected gene(s). Both genome-wide and targeted methods can be implemented to this end.
A molecular quantitative trait locus (QTL) is a genetic variant that affects a quantitative molecular trait, such as gene expression ("eQTL"), splice isoform expression ("splice QTL"), protein expression ("pQTL"), or chromatin accessibility ("caQTL") (Figure 1B) [64]. Collaborative efforts, such as the Genotype-Tissue Expression (GTEx) project [65], have generated eQTL maps across diverse cell types and contexts. Human islet QTL studies provide resources for investigating islet-centric mechanisms of GWAS risk variants (Table 2). One approach to generating mechanistic hypotheses at GWAS loci is to integrate candidate causal disease variants with QTL maps using colocalization analysis [68], which formally tests whether genetic associations for two traits may be driven by a shared causal variant (Figure 1B). Like genetic fine mapping, QTL colocalization analysis is performed with dedicated statistical analysis tools [69,70]. Moreover, the same factors that affect genetic fine mapping influence colocalization analyses, including LD structure and the number of independent associations in a locus [64,68,69]. Based on available resources, less than half of GWAS signals colocalize to known eQTLs, and some models indicate that eQTL studies would require very large sample sizes to explain most GWAS associations [71]. These conclusions are informed by existing eQTL studies, which lack cell type- and context-dependent expression measurements. As QTL study sample sizes increase and more diverse cell state contexts and populations are profiled, more eQTL-GWAS colocalizations will be discovered. High throughput chromatin conformation capture assays can be used to generate three-dimensional chromatin maps, aiding in our understanding of the spatial organization of enhancers and other non-coding regions of the genome [72–74]. Using this information, it is possible to annotate target genes by their physical contact with enhancers and other regulatory elements [75]. Further, disease risk information can be overlaid onto these enhancer-gene maps in disease-relevant tissues [72]. While this approach has been used to identify T2D target genes not yet supported by eQTL evidence, such as GLIS3 and INS [76–78], it has not yet been systematically applied to non-coding T1D risk variants. Other methods have been developed to link non-coding regions to target genes in the absence of physical interaction data. Cicero, which assesses co-accessibility of regulatory elements in single cell chromatin accessibility data, identified SOCS1 as a potential target gene in cytokine-treated beta cells, and newer methods have been developed that use paired RNA- and ATAC-seq (multiome) data [79–82]. Together, these methods can bridge the gap when annotating regulatory elements and their target genes and can aid in identifying putative causal genes at T1D risk loci, especially in non-coding regions of the genome. Moving forward, scalable molecular perturbation approaches, including CRISPR-based editing and inhibition/activation screens and massively parallel reporter assays (MPRAs), will provide orthogonal insight into non-coding regulatory mechanisms in disease-relevant tissues (Figure 1C) [83]. We expect that combining perturbation-informed regulatory predictions with well-powered QTL maps generated using single cell approaches in T1D-relevant contexts will help annotate molecular mechanisms for many more T1D-associated regions.
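As an illustration of the intuition behind colocalization, the sketch below approximates the approach taken by methods such as eCAVIAR: given per-variant causal probabilities (PIPs) from fine mapping of both the disease GWAS and an eQTL, the probability that the two signals share a causal variant can be approximated by summing the per-variant product of the two PIPs. The values are hypothetical; dedicated tools such as coloc or eCAVIAR should be used in practice.

```python
# Sketch of an eCAVIAR-style colocalization score: the summed
# per-variant product of GWAS and eQTL posterior inclusion
# probabilities approximates the probability that the two signals
# share a causal variant. PIP values below are hypothetical.

def colocalization_probability(gwas_pips, eqtl_pips):
    shared = set(gwas_pips) & set(eqtl_pips)
    return sum(gwas_pips[v] * eqtl_pips[v] for v in shared)

gwas_pips = {"rs111": 0.70, "rs222": 0.20, "rs333": 0.10}
eqtl_pips = {"rs111": 0.65, "rs444": 0.25, "rs333": 0.10}

# The score is high only when both traits concentrate posterior mass
# on the same variant(s); here most of the signal comes from rs111.
print(round(colocalization_probability(gwas_pips, eqtl_pips), 3))  # 0.465
```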
MODELS TO STUDY CANDIDATE T1D GENES
Understanding how candidate genes contribute to disease in the broader context of cellular- and tissue-level function is required to realize the therapeutic potential of T1D genetics. Studies of monogenic forms of diabetes have provided vital insights into the importance of disease genes and the associated pathways that lead to beta cell dysfunction or failure. For example, the majority of cases of Wolfram syndrome are caused by autosomal recessive mutations in the Wolfram syndrome 1 (WFS1) gene, which encodes wolframin [84,85]. Wolframin deficiency results in altered calcium signaling [86], impaired glucose-stimulated insulin secretion (GSIS) [86,87], and ER stress [88] in beta cells. Studies of WFS1 deficiency may inform mechanisms contributing to beta cell stress or dysfunction and could provide insight into disease processes relevant to T1D, such as beta cell fragility or how ER stress might precipitate autoimmunity [84].
However, the study of monogenic forms of diabetes is unable to holistically model the complex interplay of multiple pathways and tissues that act in concert to cause T1D. The development of T1D models that mimic natural disease etiology and progression has also proved challenging. Heterogeneity in disease course, interactions between multiple tissues and cell types, and contributions of diverse environmental triggers have all been difficult to replicate in animal or in vitro models. Despite these limitations, rodent models, primary human islets, and appropriately chosen cell lines have all provided tremendous insight into the roles of beta cells and candidate genes in T1D pathophysiology. Rodent models of T1D have been extensively reviewed elsewhere [89,90]. Below, we describe models of T1D with special relevance to the field of T1D genetics, with an emphasis on human models.
Rodent models
Mouse and rat models of T1D have been used to study disease progression and for preclinical development of disease-modifying therapies (Figure 2A). Here we focus on models of virally induced and spontaneous diabetes, as well as humanized models in which human tissue is engrafted into immunodeficient mice to study autoimmunity. Transgenic mice expressing lymphocytic choriomeningitis virus (LCMV) nucleoprotein (NP) or glycoprotein (GP) antigen under control of the rat insulin promoter (RIP) are a common virus-induced diabetes model [91,92]. These mice express LCMV-NP/GP as "self" antigen on beta cells. Naïve RIP-NP/GP mice do not develop diabetes spontaneously, but infection with LCMV causes LCMV-specific T cells to recognize the GP/NP-expressing beta cells, resulting in beta cell destruction and development of diabetes within 1–2 weeks [91,92]. This model was designed to investigate the roles of viral infection and loss of peripheral tolerance in T1D development [91,92], and it provides the advantage of a defined autoantigen with readily available antigen-specific T cell receptor transgenic mice, as well as the ability to control the timing of diabetes development. A limitation of the RIP-LCMV model is that it does not capture the complexity of human T1D, in which a variety of genetic and environmental conditions contribute to development of disease.
The most widely used rodent model of spontaneous T1D is the nonobese diabetic (NOD) mouse [93]. In NOD mice, insulitis begins around 3–4 weeks of age, and overt diabetes typically presents between 12 and 14 weeks of age, although incidence and age of onset vary by colony [94]. NOD mouse autoimmune diabetes shares some genetic risk factors with human T1D, including major contributions from MHC class II alleles [95]. NOD mice express a distinct I-Ag7 allele, which contains a polymorphism at position 57 of the I-Aβ chain [96,97]. The same polymorphisms in the human ortholog, HLA-DQβ57, are associated with T1D risk in humans [98]. Outcrossing NOD mice with other strains identified dozens of additional loci, termed insulin-dependent diabetes (Idd) loci, underlying diabetes in NOD mice [99]. While causal genes remain unknown in most Idd loci, several contain orthologs of human T1D genes, including CTLA4 and IL2 [100]. Genetic variants within the Idd9 locus have also been shown to modulate beta cell susceptibility to autoimmune attack [101]. Although the causal genes at this locus remain unknown and may not be orthologous to human T1D genes, understanding the genetic basis of diabetes in NOD mice has the potential to illuminate disease processes that may be present in human T1D. Studies of T1D genetics in the NOD mouse have been extensively reviewed elsewhere [99,101]. Because of the similarities to human disease, pre-clinical studies of T1D-modifying therapies are frequently performed in NOD mice. These therapies have yielded largely disappointing results in clinical trials [102,103], although the success of the anti-CD3 monoclonal antibody Teplizumab (Tzield), which first showed promise as a T1D therapy in NOD mice, supports their value in drug development [104,105]. Humanized mouse models engrafted with functional human tissue have been developed to better mimic human disease processes in mice [106,107]. Immunodeficient recipient mouse strains have been primarily developed on an NOD background [106]. In particular, NOD-scid IL2rγnull (NSG) mice lack mature lymphocytes and NK cells and have been widely used for engraftment of human islets, peripheral blood mononuclear cells (PBMCs), and hematopoietic stem cells (HSCs) [107,108]. By engrafting tissues from T1D donors with diverse genetic backgrounds or with targeted genetic modifications, these models can be used to study how different genetic backgrounds contribute to T1D development [106].
Cell lines
In addition to in vivo models of T1D, several cell lines have been used to probe the role of T1D risk genes in beta cells. Glucose-responsive beta cell lines allow for rapid genetic modification and assessment of glucose-stimulated insulin secretion (GSIS) and cell survival in response to various environmental stimuli. Both rodent and human cell lines have been developed for in vitro studies of beta cell biology and dysfunction (Figure 2A). The cell lines Min6 [109] and INS-1 [110] were generated from mouse and rat insulinomas, respectively. Min6 and INS-1 cells are glucose-responsive at physiologically relevant glucose concentrations [109–111], and INS-1 GSIS has been further enhanced in the INS-1 832/13 subclone stably transfected with a human insulin expression vector [112]. Min6 and INS-1 (and its derivatives) have been widely used for in vitro studies of beta cell function and survival in response to genetic alterations and environmental stressors. An additional murine cell line, NIT-1, was developed from a beta cell adenoma originating from an NOD mouse and displays modest GSIS [113,114]. NIT-1 cells are especially useful for studying the relevance of beta cell gene expression in the development of T1D, as NIT-1 can be rapidly genetically engineered and transplanted into NOD mice. Recently, a genome-wide CRISPR screen of NIT-1 cells transplanted into NOD mice identified Rnls as a modifier of ER stress and beta cell survival [115], and RNLS maps to a T1D risk locus in humans [116].
The development of the first functional human beta cell line was a long-anticipated breakthrough. In the early 2000s, EndoC-βH1 cells were developed through targeted oncogenesis of human fetal pancreatic tissue [117]. Critically, EndoC-βH1 cells were glucose-responsive and expressed common beta cell markers [117]. Recent generations of EndoC-βH cells display improved functionality and maturity. EndoC-βH3 cells harbor Cre-excisable immortalization factors to generate cells that can be expanded and then induced to quiescence to better mimic mature human beta cells [118,119]. EndoC-βH5 cells show a nearly 10-fold increase in insulin release in response to glucose and susceptibility to proinflammatory cytokines similar to human islets [120]. EndoC-βH cells are genetically tractable, allowing researchers to investigate the effects of genetic manipulation on cell function, stress, and survival within a human genetic background. Nonetheless, some studies underline potential limitations of EndoC-βH cells as a model of beta cells in T1D. Karyotypic abnormalities have been reported in EndoC-βH cells, suggesting experiments with these lines should be designed and interpreted with caution [121]. Additionally, EndoC-βH cells express minimal nitric oxide synthase in response to cytokine exposure [122,123]. Ductal cells have been shown to produce nitric oxide synthase in response to cytokines and may be a source of nitric oxide synthase in primary human islets [124,125]. These results highlight the importance of confirming results in primary islets, where beta cells interact with other potentially relevant cell types.
iPSC-derived beta-like cells
The development of mature beta cells from human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) for transplantation into T1D patients has been pursued as a potential T1D cure. The first protocols for generating stem cell-derived beta-like (SC-beta) cells in vitro were published in 2014 [126,127], and the SC-beta cells produced displayed reduced insulin secretion and transcriptional signatures similar to fetal beta cells [126,127]. Since then, differentiation protocols have been developed that yield SC-beta cells approaching the functionality of primary human islets [128,129]. Importantly, current differentiation protocols yield cell preparations containing all endocrine cell types (alpha, beta, delta, gamma, epsilon) that can assemble into islet-like organoids (SC-islets), permitting studies of SC-beta cells in an environment more similar to native islets [130].
In addition to their therapeutic potential, SC-beta cells allow for crucial studies of T1D genetics. iPSCs can be generated from healthy patients or those with T1D and genetically modified to study the effects of risk variants in diverse backgrounds (Figure 2A) [131–133]. iPSC models also enable perturbation experiments, multi-omic analysis, and functional studies (e.g., secretion assays) to be performed on SC-beta cells derived from individual patients, an undertaking that has historically been challenging with limited primary islet tissue. iPSCs can be transplanted into immunodeficient SCID or NOD/SCID mice to study the in vivo role of genetic variation in T1D development. Importantly, iPSC-derived islet-like cells respond to proinflammatory cytokines similarly to primary human islets [134,135] and are amenable to co-culture with immune cells from the same individual to investigate crosstalk between immune and beta cells. Co-culture of iPSC-derived beta-like cells with PBMCs in vitro has demonstrated that thapsigargin-induced ER stress in SC-beta cells activates co-cultured autologous T cells, further supporting the relevance of beta cell stress to T1D pathogenesis and highlighting the value of iPSC-derived models [136].
Although hESCs and iPSCs are valuable tools for studying beta cell development, inefficient differentiations limit their utility. Current differentiation protocols yield SC-beta cells that remain functionally and transcriptionally immature [137–139], and SC-islets contain cell types not present in mature human islets, most notably enterochromaffin-like cells and polyhormonal cells [139,140]. On the other hand, hESC- and iPSC-derived islet cells respond to proinflammatory cytokines similarly to adult human islets [134,135], and may thus represent an interesting experimental model for studying responses to inflammation in early life, a period when beta cells are not yet fully mature but may already be exposed, in some individuals, to the early stages of insulitis. Procurement of immune cells for autologous co-culture studies presents a challenge for modeling T1D autoimmune processes. Immune cells must be either differentiated from iPSCs or collected from donors, timing blood draws for PBMCs with cell differentiations. A more in-depth discussion of modeling T1D processes using SC-derived models is provided in a previous Human Islet Research Network review [141].
Primary human tissue
Using primary cadaveric human islets to study T1D genetics and beta cell dysfunction has high translational potential. However, access to primary human islet tissue is limited, particularly from donors with diabetes, and heterogeneity between donors creates variability in experimental results [142]. Experimental work in primary human islets has been limited by low transfection efficiency of cells within intact islets. However, recent advances in protocols for pseudo-islet generation allow for efficient transduction of dissociated cells prior to reaggregation into functional pseudo-islets [143,144]. This approach makes CRISPR-mediated genome editing possible in human pseudo-islets [144]. Genetically engineered pseudo-islets can be used to investigate the impact of genetic modifications on beta cell function and survival in the context of functional human islet architecture (Figure 2A). Since beta cells work in coordination with each other and the other cell types in the islet [145–147], these models will likely offer insights about genetic mechanisms in T1D that would be inaccessible using isolated beta cell models.
In addition to primary human islets, live pancreatic slices from nondiabetic and diabetic individuals are emerging as a powerful tool to study islet function and morphology in the context of the surrounding exocrine pancreas (Figure 2A). Unlike isolated islets, pancreas sections preserve the surrounding islet microenvironment and information on islet localization within the organ, retaining cell-to-cell interactions and tissue compartments [148]. Live pancreatic slices have been used to investigate beta cell mass in T1D patients [149] and islet capillary function in the context of diabetes [149,150]. Pancreas slices may be valuable for understanding genetic variant effects in the context of different islet immune niches or pancreatic anatomy in autoantibody-positive or T1D patients. The use of live pancreatic slices for studies of diabetes pathogenesis has been recently reviewed elsewhere [151].
In vitro stressors
During T1D progression, beta cells are exposed to proinflammatory cytokines and display evidence of heightened ER stress [152–154]. These conditions can be modeled in vitro in primary islets, pseudo-islets, cell lines, or SC-islets (Figure 2B). Culturing cells with the proinflammatory cytokines interleukin-1 beta (IL-1β), interferon gamma (IFN-γ), and tumor necrosis factor alpha (TNF-α), which are secreted by islet-infiltrating immune cells during progression of T1D, is one of the most common models of beta cell inflammatory stress [154]. Exposure to interferon alpha (IFN-α), which promotes upregulation of MHC class I and ER stress and mediates beta cell death in the presence of IL-1β, has also been used to model early inflammatory processes in T1D [155,156]. Cytokine cocktails induce transcriptional programs in beta cells that are similar to those seen in beta cells from T1D donors, supporting the relevance of this model to human disease [157].
Proinflammatory cytokines in the islet may be secondary to viral infection or other environmental stressors. Viral triggers of islet autoimmunity have been modeled by infecting islets with viruses implicated in T1D, such as Coxsackievirus, or by mimicking viral infection using double-stranded RNA [158,159]. Thapsigargin, a sarco/endoplasmic reticulum Ca2+ ATPase inhibitor, and tunicamycin, which inhibits N-linked glycosylation of proteins, have been used as in vitro models of ER stress [115,160]. Other stimuli have been used to model additional aspects of T1D progression, such as hyperglycemia, hypoxia, and oxidative stress.
Given the complex genetic basis of T1D, a subset of T1D-associated variants likely affect disease by modifying beta cell responses to diabetogenic conditions. However, enormous heterogeneity between individuals in environmental exposures makes such gene-by-environment interaction effects challenging to detect in epidemiological studies. The controlled experimental conditions provided by in vitro systems can increase power to detect modifying effects of genetic variants on beta cell response to exposures. In particular, in vitro models of environmental stimuli can be combined with genetic modification and beta cell function or survival assays to investigate genetic effects on beta cell sensitivity to known or hypothesized environmental causes of T1D.
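To illustrate how such an interaction might be tested on in vitro data, the sketch below fits a linear model with a genotype-by-treatment interaction term, asking whether risk allele dosage modifies beta cell survival under cytokine exposure. All column names and values are invented for illustration; a real analysis would additionally model replicates, batch, and donor effects.

```python
# Toy gene-by-environment interaction test: does risk allele dosage
# modify beta cell survival under cytokine treatment? The data are
# invented; the interaction coefficient captures the modifying effect.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "survival": [0.92, 0.88, 0.90, 0.75, 0.62, 0.51,
                 0.91, 0.87, 0.89, 0.76, 0.70, 0.66],
    "dosage":   [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],  # risk allele count
    "cytokine": [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],  # 0 = basal, 1 = treated
})

# 'dosage * cytokine' expands to both main effects plus their
# interaction; a significant 'dosage:cytokine' term indicates a
# genotype-dependent response to the cytokine exposure.
model = smf.ols("survival ~ dosage * cytokine", data=df).fit()
print(model.params)
```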
T1D-ASSOCIATED REGIONS INFLUENCING BETA CELL FUNCTION
In the sections above, we described tools for nominating T1D candidate genes based on genetic evidence and models for investigating these candidates in cellular, tissue, and organismal contexts. To date, there are few, if any, studies which have conclusively mapped a T1D-associated region to a causal variant and gene acting in beta cells and further demonstrated a cellular/organismal function leading to T1D. Therefore, at present, we have only a partial understanding of how beta cells contribute to T1D risk. In this section, we review T1D-associated regions and candidate genes with genetic evidence supporting a role in beta cells. We also note that there is a large body of literature that has experimentally assessed the function of candidate genes at T1D-associated loci in beta cells, but many of these genes have not yet been linked directly to T1D risk variants. To help demonstrate the current gaps in knowledge at these and other loci, we include here a handful of strong candidate genes that are predicted to affect beta cell function but which have varying degrees of evidence linking them to T1D credible variants (Table 3, Figure 3).
INS
Genetic variation in the region encoding insulin is the largest genetic determinant of T1D susceptibility outside of the MHC. A highly polymorphic variable number tandem repeat (VNTR) in the promoter region of the insulin gene (INS) was identified in the 1980s [161] and confirmed by long-read sequencing to consist of a 14 base pair sequence that repeats up to 200 times [162]. Observed INS VNTR alleles have been grouped into three classes (I, II, and III), where class I alleles have the fewest repeats and class III alleles have the most. The longer class III alleles confer dominant protection against T1D [163,164], potentially by promoting negative selection of autoreactive T cells specific for insulin-derived peptides [165,166]. This hypothesis is supported by evidence showing the protective alleles are correlated with higher insulin expression in the human thymus [166,167].
Table 3. Evidence linking T1D credible variants from recent fine mapping studies to candidate genes.
Fine mapping of the INS region has identified multiple independent associations with T1D [6,7], suggesting that reducing INS region haplotypes to three broad VNTR classes may obscure additional mechanisms at this locus. Based on existing resources, none of the genetic variants associated with T1D affect basal INS expression or splicing in beta cells; however, as discussed in Section 3, QTL maps of human islet cell types are based on expression profiling of aggregate islet tissue under basal conditions from a limited number of donors. Integration of T1D fine mapping with molecular data points to rs4929965 as a candidate causal variant which maps to a beta cell-specific distal regulatory element that contacts the INS promoter [168], indicating that it may affect INS expression in beta cells in the right context. The T1D association tagged by rs4929965 also influences risk of T2D but has opposite effects on the two diseases [33].
Well-powered, islet cell type-specific QTL maps across diverse contexts, for example using in vitro stressors, will likely reveal new regulatory mechanisms of T1D- and T2D-associated variants near INS.
HLA
The most substantial genetic determinants of T1D risk are the human leukocyte antigen (HLA) class II genes (HLA-DRB1, -DQA1, -DQB1, -DPA1, and -DPB1), which encode components of major histocompatibility complex (MHC) class II molecules. In particular, the haplotypes DRB1*03:01-DQA1*05:01-DQB1*02:01 ("DR3") and DRB1*04:01/02/03/05-DQA1*03:01-DQB1*03:02 ("DR4") are strong predictors of T1D risk [169] and have already been used to prioritize high-risk individuals for longitudinal prospective studies of T1D etiology [170]. Additional association signals are also seen in genes encoding MHC class I molecules (HLA-A, -B, and -C) [171]. Both MHC I and MHC II molecules present peptide antigens for recognition by T cells, an essential step in T cell-mediated adaptive immunity. HLA variants mediating T1D risk are concentrated in the peptide binding pockets of MHC molecules [171], where they are suspected to influence binding and presentation of self-antigen. Increased expression of HLA class I genes is observed in insulin-containing islets from recent onset T1D patients [26] and may enhance beta cell destruction by cytotoxic CD8+ T cells. Increased islet HLA class I expression can precede insulitis, and autoantigen presentation by beta cells on MHC I molecules may contribute to T1D etiology [172,173]. MHC II complexes are typically expressed by professional antigen presenting cells (APCs), such as dendritic cells, macrophages, or B cells. However, there is some evidence of ectopic MHC II expression within the islet [174–176]. Studies on the relationship between T1D-associated alleles and MHC I and II expression in beta cells, however, have been limited [177].
PTPN2
At the 18p11 locus, fine mapping identified three independent associations in intronic regions of the protein tyrosine phosphatase non-receptor type 2 (PTPN2) gene [6]. PTPN2 has been implicated in regulating beta cell responses to proinflammatory stress. Expression of PTPN2 in primary human islets and rodent beta cell lines was found to increase after exposure to proinflammatory cytokines [178]. Another tyrosine phosphatase, PTPN22, also strongly associated with T1D, showed no change in expression in beta cells in response to cytokines [178]. Further work showed that PTPN2 regulates IFN-γ signaling and modulates ER stress after cytokine exposure [160]. Additionally, PTPN2 was found to modulate the deleterious effects of TNF on human beta cells via regulation of JNK activity [179]. PTPN2 has also been shown to affect beta cell survival after cytokine exposure in a genome-wide CRISPR loss-of-function screen in EndoC-βH1 cells [80]. Knocking out PTPN2 in stem cell-derived beta-like cells led to increased HLA class I expression and consequently increased recognition by autoreactive T cells [180]. These studies suggest PTPN2 may modulate beta cell apoptosis, by dephosphorylating downstream targets of cytokine signaling, or protect beta cells from immune recognition. While these studies demonstrate a role for PTPN2 in beta cell function and survival, we note that none thus far have formally linked altered PTPN2 activity in beta cells to T1D-associated variants directly, and that other evidence indicates a role for PTPN2 in both the adaptive and innate immune systems.
GLIS3
The 9p24.2 region is one of a few loci associated with both T1D and T2D risk, and formal colocalization analysis supports a shared causal variant with the same direction of effect for both traits [32,33]. T1D fine mapping defined a single causal signal in this locus, with credible variants mapping to an approximately 14 kilobase region intronic to GLIS3 [6,7]. Pancreatic islet chromatin interaction maps indicate the credible set in this locus interacts with multiple genes in the region, including RFX3, RFX3-AS1, and GLIS3, but deletion of the putative causal enhancer only affected expression of GLIS3 [77]. The GLIS3 gene encodes a GLI-similar Kruppel-like zinc finger transcription factor that regulates pancreatic beta cell development [34], and mutations of GLIS3 cause a form of neonatal diabetes [181]. Chromatin immunoprecipitation studies in rodent models have shown that GLIS3 interacts directly with the insulin promoter, as well as with PDX1, MAFA, and NEUROD1, to regulate activity at the insulin promoter [182]. Mouse Glis3-deficient models display hyperglycemia and shortened lifespan. Additionally, Glis3 heterozygous mice had changes in genes regulating the UPR, leading to downstream beta cell stress and supporting the shared beta cell fragility model of T1D and T2D [30]. CRISPR deletion of GLIS3 during embryonic stem cell differentiation into pancreatic beta cells led to decreased representation of INS-positive differentiated cells [183,184]. GLIS3 may also play a role in beta cell survival, as indicated by in vitro studies that assessed beta cell apoptosis in response to proinflammatory cytokines or glucolipotoxicity [185,186]. Given its role in T1D, T2D, and monogenic diabetes, pathways regulated by GLIS3 may represent a therapeutic opportunity in the treatment of multiple forms of diabetes [31].
CLEC16A/DEXI/SOCS1
The 16p13 locus is a gene-rich region harboring multiple independent T1D associations [6,7] and several potential candidate genes [187], including C-type lectin domain containing 16A (CLEC16A), dexamethasone-induced transcript (DEXI), suppressor of cytokine signaling 1 (SOCS1), and MHC class II transactivator (CIITA). Fine mapping defined two T1D credible sets in the region. The primary T1D signal at 16p13 maps to a single CLEC16A intron [6,7]. CLEC16A encodes an E3 ubiquitin ligase essential for mitophagy (selective autophagy of damaged mitochondria) [188–190]. CLEC16A deficiency in rodent and human islets leads to impaired beta cell function and reduced beta cell survival following exposure to proinflammatory cytokines and inflammatory insults [191]. T1D credible variants in CLEC16A are associated with reduced insulin secretion and decreased expression of CLEC16A in human beta cells [188], though there has been no formal colocalization between T1D GWAS signals and islet eQTLs for CLEC16A. In immune cells, T1D credible variants in CLEC16A are associated with DEXI expression and overlap a regulatory element that contacts the DEXI promoter [192]. DEXI modulates the type I IFN/STAT pathway in beta cell lines and primary human islets [193]. Modulation of DEXI in NOD mice, however, did not affect the development of T1D [193,194].
The secondary T1D signal at 16p13 spans multiple introns of RecQ mediated genome instability 2 (RMI2) [6,7]. T1D credible variants in RMI2 overlap a cytokine-responsive regulatory element which is thought to regulate cytokine-dependent expression of SOCS1 in beta cells [80]. In a genome-wide CRISPR screen in EndoC-βH1 cells, SOCS1 promoted cytokine-mediated beta cell survival, and it affects beta cell survival in human and animal models by dampening the inflammatory response [80]. Finally, although not linked specifically to T1D-associated variants, CIITA, a transcriptional regulator of MHC class II gene expression, represents a fourth potential candidate gene in the 16p13 locus [195,196]. This T1D locus illustrates the complexity of interpreting disease associations in regions with multiple compelling candidate genes. Functional validation of variant-to-gene links in disease-relevant models will be vital to teasing apart the true causal mechanisms underlying T1D association in this region.
IFIH1
Interferon-induced helicase 1 (IFIH1) encodes the cytoplasmic viral RNA detector melanoma differentiation-associated protein 5 (MDA5), which is vital for antiviral signaling [197–199]. T1D fine mapping implicates multiple low-frequency variants altering the MDA5 protein [7], the most common of which is rs1990760, an A946T missense mutation in the carboxy-terminal domain (CTD) of MDA5 [197]. The A946T variant, which increases risk for T1D, causes increased cytokine production and gene expression in human PBMCs [200] and is associated with a stronger interferon response to Coxsackievirus B (CVB) in human islets [201]. Reduced expression of MDA5 or defects in the MDA5 helicase 1 domain on the NOD background reduced the incidence of CVB-associated diabetes, in part due to reductions in type I IFNs [202,203], but complete deletion of MDA5 led to an accelerated onset of diabetes in NOD mice following CVB exposure [203]. Taken together, human and mouse evidence indicate the MDA5-mediated antiviral response is likely involved in T1D etiology and suggest therapeutic potential for tuning these responses. However, whether MDA5 contributions to T1D are mediated primarily by its activity within islets, immune cells, or both remains an open question.
DLK1/MEG3
Human genetic studies support paternally inherited risk for T1D in the imprinted region of chromosome 14q32, which contains the genes maternally expressed 3 (MEG3) and delta-like homolog 1 (DLK1) [204]. T1D fine mapping indicates two independent association signals in 14q32 [6]. One T1D credible set was refined to a single candidate variant, rs56994090 [6,7], which colocalized with an islet splice-QTL for the lncRNA MEG3 [205]. The other credible set contains a variant, rs3783355, that overlaps a beta cell-specific regulatory element [63] and showed allelic bias in islet transcription factor ChIP-seq data [206]. Together, these data support rs56994090 and rs3783355 as candidate causal variants for T1D, potentially acting through two distinct regulatory mechanisms or genes within the islet. MEG3 is a maternally expressed long non-coding RNA whose expression is downregulated in islets of T2D donors [207]. The paternally imprinted DLK1 encodes a delta-like non-canonical Notch ligand, which is broadly expressed in rodents during development and later restricted to pancreatic beta cells, pituitary somatotroph cells, bone marrow, adrenal gland, and gonadal tissues [208–210]. Conditional loss of Dlk1 in mouse beta cells did not affect islet size, number, or architecture up to 6 weeks after birth [210]. However, mice bearing transgenic overexpression of Dlk1 within beta cells displayed increased islet mass and insulin secretion [211]. Studies in isogenic hESCs revealed that loss of DLK1 and disruption of DLK1 regulatory regions led to increased beta cell apoptosis [63].
CTSH
T1D fine mapping analyses identified a single credible set of 4 or 5 variants in the chromosome 15q25.1 region, including a nonsynonymous variant in the cathepsin H (CTSH) gene [6,7], which colocalized with a whole-blood eQTL for the same gene [212]. Earlier work suggested that T1D-associated variants may also influence CTSH expression in pancreas [213,214]; however, formal colocalization of these effects has not been evaluated using credible sets or eQTL resources. CTSH encodes a lysosomal cysteine protease, which is ubiquitously expressed and vital for degradation of specific cargo delivered to lysosomes [215]. Cathepsins have been broadly implicated in immune cell function, as MHC class II molecules present antigens derived following lysosomal processing, as well as in autophagy [215,216]. Impairments in beta cell macroautophagy and lysosome function have been observed in T1D [217]. CTSH expression is suppressed by cytokine exposure in both rodent and human beta cells, and overexpression of CTSH protected beta cells against cytokine-mediated apoptosis in part through decreased JNK and p38 signaling and reduced expression of the proapoptotic factors Bim, DP5, and c-Myc [214,218]. These beneficial effects of CTSH appear to be mediated through regulation of the small GTPase Rac2, as Rac2 deficiency abolishes the protective effects of CTSH on beta cell survival [219]. Further, CTSH knockout mice display reduced islet insulin content. Together with observations with CLEC16A, involvement of the CTSH locus suggests organellar quality control in beta cells may be important in T1D.
TYK2
Tyrosine kinase 2 (TYK2) encodes a non-receptor Janus kinase critical for type I IFN signaling that is broadly expressed among immune cell types and beta cells. Fine mapping suggests two independent nonsynonymous variants in TYK2 (rs34536443 (P1104A) and rs12720356 (I684S)) offer protection against T1D [6,7]. Peripheral immune cells from individuals bearing the P1104A variant had significantly reduced STAT1/3 phosphorylation, a readout of TYK2/Janus kinase activity, following exposure to type I IFN across all immune cell subsets [220]. In mouse beta cells, complete Tyk2 deficiency accelerated diabetes induction following exposure to a diabetogenic form of encephalomyocarditis virus, an effect specifically dependent on beta cell Tyk2 loss [221]. This may be due to the importance of TYK2 in beta cell development, as TYK2 knockout human iPSCs also had impaired emergence of endocrine precursors [222]. Alternatively, knockdown of TYK2 appeared to be protective against experimental forms of beta cell damage in EndoC-βH1 cells or human islets following exposure to the viral dsRNA mimic PIC, or in iPSC-derived islets following exposure to IFN-α [155,222–224]. The importance of the protective P1104A kinase domain mutant has not been directly studied in human or rodent beta cells. However, use of a TYK2 pharmacologic inhibitor, which stabilizes the TYK2 pseudokinase domain and has been reported to have similar effects as the P1104A variant, led to reduced IFN-α-mediated upregulation of MHC class I in iPSC-derived human islets and reduced T cell cytotoxicity in co-culture assays [222,225]. These studies suggest that partial TYK2 deficiency or pharmacological recapitulation of the effects of the TYK2 P1104A variant, as opposed to complete TYK2 loss of function, may have beneficial effects on beta cells in the prevention of T1D by inhibiting type I IFN signaling and the consequent upregulation of MHC class I and chemokine production that recruits cytotoxic T cells.
GENETIC SUPPORT FOR T1D HETEROGENEITY
T1D is a heterogeneous disease marked by variation in several traits, including (but not limited to) age of onset [10], first autoantibody present [226], rate of autoantibody spreading and types of subsequent autoantibodies [227], immune infiltration of islets [228], residual insulin secretion [11,229], and susceptibility to secondary complications [230]. Variation across these features is non-random. For example, earlier onset disease is associated with lower residual insulin secretion [11], hyperimmune islets [228], and faster disease progression [231]. Meanwhile, the first-appearing autoantibody distinguishes two patterns of genetic and environmental exposures [226]. This apparent clustering of traits suggests that T1D can potentially be divided into multiple endotypes, each with a distinct mechanistic underpinning that could be addressed by an appropriately matched therapeutic strategy [230]. An alternative hypothesis is that T1D development for each individual is determined by different combinations of multiple causal pathways. This concept, termed the "palette" model, was first proposed in the context of type 2 diabetes risk [232]. Regardless of whether T1D can be broken into discrete endotypes or represents a composite of effects on multiple causal pathways, recognizing patterns of T1D heterogeneity and the causal processes underlying them may facilitate tailored treatments and improved outcomes for patients. For instance, individuals diagnosed after 13 years of age have reduced B cell infiltration and higher retention of beta cell mass [228], suggesting that beta cell dysfunction, rather than beta cell death, may play a more prominent role in older patients, and therapies restoring beta cell function may be more effective in this group. Here, we discuss existing genetic support for heterogeneous T1D etiology and pathophysiology and opportunities for using genetics to further dissect T1D heterogeneity.
6.1. Genetic underpinnings of T1D heterogeneity
Individual T1D loci are known to correlate with features of disease etiology and progression. High-risk children with HLA-DR4 haplotypes tend to develop insulin autoantibodies (IAA) as the first-appearing autoantibody within the first two years of life [233–235]. In contrast, HLA-DR3 haplotypes are associated with glutamic acid decarboxylase antibodies (GADA) as the first-appearing autoantibody, with seroconversion occurring between two and five years of age [233,234]. Multiple T1D risk alleles have been associated with earlier T1D onset [236], and genetic risk factors had a larger effect on T1D risk in younger individuals [212]. However, this does not necessarily imply that young-onset T1D is more 'genetic' (i.e., heritable) than older onset disease. Most genetic discovery for T1D has focused on pediatric (<16 years of age) cohorts of European ancestry, and, therefore, established T1D risk variants will be enriched in T1D cases from these age and ancestry groups. Recent work compared T1D genetic risk prediction in self-reported Hispanic, Black, and White individuals in the Search for Diabetes in Youth (SEARCH) study [237]. This work highlighted the importance of including a larger number of HLA variants common in non-European populations to capture T1D risk across diverse ancestries, and it showed a variable distribution of T1D genetic risk scores across self-reported ethnicities. Expanding studies of T1D to a more diverse patient population, including individuals with later disease onset, lower-risk HLA haplotypes, and non-European ancestries, will likely reveal new pathways contributing to disease in these groups [48,238]. Additionally, genetic studies of T1D within putative endotypes could provide insight into where their etiologic mechanisms diverge.
6.2. Genetic prediction models to address T1D heterogeneity
T1D is distinct among common complex diseases in that known genetic risk factors explain the majority of disease heritability. Genetic risk scores (GRS) for T1D can distinguish high-risk individuals, with areas under the curve (AUC) in independent validation cohorts of >0.9 in Europeans and >0.8 in other ancestry groups [48,239–241]. Given the strong performance of existing T1D GRS, they will be useful for prioritizing high-risk individuals for monitoring and enrollment in early intervention trials. However, there may also be an opportunity to use genetic risk prediction models to dissect T1D heterogeneity, either as markers for T1D endotypes or to partition genetic effects into causal pathways. Most existing biomarkers supporting T1D endotypes cannot feasibly be measured in the general population. For example, establishing the first-appearing autoantibody requires longitudinal autoantibody testing prior to overt symptoms. Similarly, the immune cell composition of pancreatic islets cannot currently be evaluated in living patients. In contrast, genetically derived T1D endotype scores could be assessed at birth to prioritize high-risk children for longitudinal monitoring, or at the time of T1D diagnosis to inform therapeutic decision-making. In a related approach, T1D GRS have already been used to discriminate T1D from other forms of diabetes [242–244]. Ultimately, T1D may not reduce to a fixed number of discrete endotypes. The genetic complexity of T1D hints that its etiology in most individuals is a blend of causal pathways, similar to other complex diseases [232,245]. As new molecular resources are developed to map genetic associations to causal genes in islet and immune cell types, one may eventually use genome-wide profiles to estimate the relative contribution of relevant pathways to the disease process in individual patients. This approach has been pioneered in recent T2D studies, which partitioned genetic risk into etiological pathways and demonstrated heterogeneity across individuals in terms of which pathways were the predominant factor underlying disease risk [246,247]. For T1D, one may envision a genetic score for each of several contributing pathways (e.g., innate viral response, beta cell stress response, or beta cell antigen processing and presentation). Whether through discrete endotypes or cumulative effects of causal pathways, the manifestations of T1D heterogeneity (e.g., variation in beta cell survival and severity of insulitis [228]) are likely to be determined, in part, by the relative contributions of immune and beta cell-intrinsic processes. Genetics will be critical to discerning the relative contributions of these causal processes to disease burden in the population and to the disease process in individual patients.
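Concretely, a GRS is a weighted sum of an individual's risk-allele dosages, with per-SNP weights usually taken from GWAS effect sizes (log odds ratios), and the AUC is the usual summary of its discrimination. The sketch below uses made-up weights and simulated genotypes, not the published T1D GRS:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-SNP weights (GWAS log odds ratios), not the
# published T1D GRS weights.
weights = np.array([0.8, 0.5, 0.3, 0.2, 0.1])

# Simulated risk-allele dosages (0, 1, or 2) for 1,000 individuals.
dosages = rng.integers(0, 3, size=(1000, len(weights)))

# The GRS for each individual is the weighted sum of their dosages.
grs = dosages @ weights

# Toy outcome: case status is made to depend on the same score, so the
# GRS should discriminate well; real validation uses observed diagnoses.
p_case = 1.0 / (1.0 + np.exp(-2.0 * (grs - grs.mean())))
status = rng.binomial(1, p_case)

print(f"AUC = {roc_auc_score(status, grs):.2f}")
```

Note that published T1D scores additionally model interactions between HLA haplotypes, which carry much of the discriminative power; the simple additive form above is only the non-HLA backbone.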
7. PROPOSED AREAS FOR FUTURE FOCUSED EFFORT IN T1D GENETICS
Substantial progress has been made towards understanding the genetic basis of T1D through concerted efforts to recruit T1D cohorts for genetic studies [248]. However, there is still a large gap in translating genetic discoveries into therapeutic opportunities for prevention and treatment. Looking forward, large population biobanks pairing whole genome sequencing (WGS) with deep phenotyping will empower a new era of discovery, including investigation of rare variation, T1D-related traits in healthy individuals, and "phenome-wide" analyses. These expanded genetic studies will be invaluable for contextualizing T1D genetic associations and understanding their effects on beta cell function. In addition to larger genetic association studies, orthogonal efforts will be needed for a complete understanding of the diverse mechanisms contributing to T1D. Here, we highlight three areas critical to a holistic view of T1D genetics and to translating genetic discovery into knowledge. Focused investment in building these resources may reveal novel therapeutic opportunities for T1D.
7.1. Diverse T1D GWAS
T1D prevalence is expected to increase by 46–78% by 2040 in most parts of the world, and by more than 100% in the Middle East and Africa [249]. In the US, T1D incidence is highest in non-Hispanic whites but rising fastest among minority populations, who also have worse clinical outcomes [250]. Meanwhile, later-onset T1D is more common than previously thought [251]. Since nearly all T1D association studies have been performed on pediatric European ancestry cases, genetic studies of T1D in both non-European ancestry and later-onset cohorts are urgently needed. Recruitment of larger, more diverse T1D association cohorts will improve fine mapping of causal variants, uncover new risk loci [252], and improve genetic risk prediction for T1D. Importantly, focused recruitment efforts will be required, as the prevalence of T1D in population-based biobanks is low, particularly in non-Eurocentric groups. For example, one of the largest population biobanks to date, the UK Biobank, contains genetic and health information from more than 500,000 participants; however, over 90% of UK Biobank samples were collected from Eurocentric populations [253]. While the UK Biobank has fueled valuable insights into the genetics of T1D [10,254,255], it contains fewer than 1,500 individuals with T1D in total and fewer than 100 T1D-affected individuals of non-European ancestry, limiting its utility for understanding T1D across diverse groups. The limited genetic studies conducted in non-European populations highlight the importance of targeted recruitment of minority groups for T1D genetic studies. Genotyping of individuals from 22 Arab countries revealed that HLA haplotypes may have different effects on T1D risk depending on ancestry [256]. For example, the DRB1*0401-DQB1*0302 haplotype is protective among Lebanese patients [257] but confers increased susceptibility in Italian and Bahraini populations [257,258]. Different directions of effect across populations could reflect LD patterns (i.e., unmeasured causal variants residing on different haplotypes in the two populations) or environmental modifiers altering the effect of a causal variant on disease risk. In both cases, heterogeneity in effects will diminish the effectiveness of existing T1D risk prediction tools in non-Eurocentric populations. A European GRS [259] using 30 SNPs performed poorly in African-ancestry individuals compared to an African-ancestry GRS using only 7 SNPs [48]. Subsequent application of these GRS models to an independent cohort confirmed the need for more diverse T1D cohorts to improve risk prediction [260]. Moreover, if pathways contributing to disease pathogenesis are heterogeneous across age and ancestry groups, we may struggle to detect or differentiate causal mechanisms that are more prominent in the poorly represented groups. In summary, failure to address the knowledge gap between the genetics of T1D in pediatric-onset European ancestry groups and the genetics of T1D in all other populations may lead to exacerbated health disparities and missed opportunities.
7.2. Biobanks linking human islets to genetic variation
Over the past several decades, biobanking of islets from human donors has expanded rapidly [261]. Biobanks store patient tissue samples linked to clinical records, collected in a standardized manner that reduces variability in tissue collection, storage, and processing. Several biobanks are relevant to the role of the pancreas in T1D. The Network for Pancreatic Organ Donors with Diabetes (nPOD) is a major T1D-specific biobank in the US, having collected tissue from nearly 200 donors with T1D, as well as 60 individuals positive for T1D autoantibodies. In addition to pancreas tissue samples, nPOD also manages the T1D Exchange biobank, housing biospecimens and clinical data on over 1,000 patients with T1D. The Human Pancreas Analysis Program (HPAP) is another biobank, organized and funded through the Human Islet Research Network and the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) [262]. HPAP specializes in the procurement of whole pancreata from T1D, T1D autoantibody positive, and T2D donors, along with matched non-diabetic controls [263]. Samples collected by HPAP are subject to a diverse series of genetic, genomic, cellular, and tissue-based assays, which are made publicly available with the goal of understanding beta cell loss in T1D and T2D. The Integrated Islet Distribution Program (IIDP) is a major provider of human pancreatic islets and related tissue samples, primarily from individuals without diabetes but also including samples from organ donors with T2D [264]. The IIDP has phenotyping and genotyping cores and offers extensive clinical, medical, and islet characteristic data to investigators [265]. Finally, the Alberta Diabetes Institute (ADI) IsletCore is an international biobank providing human pancreatic islets and other associated tissues (spleen, lymph, adipose, etc.). Notably, the ADI IsletCore provides data for several genomic assays, including bulk and single cell RNA-sequencing, along with electrophysiological (Patch-seq) and metabolic (Seahorse, GSIS) data [266,267]. Collectively, these efforts to bank human islet and pancreas samples and to make data publicly available to researchers will help improve our understanding of the genetic regulation of these T1D-relevant tissues. Tissue biobanks present an incredible resource for understanding genetic variant effects on beta cells and on intermediate phenotypes leading to T1D, including clinical variables, hormone secretion, and molecular profiles. Existing biobanks and islet resources can also be leveraged to link intermediate features to each other (e.g., correlating molecular features with ex vivo perifusion assay readouts), to explore environmental effects on beta cells, or to study interactions between the immune system and beta cells. However, as with GWAS, improved recruitment of diverse patients and democratization of current and future data will be necessary to maximize the impact of biobank resources.
7.3. Expanded variant-to-gene maps in human islets
Resources for investigating genetic effects on molecular phenotypes, such as gene expression, within human islets are currently limited in terms of sample size, omics modalities, and quality. The largest human islet eQTL studies include only a few hundred donors and used bulk RNA-sequencing of islets from heterogeneous sources. Incorporating molecular phenotyping of human islets into biobank and consortium efforts will provide a greatly improved resource for molecular QTL analysis. High quality molecular profiling of specific human islet cell types, for example using bulk assays of sorted cells or single cell multi-omic assays, will enable linking genetic variation to molecular features in native contexts. Furthermore, generating molecular maps using beta cells from autoantibody positive or T1D donors could link genetic variants to molecular phenotypes at specific stages of disease progression. Similarly, maps generated from islets following exposure to ex vivo stimuli could yield insight into variants that affect the beta cell response to environmental stressors. More complete molecular maps will help to close the gap between genetic discovery and the molecular processes underlying T1D, and will help to delineate which loci affect beta cell function, survival, and crosstalk with the immune system. Specific risk alleles can be further validated in cell-based systems, such as EndoC-βH1 cells and iPSCs, which are genetically tractable, renewable sources of human beta cells. Using genetically modified EndoC-βH1 cells and iPSCs to identify the effects of variants on beta cell function or survival is an important area of focus for mechanistically linking genetic variants to beta cell dysfunction. Finally, resolving the cell type and context of T1D associations may help provide molecular explanations for observed T1D heterogeneity and inform our understanding of potential T1D endotypes [268].
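As a schematic of the cis-eQTL test from which such maps are built (simulated data; production pipelines, e.g., tensorQTL, add expression normalization, covariates such as genotype principal components and latent expression factors, and multiple-testing correction), expression is regressed on genotype dosage for each gene-variant pair:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300  # donors, on the scale of current islet eQTL studies

dosage = rng.integers(0, 3, size=n)  # genotype at one candidate cis variant
age = rng.normal(55, 10, size=n)     # an example donor covariate

# Simulated expression with a modest per-allele genotype effect.
expression = 0.4 * dosage + 0.01 * age + rng.normal(0.0, 1.0, size=n)

# Linear model: expression ~ intercept + dosage + age.
X = sm.add_constant(np.column_stack([dosage, age]))
fit = sm.OLS(expression, X).fit()

# The dosage coefficient is the eQTL effect size (change per allele).
print(f"beta = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.1e}")
```

A significant dosage coefficient at a T1D risk variant, colocalizing with the disease association, is the kind of evidence used above to nominate genes such as CTSH.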
8. CONCLUDING REMARKS
In the two decades since the advent of large-scale genetic association studies, new mechanistic frontiers have been reached that provide novel inroads into understanding T1D pathophysiology. It is now clear that T1D is a multi-system disease in which beta cells, immune cells, exocrine cells, and other cell types in the pancreas, as well as other tissues such as the thymus and lymph nodes, likely play an etiological role [6,268]. Continued human genetic studies of T1D will be crucial to expand our understanding of how beta cells contribute to T1D risk. Applying stringent criteria to link risk variants to genes at T1D loci will produce more robust insights and therapeutic targets. Human genetics can also support personalized medicine approaches, including matching genetically supported T1D endotypes or causal pathways to appropriate therapies. Finally, the development of next generation models of T1D, including humanized mouse models and isogenic cell systems for studying immune and beta cell crosstalk, will allow investigation of T1D risk loci and their functional effects across multiple cell types [106,131,269–271]. Translating risk loci into mechanistic insight will ultimately help unlock novel therapies to treat or prevent T1D.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
Not applicable.
CONSENT FOR PUBLICATION
Not applicable.
Figure 1: Dissecting a GWAS locus using (A) genetic fine mapping to define credible sets in a region; (B) molecular QTL mapping and colocalization with T1D association signals to nominate causal molecular mechanisms; (C) cell type-specific regulatory annotations and experimental systems to decode putative causal non-coding genomic regions. Created with Biorender.com.
Figure 2: Models used to study candidate T1D susceptibility genes. (A) Mouse, cell line, human, and stem cell-derived models and their applications to study T1D genetics and disease processes. (B) In vitro stressors to model disease processes in the context of diverse genetic backgrounds. GSIS, glucose-stimulated insulin secretion; ER, endoplasmic reticulum; hESCs, human embryonic stem cells; iPSCs, induced pluripotent stem cells; SC-beta cells, stem cell-derived beta-like cells. Created with Biorender.com.
Figure 3: A model of candidate gene contributions to beta cell destruction in T1D. Loss of GLIS3 or DLK1 may impede beta cell differentiation or enhance post-natal beta cell apoptosis, possibly potentiating beta cell fragility in the setting of autoimmunity. In mature beta cells, viral dsRNA signaling through MDA5 (encoded by IFIH1) may induce cytokine and chemokine release, contributing to immune cell recruitment and cytokine signaling in beta cells. Proinflammatory cytokine signaling in beta cells is modulated by TYK2, DEXI, PTPN2, and SOCS1, resulting in downstream transcription factor activation, ER stress (enhanced by reductions in GLIS3), and accumulation of damaged organelles (potentiated by loss of CLEC16A), which together may contribute to beta cell apoptosis (attenuated by overexpression of CTSH) in the setting of proinflammatory cytokine release. In addition to apoptosis, beta cell dysfunction and reductions in glucose-stimulated insulin secretion may be mediated by GLIS3 and CLEC16A. Finally, environmental stressors induce expression of HLA class I and components of HLA class II in beta cells, likely contributing to T cell recognition of beta cells. During T cell development, tolerance to insulin autoantigens is mediated by INS expression in thymic epithelial cells. Autoreactive T cells that avoid clonal deletion in the thymus can be activated by islet autoantigens presented on MHC, resulting in autoimmune destruction of beta cells. Created with Biorender.com.
Table 1 – Resources for mapping gene expression and cis-regulatory element activity in human islets.
"year": 2024,
"sha1": "12ff1381492f353da2437ccfa8814529e421693f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1016/j.molmet.2024.101973",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a171da1bc49e77a52468cb5b2eb7b7f7b167ebf",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |