Superconductivity Induced by Interfacial Coupling to Magnons
We consider a thin normal metal sandwiched between two ferromagnetic insulators. At the interfaces, the exchange coupling causes electrons within the metal to interact with magnons in the insulators. This electron-magnon interaction induces electron-electron interactions, which, in turn, can result in p-wave superconductivity. In the weak-coupling limit, we solve the gap equation numerically and estimate the critical temperature. In YIG-Au-YIG trilayers, superconductivity sets in at temperatures somewhere in the interval between 1 and 10 K. EuO-Au-EuO trilayers require a lower temperature, in the range from 0.01 to 1 K.
The interactions between electrons in a conductor and ordered spins across interfaces are of central importance in spintronics [1,2]. Here, we focus on the case in which the magnetically ordered system is a ferromagnetic insulator (FI). The interaction at an FI-normal metal (NM) interface can be described in terms of an exchange coupling [3][4][5][6]. In the static regime, this coupling induces effective Zeeman fields near the boundary [7][8][9][10]. The magnetization dynamics caused by the coupling can be described in terms of the spin-mixing conductance [4][5][6]. Such dynamics can include spin pumping from the FI into the NM [11,12] and its reciprocal effect, spin-transfer torques [5,13]. These spin-transfer torques enable electrical control of the magnetization in FIs [14].
One important characteristic of FIs is that the Gilbert damping is typically small. This leads to low-dissipation magnetization dynamics [15], which, in turn, facilitates coherent magnon dynamics and the long-range transport of spin signals [5,13]. These phenomena should also enable other uses of the quantum nature of the magnons.
Here, we study a previously unexplored effect that is also governed by the electron-magnon interactions at FI-NM interfaces but is qualitatively different from spin pumping and spin-transfer torques. We explore how the magnons in FIs can mediate superconductivity in a metal. The exchange coupling at the interfaces between the FIs and the NM induces Cooper pairing. In this scenario, the electrons and the magnons mediating the pairing reside in two different materials. This opens up a wide range of possibilities for tuning the superconducting properties of the system by combining layers with the desired characteristics. The electron and magnon dispersions within the layers as well as the electron-magnon coupling between the layers influence the pairing mechanism. Consequently, the superconducting gap can also be tuned by modifying the layer thickness, interface quality, and external fields.
Since the interactions occur at the interfaces, the consequences of the coupling are most profound when the NM layer is thin. We therefore consider atomically thin FI and NM layers. This also reduces the complexity of the calculations. For thicker layers, multiple modes exist along the direction transverse to the interface (x), with different effective coupling strengths. We expect a qualitatively similar, but somewhat weaker, effect for thicker layers.
A model of interface-induced magnon-mediated d-wave pairing has been proposed to explain the observed superconductivity in Bi/Ni bilayers [34]. A p-wave pairing of electrons with equal momenta, so-called Amperean pairing, has been predicted to occur in a similar system [35]. Importantly, the electrons that form pairs in these models reside in a spin-momentum-locked surface conduction band.
By contrast, we consider a spin-degenerate conduction band in an FI-NM-FI trilayer system. We find interfacially mediated p-wave superconductivity with antiparallel spins and momenta. These pairing symmetries are distinct from those of the 2D systems mentioned above. We assume that the equilibrium magnetization of the left (right) FI is along the ẑ (−ẑ) direction; see Fig. 1. We consider matching square lattices, with lattice constant a, in all three monolayers. The interfacial plane comprises N sites with periodic boundary conditions. The Hamiltonian is

H = H_FI^A + H_FI^B + H_NM + H_int,

where we use A (B) to denote the left (right) FI. The Heisenberg Hamiltonian

H_FI^A = −J Σ_{i∈A} Σ_{j∈NN(i)} S_i^A · S_j^A  (2)

describes the left FI. Here, i is an in-plane site, NN(i) is the set of its nearest neighbors, J is the exchange interaction, and S_i^A is the localized spin at site i. The expression for H_FI^B is similar. For the time being, we assume that the conduction electron eigenstates in the NM are plane waves of the form c_{q,σ} = Σ_j exp(i r_j · q) c_{jσ}/√N. Here, c†_{jσ} (c_{jσ}) creates (annihilates) a conduction electron with spin σ at site j in the NM, and q is the wavevector. For now, the NM Hamiltonian is H_NM = Σ_{qσ} E_q c†_{qσ} c_{qσ}, and the dispersion is quadratic, E_q = ℏ²q²/(2m). Here, m is the effective electron mass. Below, when estimating the coupling J_I at YIG-Au interfaces, we consider another Hamiltonian with different eigenstates and a different dispersion. We model the coupling between the conduction electrons and the localized spins as an exchange interaction of strength J_I:

H_int = −2J_I Σ_{i∈A∪B} S_i · s_i,  with s_i = (1/2) Σ_{σσ′} c†_{iσ} σ_{σσ′} c_{iσ′},

where σ = (σ_x, σ_y, σ_z) is a vector of Pauli matrices. After a Holstein-Primakoff transformation, we expand the Heisenberg Hamiltonian given in Eq. (2) up to second order in the bosonic operators and diagonalize it. We replace S_j^{A+} → √(2s) a_j and S_j^{Az} → s − a†_j a_j, where s is the spin quantum number of the localized spins and a_j (a†_j) is a bosonic annihilation (creation) operator at site j. The magnons in layer A, with the form a_k = Σ_{j∈A} exp(i r_j · k) a_j/√N, are the eigenstates of the resulting Hamiltonian. Analogously, the magnons in layer B are denoted by b_k. The magnon dispersion is

ε_k = 4Js[2 − cos(k_x a) − cos(k_y a)].  (5)

We disregard second-order terms in the bosonic operators from the interfacial coupling and obtain

H_int ≈ V Σ_{kq} (a_k c†_{q↓} c_{q+k↑} + b_k c†_{q↑} c_{q+k↓} + h.c.),

where V = −2J_I√s/√(2N) is the coupling strength between the electrons in the NM and the magnons in the FI layers.
There is no induced Zeeman field in the NM since the magnetizations in the FIs are antiparallel. Analogously to phonon-mediated coupling in conventional superconductors, the magnons mediate effective interactions between the electrons. For electron pairs with opposite momenta, we obtain

H_pair = Σ_{kk′} V_{kk′} c†_{k′↑} c†_{−k′↓} c_{−k↓} c_{k↑},

with the interaction strength

V_{kk′} = 2V² ε_{k+k′} / [(E_k − E_{k′})² − ε_{k+k′}²].  (8)

We define the gap function in the usual way, Δ_k = −Σ_{k′} V_{kk′} ⟨c_{−k′↓} c_{k′↑}⟩, which leads to the BCS-type gap equation

Δ_k = −Σ_{k′} V_{kk′} [Δ_{k′} / (2√(E_{k′}² + |Δ_{k′}|²))] tanh[√(E_{k′}² + |Δ_{k′}|²)/(2k_BT)].  (9)

In the continuum limit, we replace the discrete sum over momenta k′ with integrals over E′ = E_{k′} and the angle φ′, where k = k[sin(φ), cos(φ)]. We assume that only the conduction electrons close to the Fermi surface form pairs. The magnon energy that appears in Eq. (8) is then given by ε_{k+k′} ≈ ε(φ′, φ), where

ε(φ′, φ) = ε_{k+k′} evaluated at |k| = |k′| = k_F.  (10)

Here, k_F = √(2mE_F)/ℏ is the Fermi wavenumber. We assume that the NM is half filled, k_F = √(2π)/a. We introduce the energy scale E* = 4sJk_F²a² = 8πsJ, which is associated with the FI exchange interaction. Then, we scale all other energies with respect to E*: δ = Δ/E*, x = E/E*, and τ = k_BT/E*. In this way, the gap equation presented in Eq. (9) simplifies to its dimensionless form, Eq. (11), for δ(x, φ), with the dimensionless coupling constant α = J_I²ma²/(16√2π²ℏ²J); here, we have restricted the energy integral to the range −x_B ≤ x ≤ x_B. We choose x_B, based on the value of α, in the following way: x_B must be sufficiently large that all contributions to the gap from regions outside this range are vanishingly small. In the weak-coupling limit (α ≪ 1), the gap function has a narrow peak near x = 0, and therefore x_B can be much smaller than 1.
To gain a better understanding, we first assume a quadratic dispersion for the magnons, which matches that of Eq. (5) in the long-wavelength limit. Consequently, the dimensionless magnon energy ε(φ′, φ) becomes ε_q(φ′, φ) = 1 + cos(φ′ − φ). Below, we numerically check the correspondence between the solutions resulting from the full dispersion and the solutions obtained with the quadratic approximation assumed here. For the quadratic magnon dispersion, the gap equation has a solution with p-wave symmetry, δ(x, φ) = f(x) exp(±iφ). Applying this ansatz to Eq. (11), we calculate the integral over the angle φ′ in the weak-coupling limit [36].
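To see where this form comes from, note that both momenta lie on the Fermi surface, |k| = |k′| = k_F, so |k + k′|² = 2k_F²[1 + cos(φ′ − φ)]. Inserting this into the long-wavelength limit of Eq. (5), ε_k ≈ 2sJa²k², gives ε_{k+k′} ≈ 4sJa²k_F²[1 + cos(φ′ − φ)] = E*[1 + cos(φ′ − φ)]; dividing by E* yields ε_q(φ′, φ) = 1 + cos(φ′ − φ).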
The gap equation then reduces to an equation for f(x) alone, Eq. (12). Using a Gaussian centered at x = 0 as an initial guess, we solve Eq. (12) numerically through iteration [37]. Fig. 2 shows the results. For a fixed coupling α, the maximum value occurs at x = 0 and τ = 0. The dimensionless critical temperature τ_c is the temperature at which the gap vanishes. As in the BCS theory, the gap equation can also be solved analytically by approximating V(x) as a constant with a cutoff centered at x = 0. In this constant-potential approximation, the ratio f_max/τ_c is approximately 1.76, which is slightly lower than what we find numerically; see Fig. 2 (c).
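For concreteness, the iterative scheme can be sketched in a few lines of Python. This is a minimal illustration only: the kernel V below is a placeholder with an integrable singularity, not the actual kernel of Eq. (12), and α, x_B and τ are arbitrary illustrative values.

```python
import numpy as np

# Fixed-point iteration of a BCS-type gap equation
#     f(x) = alpha * Int_{-x_B}^{x_B} dx' V(x - x') f(x') tanh(E'/(2 tau)) / (2 E'),
# with E' = sqrt(x'^2 + f(x')^2). The kernel V is an illustrative placeholder,
# NOT the paper's Eq. (12); alpha, x_B and tau are arbitrary.

alpha, x_B, tau = 0.05, 0.1, 1e-4
x = np.linspace(-x_B, x_B, 2001)
dx = x[1] - x[0]

V = np.log1p(1.0 / (np.abs(x[:, None] - x[None, :]) + 1e-6))  # V[i, j] = V(x_i - x_j)

f = np.exp(-(x / (0.1 * x_B)) ** 2)        # Gaussian initial guess centered at x = 0
for _ in range(500):
    E = np.sqrt(x**2 + f**2) + 1e-300      # quasiparticle energy (guarded at zero)
    f_new = alpha * dx * V @ (f * np.tanh(E / (2 * tau)) / (2 * E))
    if np.max(np.abs(f_new - f)) < 1e-12:  # converged
        break
    f = f_new

print("f(0) =", f[len(x) // 2])
```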
Let us check that the numerical solutions to Eq. (12), for the quadratic magnon energy, resemble the solutions to Eq. (11) for the full magnon energy of Eq. (10). To this end, we numerically iterate Eq. (11), starting from the solution to Eq. (12) as the initial guess [38]. We consider the case of zero temperature, τ = 0. The symmetries δ(x, φ) = δ(−x, φ) = iδ(x, φ + π/2) = δ*(x, −φ), where δ* is the complex conjugate of δ, imply that we need to consider only x > 0 and 0 < φ < π/4. We show the results of these iterative calculations in Fig. 3. The third iteration of δ is shown in Fig. 3 (a,b). After only three iterations, the differences between consecutive functions are already nearly imperceptible; see Fig. 3 (c,d). The gap as a function of energy still exhibits a peak at the Fermi energy. Compared with the results obtained for a quadratic magnon dispersion, this peak is of a similar shape but is slightly lower and narrower; see the inset of Fig. 3 (c). There are also additional features of δ(x, φ) at positions (x, φ) = (ε(φ′, φ), φ) in the parameter space where the derivative of ε(φ′, φ) with respect to φ′ vanishes.

Next, we estimate the critical temperatures T_c for two possible experimental realizations, one in which the FI is yttrium iron garnet (YIG) and one in which the FI is europium oxide (EuO). The NM layer is gold in both cases. We consider the YIG-Au-YIG trilayer first.
For the FIs, we assume, encouraged by the results presented in Fig. 3, that the low-energy magnons dominate the gap. The relevant magnons can therefore be well described by a quadratic dispersion. Our model assumes that the FI and NM layers have the same lattice structure. However, in reality, the unit cell of YIG is much larger than that of Au. To capture the properties of YIG in our model, we fit the parameters such that the FIs have the same exchange stiffness (D/k_B = 71 K nm² [39]) and saturation magnetization (M_s = 1.6 × 10⁵ A/m [39]) as those of bulk YIG. We assume that each YIG layer has a thickness equal to the bulk lattice constant of YIG (a_YIG ≈ 12 Å [39]). We use the thickness, the saturation magnetization and the electron gyromagnetic ratio γ_e to estimate the spin quantum number s = M_s a_YIG a²/(ℏγ_e). Using the quadratic dispersion approximation, we determine the exchange interaction to be J = D/(2a²s). The lattice spacing a remains undetermined so far. In the bulk, gold has an fcc lattice and a half-filled conduction band. We use experimental values of the Fermi energy (E_F^B = 5.5 eV [40]) and the Sharvin conductance (g_Sh = 12 nm⁻² [6]) to determine the effective mass, m = 2πg_Sh ℏ²/E_F^B. We assume that the monolayer is half filled and has the same effective electron mass as bulk gold. We consider the case in which the monolayer lattice constant a is equal to the lattice constant a_t of a simple cubic tight-binding model for gold; a_t is approximately 20% smaller than the bulk nearest-neighbor distance of actual gold.
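These estimates are easy to check numerically. A short sketch follows (the inputs are the experimental values quoted above; the relation a_t = √(0.63/g_Sh) anticipates the bilayer analysis in the next paragraph and reproduces the quoted ~20% reduction):

```python
import numpy as np

hbar, k_B, eV = 1.0546e-34, 1.3807e-23, 1.6022e-19   # SI units
gamma_e = 1.761e11                                   # electron gyromagnetic ratio, rad/(s T)

# Experimental inputs quoted in the text
D     = 71 * k_B * 1e-18    # exchange stiffness, J m^2 (D/k_B = 71 K nm^2)
M_s   = 1.6e5               # saturation magnetization, A/m
a_YIG = 12e-10              # YIG layer thickness, m
E_F_B = 5.5 * eV            # bulk gold Fermi energy, J
g_Sh  = 12e18               # Sharvin conductance, m^-2

m = 2 * np.pi * g_Sh * hbar**2 / E_F_B   # effective electron mass, kg
a = np.sqrt(0.63 / g_Sh)                 # monolayer lattice constant a = a_t, m

s = M_s * a_YIG * a**2 / (hbar * gamma_e)  # spin quantum number per site
J = D / (2 * a**2 * s)                     # exchange interaction, J
E_star = 8 * np.pi * s * J                 # energy scale E*

print(f"s = {s:.2f}, J = {J/eV*1e3:.0f} meV, E* = {E_star/eV:.2f} eV")
# -> E* comes out close to the 1.5 eV quoted below
```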
We calculate the interfacial exchange coupling J_I for a YIG-Au bilayer in terms of the spin-mixing conductance, which has been experimentally measured. In doing this calculation, we use the same model for the YIG as in the trilayer case; however, for the gold, we employ a tight-binding model of the form H_t = −t_t Σ_σ Σ_i Σ_{j∈NN(i)} c†_{iσ} c_{jσ}, with a simple cubic lattice. The Hamiltonian of the bilayer is H_B = H_t + H_FI^A + H_int. We assume that J_I s ≪ t_t, which allows us to disregard the proximity-induced Zeeman field. The energy eigenstates c^t_{qσ} and the dispersion E^t_q = 4t_t[3 − cos(q_x a_t) − cos(q_y a_t) − cos(q_z a_t)] of H_t are well known. Under the assumption of half filling, we find that t_t = E_F^B/12 and a_t = √(0.63/g_Sh). We use the same experimental values for E_F^B and g_Sh (from Refs. [40] and [6]) as before.
We set the lattice constant of the trilayer, a, equal to the lattice constant of the bilayer, a_t. This ensures that both models have the same lattice structure at the interface and, consequently, that the interfacial exchange interaction Hamiltonian H_int has the same form in both cases. To first order in the bosonic operators, H_int again takes the magnon-electron form given above, with a coupling strength V_t proportional to the amplitudes of the tight-binding-model eigenstates at the interface. The spin-mixing conductance can now be calculated for the ferromagnetic resonance (FMR) mode [41]. We numerically evaluate V_0 and estimate the bilayer interfacial exchange coupling J_I = (2π)² g↑↓ t_t² a_t²/(9.16 s²) using measured values of the spin-mixing conductance g↑↓. We assume that J_I has the same value in the trilayer case. Using E* = 8πsJ, we find that E* is approximately 1.5 eV. We find the coupling constant α from the relation α = J_I²ma²/(16√2π²ℏ²J). The reported experimental values for the spin-mixing conductance range from 1.2 nm⁻² to 6 nm⁻² [42-44]. In turn, this implies that α lies in the range 0.0014-0.007. The corresponding critical temperatures range from 0.5 K to 10 K.
Next, we consider a EuO-Au-EuO trilayer. Europium oxide has an fcc lattice structure with a lattice constant of 5.1 Å, a spin quantum number of s = 7/2 and a nearest-neighbor exchange coupling of J/k_B = 0.6 K [45]. The nodes on a (100) surface of an fcc lattice form a square lattice in which the lattice constant is equal to the distance between nearest neighbors in the bulk. We assume that the monolayer has the same structure and therefore set a equal to the distance between nearest neighbors in bulk EuO. We use the same effective mass as for the YIG-Au-YIG trilayer. Then, the Fermi energy is E_F = 1.8 eV, and the energy scale E*/k_B is approximately 53 K. Values on the order of 10 meV have been reported for the interfacial exchange coupling strength J_I [46] in EuO/Al [7], EuO/V [8], and EuS/Al [9,10]. These estimates were based on measurements of a proximity-induced effective Zeeman field. Under the assumption that J_I is in the range of 5 to 15 meV, we find a wide range of values, 0.004 to 0.03, for α. We estimate the corresponding critical temperatures numerically using the quadratic dispersion approximation. Finally, we find a range of 0.01 to 0.4 K as possible values for T_c.
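The EuO estimates can be verified the same way; a short sketch (all inputs are the values quoted in the text, with the gold effective mass taken from the YIG estimate above):

```python
import numpy as np

hbar, k_B, eV = 1.0546e-34, 1.3807e-23, 1.6022e-19

m = 9.5e-31                   # gold effective mass from the YIG-Au estimate above, kg
a = 5.1e-10 / np.sqrt(2)      # nearest-neighbor distance of fcc EuO, m
s = 3.5                       # spin quantum number s = 7/2
J = 0.6 * k_B                 # nearest-neighbor exchange, J (J/k_B = 0.6 K)

k_F    = np.sqrt(2 * np.pi) / a        # half-filled monolayer
E_F    = (hbar * k_F)**2 / (2 * m)     # Fermi energy
E_star = 8 * np.pi * s * J             # energy scale E*

print(f"E_F = {E_F/eV:.1f} eV, E*/k_B = {E_star/k_B:.0f} K")
# -> reproduces E_F ~ 1.8 eV and E*/k_B ~ 53 K quoted in the text
```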
In conclusion, interfacial coupling to magnons induces p-wave superconductivity in metals. The critical temperatures are experimentally accessible in the weak-coupling limit. The gap size strongly depends on the magnitude of the interfacial exchange coupling. The thickness dependence, the robustness against disorder, and the physics beyond the weak-coupling limit should be explored in the future.
This work was partially supported by the European Research Council via Advanced Grant No. 669442 "Insulatronics" and the Research Council of Norway via the Centre of Excellence "QuSpin".
[37] To eliminate the singularity in V(x − x′) at x = x′ for the numerical integration, we replace V(x − x′) in Eq. (12) with ∫₀ˣ dx̄ V(x̄ − x′) and f(x) on the left-hand side of Eq. (12) with F(x) = ∫₀ˣ dx̄ f(x̄). In each iteration, we numerically evaluate the integral over x′ in the resulting equation and obtain f(x) by numerically differentiating F.
Finite Subgroups of Group Rings: A survey
In the 1940s Graham Higman initiated the study of finite subgroups of the unit group of an integral group ring. Since then many fascinating aspects of this structure have been discovered. Major questions such as the Isomorphism Problem and the Zassenhaus Conjectures have been settled, leading to many new challenging problems. In this survey we review classical and recent results, sketch methods and list questions relevant for the state of the art.
Introduction
Group rings first came up as natural objects in the study of representations of groups as matrices over fields or, more generally, as endomorphisms of modules. They also appear in topology, knot theory and other areas of pure and applied mathematics. For example, many error-correcting codes can be realized as ideals in group algebras, and this algebraic structure has applications to decoding algorithms.
The aim of this survey is to review the history and state of the art of the study of finite subgroups of units in group rings, with special emphasis on integral group rings of finite groups. For an introduction including proofs of some first results, the interested reader might want to consult [133,119]. Other surveys touching on the topics considered here include [134,85].
Let G be a group, R a ring and denote by U(R) the unit group of R and by RG the group ring of G with coefficients in R. The main problem can be stated as follows: Main Problem: Describe the finite subgroups of U(RG) and, in particular, its torsion elements.
This problem, especially in the case of integral group rings of finite groups, has produced a lot of beautiful results which combine group theory, ring theory, number theory, ordinary and modular representation theory and other fields of mathematics. Several answers to the Main Problem have been proposed. The strongest ones, such as the Isomorphism and Normalizer Problems and the Zassenhaus Conjectures, introduced below, are true for large classes of groups, but today we know that they do not hold in general. Other possible answers, such as the Kimmerle or Spectrum Problems, are still open. We hope that this survey will stimulate research on these and other fascinating questions on group rings. For this purpose, in our final remark we include several open problems and review the status of some problems previously posed in [133].
One of the main motivations for studying finite subgroups of RG in the case where G is finite is the so-called Isomorphism Problem, which asks whether the ring structure of RG determines the group G up to isomorphism, i.e.
The Isomorphism Problem: If the group rings RG and RH are isomorphic, are the groups G and H isomorphic?
(ISO) is the Isomorphism Problem for R = Z and G finite.
Observe that the Isomorphism Problem is equivalent to the problem of whether all the group bases of RG are isomorphic. A group basis is a group of units in RG which is a basis of RG over R. It is easy to find negative solutions to the Isomorphism Problem if the coefficient ring is big, for example, if G and H are finite then CG and CH are isomorphic if and only if G and H have the same list of character degrees, with multiplicities. In particular, if G is finite and abelian then CG and CH are isomorphic if and only if G and H have the same order. As (ISO) was considered a conjecture for a long time it is customary to speak of counterexamples to (ISO).
The "smaller" the coefficient ring is, the harder it is to find a negative solution for the Isomorphism Problem. This is the moral of the following: Remark 1.1. If there is a ring homomorphism R → S then SG ∼ = S ⊗ R RG. Thus a negative answer to the Isomorphism Problem for R is also a negative answer for S. In particular, a counterexample for (ISO) is a negative solution for the Isomorphism Problem for all the rings.
In the same spirit, at least in characteristic zero, the "smaller" the ring R is the harder it is to construct finite subgroups of RG besides those inside the group U(R)G of trivial units. For example, if G is a finite abelian group then all the torsion elements of U(ZG) are trivial, i.e. contained in ±G. This implies that (ISO) has a positive solution for finite abelian groups. This is a seminal result from the thesis of Graham Higman [72], where the Isomorphism Problem appeared for the first time and which raised the interest in the study of units of integral group rings. More than 20 years later Albert Whitcomb proved (ISO) for metabelian groups [140].
The map ε : RG → R sending each element of RG to the sum of its coefficients is a ring homomorphism, called the augmentation map. It restricts to a group homomorphism U(RG) → U(R) whose kernel is denoted V(RG); its elements are called normalized units. Clearly U(RG) = U(R) × V(RG); in particular, U(ZG) = ±V(ZG). It can easily be shown that if RG and RH are isomorphic, then there is a normalized isomorphism α : RG → RH, i.e. ε(α(x)) = ε(x) for every x ∈ RG.
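For example, if g, h ∈ G and u = 3g − 2h ∈ ZG, then ε(u) = 3 − 2 = 1, so u is normalized and, if it happens to be a unit, it lies in V(ZG); by contrast, ε(−g) = −1, so the trivial unit −g lies outside V(ZG).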
Higman's result on torsion units of integral group rings of abelian groups cannot be generalized to non-abelian groups, because conjugates of trivial units are torsion units which in general are not trivial. A natural guess is that all torsion units in the integral group ring of a finite group are of this form, or equivalently, that every normalized torsion unit is conjugate to an element of G. Higman already observed that V(ZS_3) contains torsion units which are not conjugate in U(ZS_3) to trivial units (S_n denotes the symmetric group on n letters). Since Higman's thesis was not that well known, this was reproved many years later by Ian Hughes and Kenneth Pearson. They observed, however, that all the torsion elements of V(ZS_3) are conjugate to elements of S_3 in QS_3 [74]. Motivated by this and Higman's result, Hans Zassenhaus conjectured that this holds for all integral group rings of finite groups [141]: The First Zassenhaus Conjecture (ZC1): If G is a finite group, then every normalized torsion unit in ZG is conjugate in QG to an element of G.
Similar conjectures for group bases and, in general for finite subgroups of ZG, are attributed to Zassenhaus: The Second Zassenhaus Conjecture (ZC2): If G is a finite group then every group basis of normalized units of ZG is conjugate in QG to G.
The Third Zassenhaus Conjecture (ZC3): If G is a finite group then every finite subgroup of normalized units in ZG is conjugate in QG to a subgroup of G.
Some support for these conjectures came from the following results: if H is a finite subgroup of V(ZG), then its order divides the order of G [142] and its elements are linearly independent over Q [72]. Moreover, the exponents of G and V(ZG) coincide; this last fact remains true when Z is replaced by any ring of algebraic integers [43].
The Second Zassenhaus Conjecture is of special relevance because a positive solution for (ZC2) implies a positive solution for (ISO). Actually, (ZC2) is equivalent to the conjunction of (ISO) and (AUT), where (AUT) is the following problem: The Automorphism Problem (AUT): Is every normalized automorphism of ZG the composition of the linear extension of an automorphism of G and the restriction to ZG of an inner automorphism of QG?
In the late 1980s, counterexamples to the conjectures started appearing. The first one, by Klaus Wilhelm Roggenkamp and Leonard Lewy Scott [131,125,132], was a metabelian negative solution to (AUT) and hence a counterexample to (ZC2) and (ZC3). Observe that while (ZC2) fails for finite metabelian groups, (ISO) holds for this class by Whitcomb's result mentioned above. So in the 1990s there was still some hope that (ISO) might have a positive solution in general, as Higman had already stated in his thesis: "Whether it is possible for two non-isomorphic groups to have isomorphic integral group rings I do not know, but the results of section 5 suggest that it is unlikely" [71]. However, Martin Hertweck found in 1997 two non-isomorphic groups with isomorphic integral group rings [56,57]. We elaborate on these questions in Section 3.
So the only one of the above-mentioned questions still open at the end of the 1990s was (ZC1). From the 1980s on, (ZC1) was proven for many groups, including some important classes such as nilpotent or metacyclic groups. A lot of work went into trying to prove it for metabelian groups, but metabelian counterexamples were discovered by Florian Eisele and Leo Margolis in 2017 [50]. Details on (ZC1) are provided in Section 4. Before we give more details on these questions in Section 3, we review in Section 2 some methods to attack the problems mentioned above as well as known results on (ZC3), the strongest conjecture on the finite subgroups of ZG.
The Zassenhaus Conjectures were possible answers to the Main Problem and in particular (ZC1) was still standing as a possible answer for torsion units until recently. Since all three Zassenhaus Conjectures have been disproved, maybe it is time to reformulate them as the Zassenhaus Problems. Another type of answer was proposed by Wolfgang Kimmerle: Kimmerle Problem (KP): Let G be a finite group and u a torsion element in V(ZG). Does there exist a group H which contains G such that u is conjugate in QH to an element of G?
Observe that while (ZC1) asks whether it is enough to enlarge the coefficient ring of ZG to obtain that all torsion units are trivial up to conjugation, (KP) allows one to also enlarge the group.
Recall that the spectrum of a group is the set of orders of its torsion elements. A weaker answer to the Main Problem could be provided by solving the following problem: The Spectrum Problem (SP): If G is a finite group, do the spectra of G and V(ZG) coincide? See more about these problems in Section 5. As V(ZG) and G have the same exponent, at least the orders of the p-elements of V(ZG) and G coincide. So it is natural to ask whether the isomorphism classes of the finite p-subgroups of V(ZG) and G are the same. It is even an open question whether every (cyclic) finite p-subgroup of V(ZG) is conjugate in QG to a subgroup of G. Section 6 deals with these and other questions on finite p-subgroups of V(ZG).
Techniques and results from modular representation theory have been very useful in the study of the problems mentioned in this article. On the other hand, questions in modular representation theory about the role of the defect group of a block are related to questions on p-subgroups of units of group rings. Here rational conjugacy is not as useful, and p-adic conjugacy is the focus, as in the F*-Theorem (Theorem 3.6) and in Theorem 4.2. There is some hope that this kind of result might be applied, e.g., to solve the following question of Scott: Scott's Defect Group Question [131]: Let Z_p denote the p-adic integers, let G be a finite group and let B be a block of the group ring Z_pG. Is the defect group of B unique up to conjugation by a unit of B and suitable normalization?
It is not clear, and should be regarded as part of the problem, what suitable normalization means if the block is not the principal block of the group ring. This problem, and indeed Scott's question in its generality, has been solved by Markus Linckelmann in case the defect group is cyclic [93].
Scott's question is also of interest since, as has been shown by Geoffrey Robinson [123,124], even a weak positive answer to it would provide a proof of the Z*_p-Theorem avoiding the Classification of Finite Simple Groups (CFSG). The first proof of the Z*_p-Theorem using the CFSG is due to Orest Artemovich [6]. Here the Z*_p-Theorem means an odd analogue of the famous Z*-Theorem of George Glauberman.
We use standard notation for the cyclic group C_n and the dihedral group D_n of order n; the symmetric group S_n and the alternating group A_n of degree n; and the linear groups SL(n,q), PSL(n,q), etc. For an element g in a group G we denote by C_G(g) its centralizer and by g^G its conjugacy class in G.
General finite subgroups
Though the group of units of group rings has been studied for about eighty years, there are very few classes of group rings for which the unit group has been described explicitly. For the case of integral group rings, the interested reader can consult the book by Sudarshan Kumar Sehgal [133] and those by Eric Jespers and Ángel del Río [77,78]. Actually, constructing specific units is not that obvious, except for the trivial units of a group ring RG, i.e. those in U(R)G. For example, there is a famous question, studied at least since the 1960s, attributed to Irving Kaplansky [81], though he refers to a question by Dmitrij Smirnov and Adalbert Bovdi [1] (Problem 1.135 in [2]): Kaplansky's Unit Conjecture: If G is a torsion-free group and R is a field then every unit of RG is trivial. Kaplansky's Unit Conjecture is still open, and little progress has been made on it besides the case of ordered groups, for which it is easy to verify (in fact the same proof works for unique product groups). In contrast, the only finite groups G for which all the units of ZG are trivial are the abelian groups of exponent dividing 4 or 6 and the Hamiltonian 2-groups. Actually, these are the only groups for which U(ZG) is finite [71] (see also [77, Theorem 1.5.6]).
From now on, R is a commutative ring, G is a finite group and we focus on finite subgroups of U(RG). The commutativity of R allows us to identify left and right RG-modules by setting (rg)m = m r g⁻¹ for r ∈ R, g ∈ G and m an element of a left or right RG-module.
Higman proved that if G is a finite abelian group then every torsion unit of ZG is trivial. In the 1970s, some authors computed U(ZG) for small non-abelian groups G. For example, Hughes and Pearson computed U(ZS_3) [74], and César Polcino Milies computed U(ZD_8) [116]. As a consequence of these computations, it follows that (ZC3) has a positive solution for S_3 and D_8. These early results were achieved by very explicit computations.
A notion allowing one to deal with more general classes of groups is the so-called double action module: Definition 2.1. Let H be a group and let α : H → U(RG) be a group homomorphism. Let (RG)_α be the R(G × H)-module whose underlying R-module equals RG and on which an element (g, h) ∈ G × H acts by m ↦ g m α(h)⁻¹. The connection between the Zassenhaus Conjectures and double action modules relies on the following observations. For our applications R usually is the ring of integers or the field of rationals, and occasionally the ring Z_p of p-adic integers. More precisely, consider a finite subgroup H of V(ZG). Then the embedding α : H ↪ U(ZG) defines a double action Z(G × H)-module but also a double action Q(G × H)-module (QG)_α. By Proposition 2.2, to prove that H is conjugate in QG to a subgroup of G we need to prove that (QG)_α ≅ (QG)_β for some group homomorphism β : H → G. As two Q(G × H)-modules are isomorphic if and only if they afford the same character, the following formula is relevant:

χ_α(g, h) = |C_G(g)| ε_{g⁻¹}(α(h)⁻¹),   (2.1)

where χ_α denotes the character afforded by (RG)_α for an arbitrary homomorphism α : H → U(RG) and, for a = Σ_{x∈G} a_x x ∈ RG,

ε_g(a) = Σ_{x ∈ g^G} a_x.

The element ε_g(a) is called the partial augmentation of a at g. The partial augmentation has an even more practical role in dealing with the Zassenhaus Conjectures. This connection has been the cornerstone in the study of the Zassenhaus Conjectures; it is the reason why a lot of research has been deployed to study partial augmentations of torsion units of ZG. We collect here some of the most important results in this direction. The first one is also known as the Berman-Higman theorem, named after Higman and Samuil Berman, probably the two earliest researchers in the field. (d) If ε_g(u) ≠ 0 and the p-part of u is conjugate to an element x of G in Z_pG, then x is conjugate in G to the p-part of g [62]. (e) If G is solvable and u has order n, then ε_g(u) ≠ 0 for some element g of order n in G [63]. Proposition 2.3 also motivated the introduction of an algorithmic method to study finite subgroups H of V(ZG), using the characters of G to obtain restrictions on the partial augmentations of the elements of H. Each ordinary character χ of G extends linearly to a map defined on CG; its restriction to H is the character χ_H of a CH-module, and we have

χ(h) = Σ_{g^G} ε_g(h) χ(g),   (2.2)

where the sum runs over representatives of the conjugacy classes of G. Therefore, for each ordinary character ψ of H, the multiplicity ⟨χ_H, ψ⟩ = (1/|H|) Σ_{h∈H} χ(h) ψ(h⁻¹) must be a non-negative integer. This can be used in combination with Propositions 2.3 and 2.4 to prove or disprove the Zassenhaus Conjectures in some cases. The information provided by this on partial augmentations is also information about the characters of double action modules, by (2.1). This sometimes helps to construct specific groups of units and eventually counterexamples to the Zassenhaus Conjectures. See Section 4 for more details.
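Since everything above is concrete, partial augmentations and formula (2.2) can be checked directly on a small example. A minimal, self-contained sketch in plain Python (the element u is an arbitrary element of ZS_3 chosen for illustration, not a unit):

```python
from itertools import permutations

# Partial augmentations in Z[S_3] and the linearity formula (2.2), checked for the
# sign character. Group elements are permutations of (0, 1, 2) stored as tuples;
# a group-ring element is a dict {permutation: integer coefficient}.

S3 = list(permutations(range(3)))

def mul(p, q):                 # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def conj_class(g):
    return {mul(mul(h, g), inv(h)) for h in S3}

def eps(u, g):                 # partial augmentation eps_g(u)
    cls = conj_class(g)
    return sum(c for x, c in u.items() if x in cls)

def sign(p):                   # the sign character, computed from the cycle type
    s, seen = 1, set()
    for i in range(3):
        if i in seen:
            continue
        j, length = p[i], 1
        seen.add(i)
        while j != i:
            seen.add(j)
            j, length = p[j], length + 1
        s *= (-1) ** (length - 1)
    return s

u = {(1, 0, 2): 3, (1, 2, 0): -2, (0, 1, 2): 1}   # u = 3*(01) - 2*(012) + 1
reps = [(0, 1, 2), (1, 0, 2), (1, 2, 0)]          # class representatives: 1, (01), (012)

lhs = sum(c * sign(x) for x, c in u.items())      # chi extended linearly to Z[S_3]
rhs = sum(eps(u, g) * sign(g) for g in reps)      # right-hand side of (2.2)
assert lhs == rhs == -4
print([eps(u, g) for g in reps])                  # -> [1, 3, -2]
```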
In case the subgroup is p-regular, similar formulas are available for p-Brauer characters. More precisely, if H is a finite subgroup of U(ZG) of order coprime to p, χ is a p-Brauer character of G and ψ is an ordinary character of H, then the analogue of (2.2),

χ(h) = Σ_{g^G, p-regular} ε_g(h) χ(g),   (2.3)

holds, where the sum runs over representatives of the conjugacy classes of p-regular elements of G [61,105]. Actually, these are the only partial augmentations relevant for the application of Proposition 2.4, because if h ∈ H and g is p-singular then ε_g(h) = 0 (see statement (c) of Proposition 2.5). These formulas are the bulk of the method introduced by Indar Singh Luthar and Inder Bir Singh Passi, who used it to prove (ZC1) for A_5 [96]. Later it was generalized by Hertweck and used to prove (ZC1) for S_5 and some small PSL(2,q) [61]. It is nowadays known as the HeLP Method. It consists, roughly speaking, in solving the formulas (2.2) and (2.3) for all irreducible χ and ψ, viewing the ε_g(h) as unknowns, and employing additional properties of these integers such as those given in Proposition 2.5. The method has been implemented in the GAP system [52,16] for the case where H is cyclic.
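The core of the HeLP method is an eigenvalue-multiplicity check that is easy to sketch numerically. The snippet below is a minimal illustration in plain Python/NumPy (not the GAP implementation cited above): for a normalized unit u of order n and an ordinary character χ, the multiplicity of the eigenvalue ζ^ℓ of a representing matrix D(u) is μ_ℓ = (1/n) Σ_k χ(u^k) ζ^(−kℓ), and every μ_ℓ must be a non-negative integer; the χ(u^k) come from the partial augmentations via (2.2). As a sanity check, we feed in the data of a trivial unit of order 5 in ZA_5 and one of the 3-dimensional irreducible characters.

```python
import numpy as np

# HeLP multiplicity check, mu_l = (1/n) * sum_k chi(u^k) * zeta^(-k*l).
# Sanity check: G = A_5, u a trivial unit of order 5 (u = g with g in class 5a),
# chi a 3-dimensional irreducible character of A_5. Powers of g alternate between
# the two classes of order 5: g, g^4 lie in 5a; g^2, g^3 lie in 5b.

n = 5
zeta = np.exp(2j * np.pi / n)
chi_5a, chi_5b = (1 + 5**0.5) / 2, (1 - 5**0.5) / 2
chi_powers = [3.0, chi_5a, chi_5b, chi_5b, chi_5a]   # chi(u^k), k = 0, ..., 4

for l in range(n):
    mu = sum(chi_powers[k] * zeta ** (-k * l) for k in range(n)) / n
    assert abs(mu.imag) < 1e-9 and abs(mu.real - round(mu.real)) < 1e-9
    print(f"mu_{l} = {round(mu.real)}")   # -> 1, 1, 0, 0, 1: a valid spectrum
```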
The strongest positive results on (ZC3) were achieved by Al Weiss; the first of them was proved earlier by Roggenkamp and Scott for the special case of group bases. This theorem (Theorem 2.6) is an application of a deep module-theoretic result of Weiss [138] which strongly restricts the possible structure of double action modules over p-adic group rings of p-groups. As a consequence of Theorem 2.6, (ZC3) holds for p-groups. Actually, Weiss proved: Theorem 2.7. [139] (ZC3) holds for nilpotent groups.
The next theorem collects some other results on (ZC3); (ZC3) holds, in particular, when:
(c) All the Sylow subgroups of G are cyclic [79].
As it was mentioned in the introduction, the first counterexample to (ZC2) and (ZC3) was found by Roggenkamp and Scott as a negative solution to (AUT) [125,132]. This counterexample was metabelian and supersolvable. Using their methods Lee Klingler gave an easier negative solution of order 2880 [92]. More negative solutions were later constructed by Hertweck [58,59], the smallest of order 96 [60], using groups found by Peter Blanchard as semilocal negative solutions [19].
We close this section with a very general problem posed by Kimmerle at a conference [3], for which little is known: The Subgroup Isomorphism Problem (SIP): What are the finite groups H satisfying the following property for all finite groups G: if V(ZG) contains a subgroup isomorphic to H, then G contains a subgroup isomorphic to H?
Note that (SP) is the specialization of (SIP) to cyclic groups. The only groups for which a positive solution to (SIP) has been proven are cyclic p-groups [43], C_p × C_p for p a prime [84,65] and C_4 × C_2 [105]. All the known negative solutions to (SIP) are based on Hertweck's counterexample to (ISO).
Group bases
As already mentioned in the introduction, a lot of research on the units of group rings originally focused on the role of group bases inside the unit group. This is directly related to questions such as (ISO) or (ZC2). Still, it turned out to be very complicated to achieve results for big classes of groups apart from metabelian groups. Roggenkamp and Scott [126] proved (ISO) for finite p-groups. In fact, they proved that inside the p-adic group ring of a finite p-group any two group bases are conjugate and hence isomorphic. This of course implies (ZC2) for this class of groups. The stronger results of Weiss [138], quoted in Theorems 2.6 and 2.7, were obtained using different methods. After these relevant achievements, other positive results for (ISO) were obtained. The next theorem summarizes some of the most important classes of solvable groups for which (ISO) has been proved. Stronger (yet technical) versions of the first and last statements in the previous theorem can be found in [128] and [55], respectively.
Another problem about the natural group basis of an integral group ring, deeply connected to the solution of (ISO), is the so-called Normalizer Problem. Note that the group basis G is obviously normalized by G itself and by the central units of ZG. The Normalizer Problem asks whether these two groups already fill out the normalizer of G in U(ZG): The Normalizer Problem (NP): Let G be a finite group. Is the normalizer of G in the units of ZG the group generated by G and the central units of ZG? For many decades this was called the Normalizer Conjecture, and so it is reasonable to speak of counterexamples to (NP). A first important contribution, in a more general context, was given already in the 1960s. It is not a coincidence that both counterexamples appeared at the same time. Actually, to construct his counterexample to (ISO), Hertweck first constructed a counterexample G to (NP). This G is different from the group described in Theorem 3.4 and is in fact not metabelian. He explicitly constructed a unit t in ZG normalizing G and not acting as an inner automorphism of G. He then defined an action of an element c on G inverting t, i.e. t^c = t⁻¹, and proceeded to show that X = G ⋊ ⟨c⟩ and Y = ⟨G, tc⟩ are two non-isomorphic group bases of ZX. In view of this construction and Theorem 3.3, it becomes clear that part (a) of the following problem is wide open, since no counterexample to (NP) can serve as a starting point for the construction of a counterexample as carried out by Hertweck. But even in the case where the order of the group is even, (ISO) has "almost" a positive answer: any group G can be extended by an elementary abelian group N such that (ISO) has a positive answer for N ⋊ G. This is a consequence of a strong result obtained by Roggenkamp and Scott: the F*-Theorem. See [67] for some history of the theorem and also a complete proof of the most general case.
To state the F*-Theorem, let I_R(G) denote the augmentation ideal of a group ring RG, i.e. the kernel of its augmentation map.
Theorem 3.6. [F*-Theorem] Let R be a p-adic ring and G a finite group with a normal p-subgroup N containing the centralizer of N in G. Let α be an automorphism of RG that stabilizes I_R(G) and I_R(N)G. Then G and α(G) are conjugate inside the units of RG.
If one is only interested in the case of integral coefficients, then this can be used to answer (ZC2). Though most of the time the questions mentioned in this section have been studied for special classes of solvable groups, (almost) simple groups were also partly in the focus of attention. We mention some results; the proof of the first theorem uses the Classification of Finite Simple Groups. We close this section by briefly considering the general Isomorphism Problem. Sam Perlis and Gordon Walker proved that the Isomorphism Problem for finite abelian groups and rational coefficients has a positive solution [114]. Observe that by Remark 1.1 this implies Higman's answer to (ISO) for abelian groups. Richard Brauer asked in [40] the following strong version of the Isomorphism Problem: can two non-isomorphic finite groups have isomorphic group algebras over every field? Two metabelian finite groups satisfying this were exhibited by Everett Dade [45]. This contrasts with the positive result on (ISO) for metabelian groups mentioned above, which was already known at the time. Note that questions on degrees of irreducible complex characters, as presented e.g. in [75, Section 12] or in more recent work on local-global conjectures such as [101,113], can be regarded as questions on what determines the isomorphism type of a complex group algebra. Recently a variation of the Isomorphism Problem for twisted group rings has been introduced [111].
In contrast to the problems described before, the Modular Isomorphism Problem (MIP), i.e. the Isomorphism Problem for modular group algebras of finite p-groups over a field k of characteristic p, deals with an object which is finite but whose unit group fills up almost the whole group algebra. Though extensively studied, the problem is only solved when G is either not too far from being abelian or when its order is not too big. Major contributions were given by, among others, Donald Steven Passman, Robert Sandling and Czesław Bagiński. We refer to [70,18,49] for an overview of known results and for a list of invariants of any group basis determined by the modular group algebra. To our knowledge it is not even clear whether the choice of the base field k might make a difference for (MIP).
Torsion units - (ZC1)
In Section 2 we observed the relevance of partial augmentations for the study of finite subgroups of V(ZG). When studying the First Zassenhaus Conjecture, this takes an even nicer form: Let G be a finite group and let u be an element of order n in V(ZG). Then u is conjugate in QG to an element of G if and only if for every divisor d of n and every g ∈ G one has ε_g(u^d) ≥ 0.
Observe that the condition in the last theorem is equivalent to the following: for every d | n there is a conjugacy class of G containing all the elements at which u^d has non-zero partial augmentation.
Most of the early papers on the First Zassenhaus Conjecture dealt with special classes of metacyclic and cyclic-by-abelian groups. For example, (ZC1) was proved for groups of the form C ⋊ A with C and A cyclic of coprime orders in [118,117]. This was generalized in [99] to the case where A is abelian (also of order coprime to the order of C). The proof of the stronger statement in Theorem 2.8.(a) uses these results. More positive answers to (ZC1) for special cases of cyclic-by-abelian groups appeared in [112,102,98,120]. Finally, Hertweck proved (ZC1) for metacyclic groups in [64]. Actually, he proved it for groups of the form G = CA with C a cyclic normal subgroup of G and A an abelian subgroup. This was generalized by Mauricio Caicedo and the authors, who proved (ZC1) for cyclic-by-abelian groups [41]. This and Theorem 2.8.(a) suggest studying the following:

Problem 2: Does (ZC3) hold for cyclic-by-abelian groups?

Meanwhile (ZC1) was proved for groups of order at most 144, many groups of order less than 288 [73,54,8] and many other groups. The following list includes the most relevant families of groups for which (ZC1) has been proven:

• Metabelian:
- A ⋊ ⟨b⟩ where A is abelian and b is of prime order smaller than any prime dividing |A| [102].
- Groups with a normal abelian subgroup of index 2 [97].
- Frobenius groups of order p^a q^b for p and q primes [79].
- P ⋊ A with P a p-group and A an abelian p′-group [62].
- A × F with A abelian and F a Frobenius group with complement of odd order [11].

• Non-solvable: GL(2,5) and the covering group of S_5 [28].
As evident from the above, part (a) of the following problem has seen little advance since being included in [133, Problems 10, 14]. As mentioned in the introduction, a metabelian counterexample to (ZC1) was discovered recently by Eisele and Margolis [50]. It is worth explaining how this counterexample was discovered. Many of the groups for which (ZC1) was proved contain a normal subgroup N such that N and G/N have nice properties (cyclic, abelian or at least nilpotent). Often the proof separates out the case where the torsion unit u maps to 1 under the natural homomorphism ω_N : ZG → Z(G/N). We write V(ZG, N) for the group of normalized units mapped to 1 by ω_N. The following particular case of (ZC1) was proposed in [133]: Sehgal's 35th Problem: If G is a finite group and N is a nilpotent normal subgroup of G, is every torsion element of V(ZG, N) conjugate in QG to an element of G?
The following result of Hertweck, which appeared in [106], has interest in itself, but it is also important for its applications to Sehgal's 35th Problem, due to statement (d) of Proposition 2.5. Indeed, it implies that if u is a torsion unit in V(ZG, N), for N a nilpotent normal subgroup of G, then N contains an element n such that for every prime p the p-parts of u and n are conjugate in Z_pG. Moreover, by Proposition 2.5.(d), if ε_g(u) ≠ 0 for some g ∈ G, then the p-parts of n and g are conjugate in G.
One attempt to attack Sehgal's 35th Problem, already present in [102], is the matrix strategy, which uses the structure of ZG as a free ZN-module to get a ring homomorphism ρ : ZG → M_k(ZN), with k = [G : N]. Here M_k denotes the k × k matrix ring. If u ∈ V(ZG, N), then ρ(u) is mapped to the identity by the entrywise application of the augmentation map. Using Theorem 4.1, Theorem 4.2 and a generalization of (2.1), it can be proved that if ρ(u) is conjugate in M_k(QN) to a diagonal matrix with entries in N, then u is conjugate in QG to an element of G, which would be the desired conclusion. However, Gerald Cliff and Weiss proved that for N nilpotent this approach only works if N has at most one non-cyclic Sylow subgroup [42].
Due to this negative result the matrix strategy was abandoned. However, the authors observed in [109] that some results in the paper of Cliff and Weiss can be used to obtain inequalities involving the partial augmentations of torsion elements of V(ZG, N), which we refer to as the Cliff-Weiss inequalities. In case N is abelian these inequalities take the following friendly form: let u be a torsion element of V(ZG, N); if K is a subgroup of N such that N/K is cyclic and n ∈ N, then

Σ_{g ∈ nK} |C_G(g)| ε_g(u) ≥ 0.

The Cliff-Weiss inequalities are actually properly stronger than the inequalities (2.2) for units in V(ZG, N) [107]. Moreover, in [108] the authors presented an algorithm based on these inequalities and Theorem 4.2 to search for minimal possible negative solutions to Sehgal's 35th Problem and hence to (ZC1). More precisely, the algorithm starts with a nilpotent group N and computes a group G containing N as a normal subgroup, together with a list of integers which satisfy the Cliff-Weiss inequalities but not the conditions of Theorem 4.1; that is, they pass the test of the Cliff-Weiss inequalities for being the partial augmentations of a negative solution to Sehgal's 35th Problem.
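The shape of the resulting search is easy to illustrate. Below is a toy sketch of the feasibility test: it looks for integer vectors of partial augmentations that sum to 1 (normalized units), satisfy every given Cliff-Weiss inequality, and still have a negative entry, so they are candidates for violating the criterion of Theorem 4.1. The centralizer orders and coset index sets are invented toy data, not taken from any actual group.

```python
from itertools import product

# Toy Cliff-Weiss feasibility test. Each inequality has the form
#     sum_{g in nK} |C_G(g)| * eps_g(u) >= 0,
# encoded by an index set over the hypothetical conjugacy classes.

centralizer = [12, 12, 4, 4]                 # |C_G(g)| for four hypothetical classes
cosets = [[0, 1], [2, 3], [0, 2], [1, 3]]    # index sets playing the role of 'g in nK'

def cliff_weiss_ok(eps):
    return all(sum(centralizer[i] * eps[i] for i in c) >= 0 for c in cosets)

candidates = [e for e in product(range(-2, 3), repeat=4)
              if sum(e) == 1 and min(e) < 0 and cliff_weiss_ok(e)]
print(candidates[:5])   # e.g. (0, 1, 1, -1) passes Cliff-Weiss but fails Theorem 4.1
```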
Of course, non-trivial solutions of the Cliff-Weiss inequalities do not yet provide a counterexample, because one has to prove the existence of a torsion unit realizing the partial augmentations provided by the algorithm. By the double action strategy this reduces to a module-theoretic problem: one has to prove that there is a certain Z(G × C_n)-lattice which is isomorphic to a double action module, by Proposition 2.3, where n is the order of the hypothetical unit, which is determined by the partial augmentations (see the paragraph after Theorem 4.2). A first step towards obtaining this lattice consists in showing the existence, for every prime p, of a Z_p(G × C_n)-lattice with the same character as the double action Q(G × C_n)-module; such a lattice exists since the partial augmentations of the hypothetical unit satisfy the constraints of the HeLP method. By the results of Cliff and Weiss, a unit satisfying also the Cliff-Weiss inequalities corresponds to a Z_p(G × C_n)-lattice which is free as a Z_pN-lattice. The fundamental ingredient which allows the construction to work at this point is that the Sylow p-subgroup N_p of N is a direct factor of N. Hence Z_pN = Z_pN_{p′} ⊗_{Z_p} Z_pN_p, and the representation theory of the first factor is easy to control. It turns out that, assuming N is abelian, one can construct a Z_p(G × C_n)-lattice which is free of rank 1 as a Z_pG-lattice (compare with Proposition 2.3), assuming only that the action of G on N satisfies a certain, relatively weak, condition [50, Section 5].
Once such a Z_p(G × C_n)-lattice M_p is constructed for every prime p, one obtains a Z_(π)(G × C_n)-lattice with the same character as each M_p, where Z_(π) denotes the localization of Z at the set of prime divisors of the order of G. So one obtains what is usually called a semilocal counterexample. It remains to show how this lattice can be "deformed" into a Z(G × C_n)-lattice with the same character. This is done in [50, Section 6] in a rather general context which could be applied also to non-cyclic groups and other coefficient rings. In the situation of (ZC1) this boils down to checking that G does not map surjectively onto certain groups (which is, in this case, equivalent to the Eichler condition for ZG) and that D(u) has eigenvalue 1 for every irreducible Q-representation D of G.
With all this machinery set up, finding a counterexample to (ZC1) remains a matter of calculations, and it turns out that the candidates constructed as minimal possible negative solutions to Sehgal's 35th Problem in [108] are in fact negative solutions, and as such counterexamples to (ZC1).
The construction gives rise to the following problem.
Problem 4: Classify those nilpotent groups N such that Sehgal's 35th Problem has a positive solution for any group G containing N as normal subgroup.
By Theorem 4.2 and [42] the class of groups described in this problem contains those nilpotent group which have at most one non-cyclic Sylow subgroup, cf. [109] for details. More technical results for the problem can be found in [109,108,107]. By the counterexamples to (ZC1) there are infinitely many pairs of different primes p and q such that the direct product of a cyclic group of order p · q with itself is not contained in this class. This is particularly the case for (p, q) = (7, 19), but not for (p, q) with p ≤ 5.
The evidence provided by positive solutions to (ZC1) and Problem 3, and by the counterexamples to (ZC1), suggests that (KP) might have a positive answer. A positive answer to (KP) is known in the following cases: (a) G is metabelian [48]; (b) G has only abelian Sylow subgroups [80]; (c) u is of prime order [90]; (d) G has a Sylow tower [11], in particular when G is supersolvable.
It was observed in [11] that the counterexamples to (ZC1) constructed in [50] cannot provide negative solutions to (KP), as they have Sylow towers and are also metabelian. Actually, as explained above, the methods of [50] can probably be used to construct more counterexamples G, some of which might not have a Sylow tower and might not be metabelian. However, all units providing counterexamples with this method will live in V(ZG, N) for a normal nilpotent subgroup N of G. So (SP) has a positive answer for a very big class of groups, a class for which there will probably never be an argument or algorithm that can tell whether a specific group in it satisfies (ZC1) or not. It is very interesting what this class can give for (KP): Problem 6: Does (KP) hold for solvable groups? A weaker version of (SP) which has found some attention was also formulated by Kimmerle [83]. Recall that the prime graph, also called the Gruenberg-Kegel graph, of a group G is the undirected graph whose vertices are the primes appearing as orders of elements of G, with vertices p and q connected by an edge if and only if G contains an element of order pq.
The Prime Graph Question (PQ): Let G be a finite group. Do G and V(ZG) have the same prime graph? The structural advantage of (PQ) is that it admits a reduction theorem, while this is not the case for any of the other questions given above. Recall that a group G is called almost simple if there is a non-abelian simple group S such that G is isomorphic to a subgroup of Aut(S) containing Inn(S); in this case S is called the socle of G. So one might hope that the Classification of Finite Simple Groups can provide a way to prove (PQ) for all groups. But a lot remains to be done, since many series of almost simple groups still need to be handled. We summarize some important results.
p-subgroups
In this section we review the main results and questions on the finite p-subgroups of U(ZG) for G a finite group and p a prime. The questions are the specializations to p-subgroups of V(ZG) of the problems given above, to which we refer by adding the prefix "p-". For example, the p-versions of (ZC3) and (SIP) are as follows: (p-ZC3): Given a finite group G, is every finite p-subgroup of V(ZG) conjugate in QG to a subgroup of G? (p-SIP): What are the finite p-groups P satisfying the following property for all finite groups G: if V(ZG) contains a subgroup isomorphic to P, then G contains a subgroup isomorphic to P? The following terminology was introduced in [103,90]: one says that G satisfies a Weak Sylow Like Theorem when every finite p-subgroup of V(ZG) is isomorphic to a subgroup of G.
That the role of p-subgroups in V(ZG) is very special is expressed already by the Lemma of Coleman (Theorem 3.2), which implies a positive solution to (p-NP). Also, the result of James Cohn and Donald Livingstone on the exponent of V(ZG), mentioned above, is equivalent to a positive solution to (p-SP).
By Theorem 4.2, the p-version of Sehgal's 35th Problem has a positive answer in general. This was in fact already observed earlier by Hertweck [62]. Moreover, as a consequence of Theorem 2.6, (ZC3) holds for p-groups, and hence (p-ZC2), (p-AUT) and (p-ISO) hold. The latter also follow from the following result: Theorem 6.1. [91] If G is a finite group and P is a p-subgroup of a group basis of ZG, then P is conjugate in QG to a subgroup of G.
In the situation of general p-subgroups, much less is known. There is no counterexample to (p-ZC3). Neither is there a general answer to (p-ZC1), not even for units of order p, though in this case (p-KP) holds, cf. Theorem 5.2. Note that all positive results for (SIP) mentioned in Section 2 are in fact results for (p-SIP). A big step towards the solution of this problem might be an answer to the following: Problem 7: Is (SIP) true for elementary abelian groups? We collect here some results on (p-ZC3) and Weak Sylow Like Theorems: Theorem 6.2. If G is a finite group and p a prime, then (p-ZC3) has a positive answer for G in the following cases: (a) G is nilpotent-by-nilpotent or supersolvable [46]; (b)-(e) further cases treated in [46,47,79,28,90]; (f) p = 2, the Sylow 2-subgroup of G has at most 8 elements and G is not isomorphic to A_7 [9,105]. Theorem 6.3. G satisfies a Weak Sylow Like Theorem for p-subgroups in the following cases: (a) p = 2 and the Sylow 2-subgroups of G are either abelian, quaternion [86] or dihedral [105]; (b) G has cyclic Sylow p-subgroups [84,65].
For G = PSL(2, r^f), the p-subgroups of V(ZG) have received some attention, starting with [68]. It is known today that (p-ZC3) has a positive answer for G if p ≠ r, or p = r = 2, or f = 1 [103]. Also, a Weak Sylow Like Theorem holds for G if f ≤ 3 [68,12].
Remark: A quarter of a century ago, Sehgal included a list of 56 open problems in his book on units of integral group rings [133]. Several of those concern topics mentioned in this article. Some have been solved, while others remain open.

leo.margolis@vub.be, Vrije Universiteit Brussel, Department of Mathematics, Pleinlaan 2, 1050 Brussel, Belgium.
Teachers of mathematics interacting through a social network: do they understand one another?
This paper is framed within a three-year project in the area of Information and Communication Technologies (ICT) in Mathematics Education. We were part of a group of university teachers engaged in a form of in-service teacher training. In particular, we were interested in the use of ICT in two senses: as a means of communication (Facebook) and as a teaching tool (GeoGebra). The work consisted of constructing a process of accompaniment for mathematics teachers at middle schools in four localities of Argentinean Patagonia. The teachers were distributed into two groups of two schools each. One group dealt with content in algebra and the other with functions. The process was to co-generate didactical cycles of three phases: a priori analysis, classroom implementation, and a posteriori analysis. In this presentation we pay attention to some aspects of the communicational use of ICT made in the algebra group, with special focus on the levels of confidence and autonomy produced.
Introduction
In this paper, we share some experiences within the frame of a project about the use of Information and Communication Technologies (ICT) for teaching Mathematics in high school.
Here, we analyze some data after one year and a half of work with groups of teachers and researchers. A unique aspect of the project is the use of ICT in two directions: as a teaching tool and as a means of communication. For the former, we have particularly focused on the software GeoGebra (dynamic mathematics software for all levels of education that brings together geometry, algebra, spreadsheets, graphing, statistics and calculus in one easy-to-use package). For the latter, our attention is focused on the possibility of workgroups communicating virtually. In this respect, we address some aspects of the communication between the members of the project.
The use of ICT as a communication tool
Four secondary schools in the Province of Río Negro (Argentinean Patagonia) were invited to participate in the project. Three of them are located in a rural plateau zone (Ministro Ramos Mexía, Sierra Colorada, and El Cuy) and the other one in an urban zone (Allen). The distance between them is 400 km in some cases. In this context, an electronic communication mechanism was justified. Therefore, a virtual means was proposed to promote interaction without the need for physical presence. Thus, we did not start with a model; we started with the characteristics of the context; in other words, the context configured our proposal.
At the beginning, we proposed the use of the Moodle platform, but then, for two reasons, the groups shifted to a social medium (Facebook). The first reason was the lack of infrastructure in the localities, and the second was the problems of the Moodle platform architecture. The localities where most of the schools are located do not have an appropriate Internet connection (neither in the teachers' households nor at community institutions). These localities have between 500 and 1,500 inhabitants, and in some cases there are no telephone networks. A Moodle platform requires a relatively suitable Internet connection. Most of the localities did not satisfy these technical requirements, hence many teachers could not communicate with their colleagues.
As for the second reason, the Moodle platform architecture has a classical organization, with an area where teachers provide material to their students and an area for evaluations, for example. The relationship initially promoted is a vertical one, where a teacher has more management rights than students and the teacher is the one who determines the material and the tasks to be done. The underlying links are distant from the ones proposed in our Project; the changes and concessions would be too costly. Finally, the groups moved to a platform that is cheaper in terms of connection requirements and has a more horizontal relationship between the members. We agreed to establish communication through a social medium, such as Facebook, which works relatively well in rural localities.
We agree with Castells (1996) as regards the exponential growth of interactive computer networks. They create new forms and channels of communication, and "shape life while it shapes them" (CASTELLS, 1996, p. 28).
By virtual reality, this author means one that is transmitted, expressed or communicated through a system in which reality itself is trapped, captured, transformed into images or signs and, once communicated, becomes another reality and another experience for those who consume or receive it. In this communication system, the space sheds its geographical and historical roots and becomes part of a collage of images that replaces the space formed by specific locations. Time is cleared in this system: past, present and future merge in a timeless process.
Relational model with teachers
In our proposal, the researchers try to establish a horizontal relationship with teachers, where the former do not take the role of experts nor do the latter take the role of students. Instead, the researchers assist the teachers, who are the main makers of their own development. The researcher promotes, on the one hand, the teachers' autonomy with respect to the researcher and, on the other hand, cohesion with their colleagues. We understand that if an evolution of the practices is possible, it is because of collective (not individual) actions. The researcher only indicates what he sees as correct or not, what he considers appropriate or not, and this is done for three interrelated reasons: ethical principles of respect for the teacher's professionalism; lack of knowledge of the specific aspects of the institution where each teacher works (factors that make the teachers the most suitable people to decide about modes of interaction with their students); and our interest in having the teachers develop their own collective pedagogical projects.
In this sense, we understand that "reality" is not an objective fact shared by all; it is a subjective construction of each individual person. Thus, we admit that a researcher and a teacher may have different opinions based on their different perceptions, both of them perfectly valid.
Within a framework of subjective interpretations, our position can be described as one where the researcher is a facilitator of the teachers' collective development. In this sense, and paradoxically, we admit that attempting this type of relationship could be a contradiction in itself. Indeed, regarding the perception of how practices can evolve, teachers might not share our position and might expect or desire another relationship (for example the habitual one, based on the roles of expert and student). Precisely, our presentation intends to explore the possibilities of this type of relationship, which we call horizontal or accompaniment, in a context where this kind of connection is neither habitual nor promoted.
One of the possible contradictions of our proposal is that we meant to promote a relationship probably neither expected nor desired, a priori, by the teachers. Another possible contradiction is the dynamic of the project. Indeed, within the frame of keeping track of the teachers' development, the proposal of a particular type of relationship can be considered a contradiction of the concept of accompaniment. The same could be said of the idea of suggesting a type of work dynamic, as in our case, the pedagogical cycles.
Pedagogical cycles
The teachers, in two groups, were convened to carry out together at least one pedagogical cycle. Each group was accompanied by researchers whom we call coordinators. The cycle comprises three phases, as can be seen in Figure 1.
◆ Priori analysis. Used to agree on the problem to be worked with in class using GeoGebra, to resolve the problem, to characterize the concepts required to solve it, the difficulties and potentialities of the students, and to analyze possible teaching interventions.
◆ Commissioning classroom. The teachers bring the problem into the classroom, audio and written records are made, and the class is shared among various teachers; in our case that was difficult to accomplish because of the small number of teachers at each school.
◆ Posteriori analysis. To collate and reflect on the eventual differences between the plan in phase 1 and what actually happened in phase 2. It contributes to the work of the members at two levels: pedagogy and workgroup.
The moments of accompaniment
The relational model of accompaniment is not new; it is related to another model known as action research. It is not easy to differentiate them in their principles, but they do not coincide as far as teachers are concerned. We could say that in the accompaniment model, the researcher intends to be present but always with the intention of leaving. His interest lies in the promotion and consolidation of groups working autonomously without his own presence.
The role of the researcher is permanently unstable; his actions and interventions are meant to be limited and short-lasting over time, so that teachers assume their roles as protagonists and design and bring forward their own proposals.
In this context, the aim of the researcher is for the group to operate with autonomy, without directives or guidelines from him. This makes the researcher continually seek not to take on central or leadership roles, given that the concept of leadership is questioned in this model. The idea that somebody leads others is not necessarily positive in the relational model.
Researchers who study the relational model of accompaniment include authors such as Beauvais (2006). She mentions three moments in an intervention of this type:
◆ Comprehension. It is particularly strong in the first stage, at the moment of constitution of roles, and stays present for the rest of the time. It involves comprehension of the other: comprehension of his cultural and contextual conditionings, his history and his own projects, while always leaving room for incomprehension. This is meant to assure the privacy of the other and of oneself, a privacy that promotes reciprocal freedom between the different actors.
◆ Action. The researcher (coordinator) follows the teachers' work dynamic but resists the temptation of saying and doing on their behalf. It is a central moment, trying to help a group of people to constitute itself, to find its route and to reach its own objectives. We understand that our proposal presents a contradiction with the model. Indeed, if accompaniment is a type of dynamic with and for the teachers, the fact that the coordinators propose to teachers a type of work characterized by pedagogical cycles could be interpreted as a contradiction in itself. In our case, we assume this contradiction with the risks it could imply, including the failure of the proposal.
◆ Reserve. This model seeks autonomization; in our case, that of a group of teachers. By autonomization, we understand a position of appropriation of the objectives of the project, generation of its own objectives and the possibility of proposing actions and implementing them on the initiative of oneself and the group. This appropriation can be produced if the coordinator does not take on the role of holder of knowledge and allows teachers to be protagonists.
These three moments that characterize a relationship of accompaniment are not a sufficient condition, but they are necessary for accomplishing it. This type of proposal, where teachers take the initiative for actions, is neither evident nor immediate. In this report, we focus on how the first moment (comprehension) developed, where the relationship between the members of the group started to build up, bearing in mind that this relationship intends teachers to be protagonists and autonomous. Also, we understand that this proposal is not habitual in in-service teacher training and could become complex in a relational environment, such as the one of this project, where most communications are produced virtually.
Basically, in this paper, we focus on analyzing how these groups advance towards the comprehension of the other - the first moment, according to Beauvais (2006) - in a context where the interchanges are mostly virtual ones, in particular, by means of a social medium.
Method
The research has a qualitative approach of virtual ethnography, that is, Internet-based ethnography, which studies the interactions mediated by ICT through a concrete experience of a virtual community (HINE, 2004).
The study is based on two types of data: the interchanges produced on Facebook by the teachers and the coordinators, and the audio recordings of interviews with the teachers and the principals of the schools. These different data are interrelated in two directions: the conclusions drawn from one type of data are confirmed by the other, and one type of data brings clues for the search for conclusions in the other. Individual and group interviews were conducted. They were semi-structured, with two or three issues on the agenda.
Two groups were formed to carry out pedagogical cycles using GeoGebra, one about algebra and the other one about functions. Each group worked in two modalities: in person, with two annual meetings, and virtually, through a social medium known to the members. Here, the data come from the conversations within the group in the second modality.
In particular, in this paper we analyze some phenomena in the algebra group. It was formed by Mathematics teachers from two localities (El Cuy and Sierra Colorada). They were selected because of the reduced possibilities these teachers have to develop activities within the frame of in-service training: there are too few of them in a single school (to interchange their experiences) and they are too far from cities with some kind of offer in this sense (on-site classes).
The group comprised five teachers (T1 … T5), who were coordinated by four researchers (R1 … R4). Some of the coordinators lived more than 700 km away from these localities.
There were 36 interchanges produced in the group, including 164 interventions. An intervention is any expression in written form, and an interchange is the set of interventions contained in the same Facebook post.
We understand that "every narrative is defined by the space-time structure of the facts and the position of the narrator and the receiver in that space and time" (MONTIEL, 2009, p. 160).Moreover, in this case, we consider that the text becomes the primary means of the non-on-site interchange for the creation of meaning that, being shared, becomes the essential means to build communities as there is agreement on common purposes (HERRING, 1996).
Results
We include three central aspects of our findings: the combination of virtual and on-site times, the process of autonomization, and the comprehension of the other (person, school, and context).
◆ About the virtual and on-site times. Practically every interchange produced without face-to-face interaction is called a virtual interchange. These interchanges included the use of one social medium but also of others, e.g., Skype and e-mail. While all of these tools were promoted by the coordinators, the only one used by the group was Facebook. The virtual meetings contrasted in various aspects with the on-site ones. The on-site meetings were strongly requested by the teachers, despite the large distances they have to travel to attend them and the fact that they have to make family and job arrangements. The teachers demanded more on-site meetings than the two initially provided.
At the beginning, we expected the dialogues to be progressively channelled through virtual means. But only incipient interchanges about pedagogical work were generated in the social medium. The more intensive discussions and analyses were produced in the on-site meetings, where the teachers participated and quickly advanced significantly and enthusiastically. The advances in the virtual dialogues were at a germinal level, not constant or homogeneous among the members. For example, a teacher would take the initiative to propose a pedagogical reflection and then a colleague would intervene by asking for a supposed on-site meeting. In other words, a teacher began a discussion in a virtual environment and it was suspended when another teacher introduced the on-site meeting issue.
(T1) I was working in the last week of classes, with GeoGebra, doing a presentation to the pupils of the second and third years. They liked working with it very much, at first with geometry… I left them the task of exploring the software over the holidays… a good beginning ☺ (R1) What good news, "T1"!! Thank you for sharing it!! Even more, we have to meet so you can tell us about your experience in more detail… a very good beginning ☺ (R2) We should think about a convenient time to talk on Skype and share the experience of "T1". It is pending for us to organize when we resume our activities. What do you think?? (T1) It's an excellent idea… (T4) Yes, it is a good idea. (T3) Ok.
However, sometimes the teachers' pedagogical proposals were taken up by their colleagues, thus giving continuity to the discussion. In one of these cases, a coordinator interrupted the dynamic and proposed to continue the analysis at an imminent on-site meeting.
(T2) Hi everybody, as I'm having a low-creativity day, I searched the Web for an algebra activity to be solved in the classroom with GeoGebra, and I found one about equation systems that actually seems very simple. The proposal is to resolve an equation system by the graphical method. At first, I found the values of X and Y by one of the methods (equalization, substitution, etc.) and I introduced them. I introduced the equations and I found a result. In the graphic, the lines met at one point, with X = -1 and Y = 5. I don't know if there is another way to solve the systems with GeoGebra by the graphical method. I've uploaded the file with the proposal and you can say what you think about it, and whether we could take this proposal to the classroom, or another one that you can suggest. Kisses. (T3) I will put it into practice now and I'll see what else emerges. (T1) I will practice it to see how I do and then I'll take it to the classroom. I still have to develop a unit to get to equations… It is an interesting proposal. (T4) I'll see what I can do. (T2) There's no hurry to get it to the classroom. First we should analyze the activity and see what other proposals emerge. When we get to the topic in the classroom, we can apply it. What do you think? (R3) Yes, I agree with what "T2" is saying. The idea, I think, we can discuss in detail when we meet on May 10th in Upper Valley. (T1) Ok… I think it's a very good idea… thanks.
Due to the weak state of confidence, it could be supposed that any fact would interrupt the dialogue. Also, the inexperience of the coordinators might have had an influence, as they looked strictly at the model without accepting that it is gradually built by reality.
◆ About autonomization. By autonomy, we do not mean lonely, individual work; on the contrary, we aim for the teachers to form work teams. These teams share common points but also have differences. Autonomy refers to the relationship with the coordinators. Indeed, in the frame of an accompaniment, we wished that the teachers could propose their own tasks and their own ideas and, based on the analysis of them, everyone could evolve in their knowledge, both teachers and coordinators. The members of the Project were conscious of this newness. But the lack of references from previous training caused some stagnation on the way to autonomization.
However, there were strong indications of adhesion to the proposal of autonomous work. For example, a group of teachers of a school, on their own initiative, proposed to present the Project as an institutional project in order to involve the rest of the teachers of the school.
(R1) Here, I'm sending the first summary of the meeting on May 10th: what happened, the agreements, the future actions. Please revise, expand, modify… (T3) I'd add that we'll not only try to present it as an Institutional Educational Project, but also raise awareness at the Institution that it is a GROUP Project, not individual training. (R2) Hi team!!! With the details from "T3" I can see how fruitful the working day was… it's so good!! What "T3" proposes is interesting: to form work and reflection groups concerned about the teaching of mathematics, in particular with the algebra approach.
We understand autonomization as a process, a kind of link or contract that, beyond its enunciation, must be built. It is a process, a non-linear progression, with advances and setbacks. This process also requires time and stability at work. Stability at work is not a given in the current educational system of the Province of Río Negro because there are many precarious positions (because the teachers do not have a teaching certificate, or because of the lack of public admission contests to consolidate the teacher in his workplace). Despite this fact, the lack of references from previous experiences and the short time that the project lasted (one year and a half of collective work), the teachers showed signs of adherence to the model of autonomous work and genuine progress in this direction.
(T1) Hi people, how are you? I'm sharing some activities I carried out with the pupils of the second and third years… (T3) Hi "T1"! It's so good to continue working on the Project. Regards. (T1) Still in the running… (R2) Hi group!! What do you say if we look at the file shared by "T1" and do some interchanges, even if by this virtual means? Don't leave "T1" alone!!! Who breaks the ice? A very big hug on Teacher's Day. (T1) Yes, yes… don't leave me alone!!! ☺
When we say "work in autonomy", it is about teachers and coordinators, not among peers (teachers and teachers). We understand that this kind of work is strongly related to the construction of comprehension of the other.
◆ About comprehension of the other. We understand that comprehension of the other is not an attitude of judgment or evaluation of the other. In contrast, it is a look at the other that allows one to understand him as a result of the circumstances surrounding him, a result of his history, lived and not lived.
The consolidation of the comprehension of the other is based on knowledge of the other. For that purpose, in order to know one another's strengths and weaknesses, it is necessary to build confidence, to be able to express doubt or fear of the unknown without feeling the pressure of being judged or evaluated by one's peers.
In this sense, we observed important advances, more evident at the on-site meetings.However, they also happened in the virtual environment.
(T2) Another recommended activity is polynomial division by Ruffini, but I have not tried it yet, so I have no idea how to do it with GeoGebra. If someone wants to try it and tell us the steps, it would be great. Kisses!!!! (T3) And now what!!!!! What do we do?.... (T2) Experiment with ICT… something will happen ☺
These advances were produced although various factors worked against them, for example: the instability of teachers' job positions; the lack of references from previous similar works; the geographical dispersion of the members; the lack of knowledge of the context of the teachers' working conditions. Indeed, some coordinators lived in localities more than 1000 km away from the southernmost schools. This distance is not only geographical but also cultural, i.e., between urban and rural contexts. However, the coordinators and the teachers tried to be sympathetic to one another. For example, the questions of a coordinator to a teacher may apparently sound trivial, but they are basically meant to gather further information on the technological working conditions of the teacher and to encourage him.
Conclusions
Teachers' autonomization with respect to the coordinators requires the comprehension of the other. The particular aspect of the construction of the comprehension of the other, in our case, is that it would mostly be produced through interchanges in a virtual environment. Many teachers had communication experiences in virtual environments. Many of them used social media for communicating with kin and friends. However, in those cases, comprehension preceded virtual communication. Indeed, the link with kin and friends was built beforehand, and the virtual interchanges allowed them to keep or rebuild it. In our Project, the relationship was reversed. The members had to build the comprehension of the other mostly by communicating virtually. That is, it is indispensable to create the necessary conditions for collaborative work (AIMI; PAGNOSSIN, 2012). We do not support the idea that this objective is impossible to achieve. However, we see, on the one hand, that projects need to last longer and, on the other hand, that their evolution has to be underpinned by on-site meetings. The on-site meetings were strongly demanded by the teachers, and if we could not provide them, it was because of financial constraints of the project. But we saw this demand as a legitimate one and, because the colleagues insisted on "looking at each other", we understand that on-site meetings allowed links to be built that the virtual environment could not, at least in the limited time of the Project.
The lack of comprehension of the other does not allow one to advance in this type of proposal. Indeed, the advances were weak because the teachers and the coordinators avoided discussion, as they thought it would lead to conflict rather than to debate. We understand this happened because the relationship of trust was not consolidated enough for addressing discussions without the risk of turning them into conflictive situations.
The lack of comprehension also blocks the dialogue from the point of view of equality. We understand that in order to discuss collectively, one has to consider oneself on a level of parity with the others. In this respect, the teachers in rural zones said they felt they had less prestige compared with the teachers who work in urban zones. There is prejudice to overcome, based on the assumption that teachers working in an urban center have better tools and knowledge than those working in a rural environment. Mutual comprehension is aimed at eradicating this and other types of prejudice, in order to enable the construction of a parity relationship and, thus, allow a horizontal debate among all the members of the group. Some kind of sensibility is needed (BEAUVAIS; RAY, 2012).
The lack of construction of comprehension of the other in a virtual environment is a possible interpretation of the difficulties, but it is not the only one. Indeed, collective work is a fundamental condition in our proposal, and it is not a habitual feature of the Argentinean secondary school system. The usual situation, unfortunately, is the lonely work of teachers, who interchange with their peers at very limited times (at playtimes, in the staffroom or, sporadically, at monthly / trimonthly / annual meetings), when they report progress in the syllabus and class schedule, describe their groups of pupils or share general news. And this is mainly due to the characteristics of the current educational system, where teachers' work is strongly atomized in institutional terms. According to Lessard, Kamanzi and Larochelle (2009), the role of the school administrators is crucial for the sustainability of this type of non-individual work.
The consequence of work atomization is the lack of habits, links and organization for collective work between teachers, and this, we insist, occurs in material reality. This certainly led us to question whether it is actually possible to build habits, links and an organization in a virtual environment when, in general, teachers do not have the experience of having built them in a material space.
We do not doubt that the challenge of constructing them on a virtual plane would be as difficult as, or more difficult than, doing so on a material plane, and this for at least one main reason.
We understand that the group did not manage to build its own time-space relation with virtual reality (CASTELLS, 1996). Such a relation could have enhanced the communication flow as sequences of interchange and interaction.
Indeed, the narrative models and the representations that virtual reality promotes and/or allows were not sufficient to consolidate the integration between the teachers and the coordinators, at least in the period of the Project.
We can say that virtual reality and material reality remain relatively parallel, with few points of intersection. We do not observe that the two realities interchange in an intensive manner, nor does what is produced in virtual reality allow an evolution of what is produced in the material reality of the classroom.
We value this type of experience from a broader perspective of human interaction, which involves mutual expectations and the need to share, in media that shift from circumstantial and limited to continuous and unlimited ones (FUNES, 2008).
Also, we understand that comprehension is a fundamental pillar for this type of proposal, and we think it is necessary to keep moving in the direction of relationships of accompaniment in projects with teachers. For that purpose, we believe that training proposals have to consider the context where they will be implemented and, also, that the time needed to construct the new links has to be deemed important, more important than the time provided in this Project. And this holds not only for the teachers but also for the coordinators, whose main reference, albeit implicit, remains traditional training. As stated by Gros (2008), the digital society demands the network, participation, collaboration and virtual communication as good supports for learning and professional development, but a considerable way has yet to be travelled.
Addressing the gap pointed out by Hine (2004) regarding research on the uses of the Internet, we believe that we have contributed to the investigation of how a social medium was used by a group of Mathematics teachers in a particular context. We agree with Mantovani (1994) that it is difficult to sustain that technology has social effects regardless of the context in which it is used.
Finally, we emphasize, as Jerónimo (2009) and Santos (2015) do, the need to train university teachers/researchers (whom we called "coordinators" in our framework) in this new educational aspect of the analysis of virtual discourse, which can contribute to constructing a "presence" in tutoring based on electronic discourse, an accompaniment towards comprehension among teachers.
Figure 1. Pedagogical cycle.
Characteristics of HIV seroprevalence of visitors to public health centers under the national HIV surveillance system in Korea: cross sectional study
Background In Korea, the cumulative number of HIV-infected individuals has been smaller than in other countries. Mandatory HIV testing, the dominant method until the 1990s, has gradually been replaced by voluntary HIV testing. We investigated the HIV seroprevalence status and its characteristics among visitors to Public Health Centers (PHCs), which conducted both mandatory and voluntary tests under the national HIV/STI surveillance program. Methods We used HIV-testing data from 246 PHCs in 2005 obtained through the Health Care Information System. The number of test takers was calculated using the code derived from the residential identification number. The subjects were classified into four groups by reason for testing: the General group, the HIV infection suspected group (HIV ISG), the HIV test recommended group (HIV TRG), and the sexually transmitted infection (STI) risk group. Results People living with HIV/AIDS numbered 149 (124 male and 25 female) among 280,456 individuals tested at PHCs. HIV seroprevalence was 5.3 per 10,000 individuals. Overall, males showed significantly higher seroprevalence than females (adjusted Odds Ratio (adj. OR): 6.2; CI 3.8–10.2). Individuals aged 30–39 years (adj. OR: 2.6; CI 1.7–4.0) and 40–49 years (adj. OR: 3.8; CI 2.4–6.0) had higher seroprevalence than those aged 20–29 years. Seroprevalence in the HIV ISG (voluntary test takers and cases referred by doctors) was significantly higher than in the other groups. Foreigners showed higher seroprevalence than native Koreans (adj. OR: 3.8; CI 2.2–6.4). The HIV ISG (adj. OR: 4.9; CI 3.2–7.5) and the HIV TRG (adj. OR: 2.6; CI 1.3–5.4) had higher seroprevalence than the General group. Conclusion A question about the efficiency of the current mandatory testing is raised because the seroprevalence of mandatory test takers was low, whereas that of the HIV ISG, which included voluntary test takers, was high. Therefore, we suggest that Korea needs to develop a method encouraging more people to take voluntary tests at PHCs, and also to expand anonymous testing centers and the Voluntary Counselling and Testing Program (VCT) so that the general population can easily access HIV testing.
Background
The first case of acquired immunodeficiency syndrome (AIDS) in Korea was a foreign resident, and the first human immunodeficiency virus (HIV)-infected Korean contracted the virus during overseas travel in 1985 [1]. This stimulated a national preventative system in Korea against this emerging infection, which had become prevalent in the United States and Europe in the early 1980s [2,3]. In 1985, mandatory HIV tests were established for commercial sex workers (CSWs) with sexual contact with foreigners [1,4], and in 1986 mandatory testing was expanded to CSWs with sexual contact with Koreans and to hemophiliacs. Prison inmates and donated blood samples were included in the mandatory testing group in July 1987 [5]. Mandatory HIV testing was extended to seafarers with long-term overseas contracts in 1988 [6], and voluntary anonymous HIV testing was allowed to ensure broader screening in 1989. Article 8 of the Prevention Act for Acquired Immunological Deficiency Syndrome (1986) established the national surveillance system for the detection of HIV-positive individuals. Public health centers (PHCs) were instructed to conduct HIV screening tests for sexually transmitted infection (STI) risk groups periodically, and for volunteer groups arbitrarily [7]. The number of mandatory HIV tests in Korea reached approximately 2 million by 1997, with the addition of hair salon employees, restaurant and food industry employees, food industry sanitation workers and public health personnel. However, in 1998 the Korean government amended the HIV testing policy from mandatory testing to voluntary testing through the Act of Health Check for Food Industry Employees and Others, which exempted restaurant and food industry employees and others from mandatory HIV testing [8].
UNAIDS estimated that 40.3 million people were living with HIV/AIDS worldwide in 2005, and AIDS-associated deaths reached up to 20 million [9]. HIV/AIDS has been expanding rapidly in India, China, and Southeast Asia. In China the number of infections has increased recently, with an estimated 650,000 infected individuals [10]. The seroprevalence in Russia increased more than five-fold from 1997 to 2002 [11]. Frequent travel between Korea and these countries might have increased the risk of individuals spreading HIV/AIDS in Korea [12]. The percentage of HIV infections in Korean adults aged 19-45 years, as estimated by the World Health Organization (WHO) in 2003, was relatively low at 0.01% [13]. A total of 4,341 HIV infections were identified in Korea from 1985 to 2005, including 88% Koreans and 12% foreigners. The proportion of HIV infections in males (89%) was much higher than that in females (11%) [14]. The number of newly diagnosed HIV infections has increased yearly in Korea since 1998: 137 in 1998, 244 in 2000, 457 in 2002, and 763 in 2004 [14]. The Korean government conducted various projects to slow down the increase in HIV infection; HIV diagnosis and financial support for treatment were offered to people living with HIV/AIDS (PLWHA) [14]. There are no epidemiological studies estimating or projecting the rate of HIV transmission or evaluating HIV/AIDS prevention projects in Korea, nor even a basic investigation of HIV seroprevalence. As one type of HIV testing center in Korea, PHCs are responsible for implementing HIV/STI prevention for susceptible and low-income individuals and for anonymous test takers, by conducting both mandatory and voluntary HIV testing under the national HIV surveillance system. In 2005, PHCs conducted about 6% of HIV tests in Korea and detected approximately 19% of newly diagnosed HIV infections [15]. It was urgent to identify the status of HIV seroprevalence among visitors to PHCs, whose HIV positivity was higher than that at other HIV testing centers. Therefore, we investigated the HIV seroprevalence status and its characteristics among visitors to PHCs.
Data Collection
PHCs in Korea have adopted a nationally incorporated computerized data processing system, known as the Health Care Information System (HCIS), since 2000. The system handles various health-related test results along with other demographic and laboratory information. A total of 372,692 HIV tests were carried out at 246 PHCs in 2005. The following data from the HCIS were collected from each center: institutional identification, reception number, reception date, gender, date of birth, residence area, test method, test kit, test result, reason for testing, and code. The code is a parameter to identify the testing frequency of each individual, and it is linked to each residential identification number (RID, a 13-digit number), a unique number for each Korean that encodes the date of birth, gender, birth place and a check number. The RID was declared during initial testing, and in subsequent tests the unique code was internally assigned inside the HCIS. The RID cannot be deduced from the code, which ensures an individual's confidentiality.
Public HIV testing system under the National HIV surveillance
In Korea, HIV-reactive samples from primary screening tests at PHCs are referred to a local Institute of Health & Environment (IHE) for confirmatory testing. There are 17 local IHEs in 7 cities and 10 provinces. Confirmed positive samples are referred to the division of AIDS at the Korean Centers for Disease Control and Prevention (KCDC), and the division makes the final decision on HIV infection status for each sample [7].
Subjects
All individuals in this study took free-of-charge HIV tests as part of personal physical examinations within community health improvement programs run by PHCs. There were 14 reasons for HIV testing recorded in the HCIS, and these were classified into four groups: the General group, the HIV infection suspected group (HIV ISG), the HIV test recommended group (HIV TRG), and the STI risk group. We grouped the STI risk group and the HIV TRG according to the "Guidelines for HIV/AIDS control" [7]. Individuals in the General group took HIV tests as a part of a health checkup, and the HIV ISG took HIV tests based on suspicion of HIV infection (Table 1). As anonymous tests could not determine the number of individuals, they were not included in the seroprevalence estimates. We calculated the positivity for the anonymous testing group instead.
Data Analysis
HIV seroprevalence was defined as the number of confirmed PLWHA divided by the total number of HIV-tested individuals during the study period. Initially indeterminate samples that were positive on follow-up were counted as HIV-positive samples, and the first test date was defined as the date of diagnosis.
Tests without a full RID or with an incorrect RID (19,252 tests), and anonymous cases (9,877 tests), were excluded from the initial 372,692 tests. The testing frequency of each individual in a single year was referred to as the repeated number. The mean of the repeated number was calculated for each group and by gender. HIV seroprevalence was expressed as the number of PLWHA per 10,000 individuals. Logistic regression analysis was performed to examine factors that were independently associated with HIV seroprevalence. The multivariable models were fit adjusting for the variables gender, age group, nationality, region, and reason for testing. Statistical significance (p < 0.05) was defined at the 95% confidence interval. All statistical analyses were performed using SAS 9.1.
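To make the type of calculation concrete, the following is a minimal sketch in Python (the original analysis was carried out in SAS 9.1) of how seroprevalence per 10,000 individuals and adjusted odds ratios from a logistic regression can be obtained. The simulated data, column names and coefficients are illustrative assumptions, not the HCIS records.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated individual-level records (hypothetical columns, not the HCIS schema).
rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),        # 1 = male, 0 = female
    "age_30_49": rng.integers(0, 2, n),   # 1 = aged 30-49, 0 = otherwise
})
# Low baseline infection probability, higher odds for males and for ages 30-49.
logit_p = -7.0 + 1.8 * df["male"] + 1.0 * df["age_30_49"]
df["hiv_positive"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Seroprevalence: confirmed positives per 10,000 tested individuals.
prevalence_per_10000 = 10_000 * df["hiv_positive"].mean()

# Adjusted odds ratios: exponentiated logistic-regression coefficients.
fit = smf.logit("hiv_positive ~ male + age_30_49", data=df).fit(disp=False)
adjusted_or = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())  # 95% confidence intervals for the ORs

print(round(prevalence_per_10000, 1))
print(adjusted_or)
print(conf_int)
```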
Results
A total of 280,456 individuals were tested at 246 PHCs in 2005, and 149 PLWHA were identified. The repeated number was significantly greater in the STI risk group than in the other groups (p < 0.0001), and there were statistically significant differences in the repeated number between males and females in the HIV ISG and the STI risk group, but not in the General group and the HIV TRG (Figure 1). Table 2 presents demographic data on HIV seroprevalence in Korea. Females accounted for 68.7% of all tested individuals (n = 280,456), individuals in their twenties were the leading age group (45.4%), and metropolitan city dwellers made up 40.4%. The STI risk group was the largest category (51.6%) among the reasons for testing. Overall HIV seroprevalence was 5.3 per 10,000 individuals, and males showed significantly higher seroprevalence than females (adjusted Odds Ratio (adj. OR): 6.2; 95% Confidence Interval (CI): 3.8–10.2). The higher prevalence in the HIV ISG among males and those aged 30–49 years might be due to MSM [18,19]. However, we do not have any information about this, and a study of sexual behavior will be required in order to characterize the HIV transmission routes in Korea.
HIV seroprevalence among those aged 20–29 years was lower than among older individuals. This is similar to the seroprevalence of herpes simplex virus type 2 (HSV-2) [20] but differs from HIV seroprevalence in the U.S., Russia, and many European countries [21][22][23].
There are about 910,000 foreigners in Korea, and about 65% are from China, the Philippines, Thailand, Viet Nam, and Russia [24]. Foreigners who are engaged in entertainment or sports for more than 3 months are required to take HIV tests in accordance with the Prevention Act for Acquired Immunological Deficiency Syndrome [7]. In this study, foreigners showed significantly higher seroprevalence than native Koreans (p < 0.0001). Two of 18 HIV-infected foreigners belonged to the STI risk group (14.1 per 10,000 individuals) and the others were included in the General group (21.0 per 10,000 individuals). The major reason for taking tests in the General group was to obtain medical certification, which contributed to the high seroprevalence. For foreigners, medical certification based on a compulsory medical checkup is required for a job or for residence, which may be a stricter requirement. In Japan, the proportion of foreigners among HIV infections was 13% in 2004, an infection status similar to our result [25]. The Korean government provides foreigners with free-of-charge HIV testing, counseling, education, and medical tests at PHCs to protect the domestic population from new HIV infections and also to take care of the foreigners' health.
Females in the STI risk group are required to take a mandatory test every six months, but we found that the number of HIV tests they took was less than two per year. HIV-infected females from the STI risk group cannot be gainfully employed in businesses with routine testing. Therefore, people employed in the high-risk industries will likely take anonymous tests. If they are positive, they do not take the due mandatory test and will leave the industry or go underground. The seroprevalence of females is lower in the STI risk group than in the General group, so a change in the testing policy for the STI risk group needs to be considered. A limitation of mandatory testing was demonstrated in Austria, where HIV prevalence among registered CSWs is 0% but is 3.7% among illegal sex workers [26].
HIV infection is a major health problem in prisons around the world [27,28], and prisoners showed higher seroprevalence than other groups in Korea, although the reasons were not addressed in this study. HIV seroprevalence in metropolitan cities was higher than in small towns or rural areas, and this result is similar to other countries [29].
The HIV seroprevalence in tuberculosis (TB) patients treated at PHCs was similar to that in the General group. Although Korea has a low HIV prevalence, it carries an intermediate TB burden; the TB prevalence was estimated at 12.3 per 10,000 in 2006 by the WHO [30]. TB is the most common opportunistic infection in AIDS patients, and AIDS is a major cause of TB- or pneumocystis-associated deaths in Korea [31,32]. However, HIV testing is not compulsory for TB patients in Korea, so it is difficult to determine HIV prevalence among them. Fortunately, we could find HIV infections among TB patients in our study because TB patients at PHCs were recommended to take HIV tests.
By statute, the STI risk group and the HIV TRG are required to take mandatory tests at PHCs for the prevention of HIV transmission, and the HIV ISG takes voluntary tests for early detection. For the General group, HIV testing is free of charge, which leads low-income individuals to take tests at PHCs. We can determine the status of seroprevalence in each group and compare the characteristics of seroprevalence among groups, through both mandatory and voluntary testing under the national HIV surveillance system. Our results were derived from the national HIV surveillance data collected by the HCIS. The HIV seroprevalence of visitors to PHCs can be used for the evaluation of community health policy as well as national HIV/AIDS and STI policy. Therefore, our results are highly relevant to the study of HIV epidemiology among PHC test takers.
Our study had several limitations. First, the General group included about 31% of the total test takers at PHCs and consisted mainly of individuals with low income who were tested because of job or welfare requirements. Therefore, patterns of HIV seroprevalence in the General group identified in this study may not be nationally representative. Second, the seroprevalence in the STI risk group may be underestimated because of the possibility that individuals could undergo anonymous testing prior to mandatory testing. Third, the overall HIV seroprevalence (5.3 per 10,000 individuals) might be underestimated due to the exclusion of anonymous test takers. In 2005, 35 HIV-positive cases were detected among 9,877 anonymous tests at PHCs, and their positivity was much higher, at 35.4 per 10,000 tests. Fourth, obtaining medical certification for a job or residence application in the General group currently seems to be in effect mandatory; this is more common among foreign workers.
Conclusion
UNAIDS/WHO do not support mandatory testing of individuals, except blood donors, on public health grounds. Voluntary testing is more likely to result in behavior change to avoid HIV transmission [33]. Also, our results revealed that voluntary testing might be superior for the identification of HIV-infected individuals, since HIV seroprevalence was low in the STI risk group while high in the HIV ISG. In addition, mandatory testing of the STI risk group raises a conflict between the human rights of the group and the prevention of STI transmission. A change in the national HIV testing policy is currently being discussed in Korea. We suggest that Korea needs to develop a method encouraging more people to take voluntary tests at PHCs and also to expand anonymous testing centers and the Voluntary Counseling and Testing Program (VCT) so that the general population can easily access HIV testing.
A novel algorithm for finding top-k weighted overlapping densest connected subgraphs in dual networks
The use of networks for modelling and analysing relations among data is currently growing. Recently, the use of a single network for capturing all the aspects of some complex scenarios has shown some limitations. Consequently, it has been proposed to use Dual Networks (DN), a pair of related networks, to analyse complex systems. The two graphs in a DN have the same set of vertices and different edge sets. Common subgraphs among these networks may convey some insights about the modelled scenarios. For instance, the detection of the Top-k Densest Connected subgraphs, i.e. a set of k subgraphs having the largest density in the conceptual network which are also connected in the physical network, may reveal sets of highly related nodes. After proposing a formalisation of the approach, we propose a heuristic to find a solution, since the problem is computationally hard. A set of experiments on synthetic and real networks is also presented to support our approach.
The "Definitions" section introduces the problem we are interested in; "The proposed algorithm" section presents our heuristic; the "Experiments" section discusses the case studies; finally, the "Conclusion" section concludes the paper.
Related work
Many complex systems cannot be efficiently modelled using a single network without loss of information. Therefore, the use of dual networks is growing (Wu et al. 2016; Sun and Kardia 2010). The applications span a large number of fields, as introduced before: from bioinformatics to social networks. In genetics, dual networks are used to describe and analyse interactions among genetic variants. They can reveal the common effects of multiple genetic variants (Sun and Kardia 2010), using a protein-protein interaction network that represents physical interactions and a weighted network that represents the relations between two genetic variants, usually measured by statistical tests.
A relevant problem in network analysis is that of discovering dense communities, as they represent strongly related nodes. The problem of finding communities in a network or a dual network depends on the specific model of dense or cohesive graph considered. Several models of cohesive subgraph have been considered in the literature and applied in different contexts. One of the first definitions of a cohesive subgraph is a fully connected subgraph, i.e. a clique. However, the determination of a clique of maximum size, also referred to as the Maximum Clique Problem, is NP-hard (Hastad 1996), and it is difficult to approximate (Zuckerman 2006). Moreover, in real networks communities may have missing edges; therefore, the clique model is often too strict and may fail to find some important subgraphs. Consequently, many alternative definitions of cohesive subgraphs that are not fully interconnected have been introduced, including s-club, s-plex and densest subgraph (Komusiewicz 2016).
Fig. 1 Workflow of the proposed approach. In the first step the input conceptual and physical networks are merged together using a network alignment approach; then Weighted-Top-k-Overlapping DCS is applied on the alignment graph. Each extracted subgraph induces a connected subgraph in the physical network and one of the top-k overlapping weighted densest subgraphs in the conceptual one.
A densest subgraph is a subgraph with maximum density (where the density is the ratio between the number of edges and the number of nodes of the subgraph), and the Densest-Subgraph problem asks for a subgraph of maximum density in a given graph. The problem can be solved in polynomial time (Goldberg 1984; Kawase and Miyauchi 2018) and approximated within factor 1/2 (Asahiro et al. 2000). Notice that the Densest-Subgraph problem can also be extended to edge-weighted networks. Recently, Wu et al. (2016) proposed an algorithm for finding a densest connected subgraph in a dual network. The approach is based on a two-step strategy. In the first step, the algorithm prunes the dual network without eliminating the optimal solution. In the second step, two greedy approaches are developed to build a search strategy for finding a densest connected subgraph. Briefly, the first step finds the densest subgraph in the conceptual network. The second step refines this subgraph to guarantee that it is connected in the physical network.
In this contribution we use an approach based on local network alignment (LNA), which aims to find (relatively) small regions of similarity among two or more input networks. Such regions may be overlapping or not, and they represent topologically conserved regions among the networks. For instance, in protein interaction networks these regions are related to conserved motifs or patterns of interactions (Guzzi and Milenković 2017). LNA algorithms are usually based on building an intermediate structure, defined as an alignment graph, and on the subsequent mining of it (Milano et al. 2020). For instance, the algorithm of Ciriello et al. (2012) and its successor AlignMCL (Mina and Guzzi 2014) are based on the construction of alignment graphs (see the related papers for complete details about this construction). GLAlign (Global Local Aligner) is a more recent local network alignment methodology (Milano et al. 2018) that mixes topological information from a global alignment and biological information according to a linear combination schema, while the more recent L-HetNetAligner (Milano et al. 2020) extends local alignment to heterogeneous networks.
While the literature on network mining has mainly focused on the problem of finding a single subgraph, interest in finding more than one subgraph has recently emerged (Balalau 2015; Galbrun et al. 2016; Hosseinzadeh 2020; Cho et al. 2013). The proposed approaches usually allow overlaps between the computed dense subgraphs. Indeed, there can be nodes that are shared between interesting dense subgraphs, for example hubs. The proposed approaches differ in the way they deal with overlapping. The problem defined in Balalau (2015) controls the overlap by limiting the Jaccard coefficient between each pair of subgraphs of the solution. The Top-k-Overlapping problem, introduced in Galbrun et al. (2016), includes a distance function in the objective function. In this paper, we follow this last approach and we extend it to weighted networks.
Definitions
This section introduces the main concepts related to our problem.
Definition 1 Dual Network.
A Dual Network (DN) G(V, E_c, E_p) is a pair of networks: a conceptual weighted network G_c(V, E_c) and a physical unweighted one G_p(V, E_p). Now, we introduce the definition of weighted density of a graph.
Definition 2 Density.
Given a weighted graph G(V, E, weight), let v ∈ V be a node of G, and let vol(v) be the sum of the weights of the edges incident in v. The density of the weighted graph G is defined as

ρ(G) = weight(E) / |V|,

that is, the sum of the weights of the edges of G divided by the number of its nodes. Given a graph (weighted or unweighted) G with a set V of nodes and a subset Z ⊆ V, we denote by G[Z] the subgraph of G induced by Z. Given E′ ⊆ E, we denote by weight(E′) the sum of the weights of the edges in E′. Given a dual network, we denote by G_p[I] and G_c[I], respectively, the subgraphs induced in the physical and conceptual network by the set I ⊆ V.
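As an illustration of these definitions, the following minimal Python sketch (using networkx; the toy edges and weights are arbitrary assumptions) builds a small dual network, computes vol(v) and the weighted density of an induced conceptual subgraph, and checks connectivity of the corresponding physical subgraph.

```python
import networkx as nx

# Toy dual network on the same vertex set: weighted conceptual graph G_c,
# unweighted physical graph G_p (edges and weights are arbitrary).
Gc = nx.Graph()
Gc.add_weighted_edges_from([(1, 2, 0.9), (2, 3, 0.4), (1, 3, 0.7), (3, 4, 0.2)])
Gp = nx.Graph()
Gp.add_edges_from([(1, 2), (2, 3), (3, 4), (1, 4)])

def vol(G, v):
    """vol(v): sum of the weights of the edges incident in v."""
    return sum(d.get("weight", 1.0) for _, _, d in G.edges(v, data=True))

def density(G):
    """Weighted density: total edge weight divided by the number of nodes."""
    if G.number_of_nodes() == 0:
        return 0.0
    total = sum(d.get("weight", 1.0) for _, _, d in G.edges(data=True))
    return total / G.number_of_nodes()

I = {1, 2, 3}                                # a candidate node set
print(vol(Gc, 3))                            # 0.4 + 0.7 + 0.2
print(density(Gc.subgraph(I)))               # density of G_c[I]
print(nx.is_connected(Gp.subgraph(I)))       # is G_p[I] connected?
```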
A densest common subgraph DCS, formally defined in the following, is a subset of nodes I that induces a connected subgraph in the conceptual network and a connected subgraph in the physical network.
Definition 3 Densest Common Subgraph.
Given a dual network G(V, E_c, E_p), a densest common subgraph in G(V, E_c, E_p) is a subset of nodes I ⊆ V such that G_p[I] is connected and the density of G_c[I] is maximum.
In this paper, we are interested in finding k ≥ 1 densest connected subgraphs. However, to avoid taking the same copy of a subgraph, or subgraphs that are very similar, we consider the following distance function introduced in Galbrun et al. (2016). Given two sets of nodes A, B ⊆ V, their distance is defined as

d(A, B) = 2 − |A ∩ B|² / (|A| |B|).

Notice that 2 − |A ∩ B|² / (|A| |B|) decreases as the overlap between A and B increases. Now, we are able to introduce the problem we are interested in.
Problem 1 Weighted-Top-k-Overlapping DCS
Input: a weighted graph G(V, E, weight), an integer k ≥ 1, and a parameter λ > 0.
Output: a set X = {G[X_1], ..., G[X_k]} of k connected subgraphs of G such that the following objective function is maximised:

Σ_{i=1}^{k} ρ(G[X_i]) + λ · Σ_{1 ≤ i < j ≤ k} d(X_i, X_j).

Weighted-Top-k-Overlapping DCS, for k ≥ 3, is NP-hard, as it is NP-hard already on unweighted graphs. Notice that for k = 1, Weighted-Top-k-Overlapping DCS is exactly the problem of finding a single weighted densest connected subgraph, hence it can be solved in polynomial time (Goldberg 1984).
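For concreteness, a minimal sketch evaluating this objective for a candidate solution (assuming NetworkX graphs with a 'weight' edge attribute and a user-chosen λ; not part of the algorithm itself):

```python
from itertools import combinations

def weighted_density(G):
    # rho(G) = weight(E) / |V|
    return G.size(weight="weight") / G.number_of_nodes()

def distance(A, B):
    # d(A, B) = 2 - |A ∩ B|^2 / (|A| |B|); 0 if A = B
    if A == B:
        return 0.0
    return 2.0 - len(A & B) ** 2 / (len(A) * len(B))

def objective(G, node_sets, lam):
    """Objective of Weighted-Top-k-Overlapping DCS for a candidate
    solution given as a list of node sets."""
    dens = sum(weighted_density(G.subgraph(S)) for S in node_sets)
    dist = sum(distance(A, B) for A, B in combinations(node_sets, 2))
    return dens + lam * dist
```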
Greedy algorithms for DCS
One of the ingredients of our method is a variant of a greedy algorithm for DCS, denoted by Greedy, which is an approximation algorithm for the problem of computing a densest connected subgraph of a given graph. Given a weighted graph G, Greedy (Asahiro et al. 2000) iteratively removes from G a vertex v having lowest vol(v) and stops when all the vertices of the graph have been removed. It follows that at each iteration i, with 1 ≤ i ≤ |V|, Greedy computes a subgraph G_i of G. The output of this algorithm is a densest subgraph among G_1, ..., G_{|V|}. The algorithm has a time complexity of O(|E| + |V| log |V|) on weighted graphs and achieves an approximation factor of 1/2 (Asahiro et al. 2000). We introduce here a variant of the Greedy algorithm, called V-Greedy. Given an input weighted graph G, V-Greedy, similarly to Greedy, at each iteration i, with 1 ≤ i ≤ |V|, removes a vertex v having lowest vol(v) and computes a subgraph G_i. Then, among the subgraphs G_1, ..., G_{|V|}, V-Greedy returns a subgraph G_i that maximises the value

ρ(G_i) + 2 (ρ(G_i) / |V_i|).

Essentially, when selecting the subgraph to return among G_1, ..., G_{|V|}, we add to the density the correction factor 2(ρ(G_i)/|V_i|). This factor is added to avoid returning a subgraph that is not well connected in terms of edge connectivity, that is, one that contains a small cut. For example, consider a graph with two equal-size cliques K_1 and K_2 having the same (large) weighted density and a single edge of large weight connecting them. Then the union of K_1 and K_2 is denser than both K_1 and K_2, hence Greedy returns the union of K_1 and K_2. This may prevent us from finding K_1 and K_2 as a solution of Weighted-Top-k-Overlapping DCS. In this example, when the density of K_1 and K_2 is close enough to the density of their union, V-Greedy will return one of K_1, K_2.
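A minimal sketch of V-Greedy under the same assumptions (a NetworkX graph with a 'weight' edge attribute); this naive version recomputes the minimum-degree vertex at each step, whereas a heap-based implementation would attain the stated complexity:

```python
import networkx as nx

def v_greedy(G):
    """V-Greedy sketch: repeatedly peel the vertex of minimum weighted
    degree vol(v); among all intermediate subgraphs, return the one
    maximising rho(G_i) + 2 * rho(G_i) / |V_i|."""
    H = G.copy()
    best_score, best_nodes = float("-inf"), set()
    while H.number_of_nodes() > 0:
        rho = H.size(weight="weight") / H.number_of_nodes()  # weighted density
        score = rho + 2.0 * rho / H.number_of_nodes()        # density + correction
        if score > best_score:
            best_score, best_nodes = score, set(H.nodes())
        v = min(H.nodes(), key=lambda u: H.degree(u, weight="weight"))  # lowest vol(v)
        H.remove_node(v)
    return G.subgraph(best_nodes).copy()
```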
The proposed algorithm
In this section we present our heuristic for Weighted-Top-k-Overlapping DCS in dual networks. The approach is based on two main steps:

1. First, the input networks are integrated into a single weighted alignment graph preserving the connectivity properties of the physical network.
2. Second, the obtained alignment graph is mined by using an ad-hoc heuristic for Weighted-Top-k-Overlapping DCS based on the V-Greedy algorithm.
Building of the alignment graph
In the first step the algorithm receives as input: a weighted graph G_c(V, E_c) (the conceptual graph); an unweighted graph G_p(V, E_p) (the physical graph); an initial set (seed nodes) of node pairs P, where each pair defines a correspondence between a node of G_c and a node of G_p; and a distance threshold δ, the maximum distance that two nodes may have. For example, when δ is set to one, only adjacent nodes in both networks are considered. Given the input data, the algorithm starts by building the nodes of the alignment graph: the alignment graph contains a node for each pair in P. The edges and weights of the alignment graph are defined as follows (see the sketch after this list):

• An edge {u, v} is defined in the alignment graph when the nodes corresponding to u and v are adjacent in G_p and in G_c; the weight of {u, v} is equal to the weight of the edge connecting the nodes corresponding to u and v in G_c.
• An edge {u, v} is defined in the alignment graph when the nodes corresponding to u and v are adjacent in G_p and have distance lower than δ in G_c; the weight of {u, v} is equal to the average of the weights on a shortest path connecting the nodes corresponding to u and v in G_c.
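A sketch of this construction under some illustrative assumptions: NetworkX graphs, a dict `pairs` mapping each alignment-graph node to its (conceptual, physical) node pair, and hop distance in G_c for the δ test:

```python
import networkx as nx

def build_alignment_graph(Gc, Gp, pairs, delta):
    """Phase 1 sketch: one alignment node per seed pair; connectivity is
    taken from Gp, edge weights from Gc."""
    A = nx.Graph()
    A.add_nodes_from(pairs)
    # shortest (hop-count) paths in Gc, up to delta hops
    paths = dict(nx.all_pairs_shortest_path(Gc, cutoff=delta))
    nodes = list(pairs)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            (cu, pu), (cv, pv) = pairs[u], pairs[v]
            if not Gp.has_edge(pu, pv):
                continue  # the pair must be adjacent in the physical graph
            if Gc.has_edge(cu, cv):
                A.add_edge(u, v, weight=Gc[cu][cv]["weight"])
            elif cv in paths.get(cu, {}):
                path = paths[cu][cv]
                w = [Gc[a][b]["weight"] for a, b in zip(path, path[1:])]
                A.add_edge(u, v, weight=sum(w) / len(w))  # average along the path
    return A
```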
A heuristic for Weighted-top-k-overlapping DCS
In the second phase of our algorithm, we solve Weighted-Top-k-Overlapping DCS on the alignment graph G computed in phase 1 via a heuristic. We present here our heuristic for Weighted-Top-k-Overlapping DCS, called Iterative Weighted Dense Subgraphs (IWDS). The heuristic starts with an empty set X = ∅ and consists of k iterations. At each iteration i, with 1 ≤ i ≤ k, given the set X = {G[X_1], ..., G[X_{i−1}]} of subgraphs of G computed so far, IWDS computes a subgraph G[X_i] and adds it to X.
The first iteration of IWDS applies the V-Greedy algorithm (see the "Greedy algorithms for DCS" section) on G and computes G[X_1]. In iteration i, with 2 ≤ i ≤ k, IWDS applies one of the two following cases, depending on a parameter f, 0 < f ≤ 1, and on the size of the set C_{i−1} = X_1 ∪ ... ∪ X_{i−1} (the set of nodes already covered by the subgraphs in X).

Case 1. If |C_{i−1}| ≤ f|V| (that is, at most f|V| nodes of G are covered by the subgraphs in X), IWDS applies the V-Greedy algorithm on a subgraph G′ of G obtained by retaining the α|C_{i−1}| nodes of C_{i−1} having highest weighted degree in G (α is a parameter) and removing the other nodes of C_{i−1}.

Case 2. If |C_{i−1}| > f|V| (more than f|V| nodes of G are covered by the subgraphs in X), IWDS applies the V-Greedy algorithm on a subgraph G′′ of G obtained by removing the (1 − α)|C_{i−1}| nodes of C_{i−1} having lowest weighted degree in G (recall that α is a parameter of IWDS).

In both cases, IWDS computes G[X_i] as a weighted connected dense subgraph, distinct from those in X, in the reduced graph (G′ or G′′).
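A compact sketch of IWDS, reusing v_greedy() from the earlier sketch. For simplicity, it treats both cases as keeping the α fraction of already-covered nodes with highest weighted degree, and it does not enforce distinctness of the returned subgraphs:

```python
def iwds(G, k, f=0.5, alpha=0.25):
    """IWDS sketch (f and alpha are illustrative defaults): k iterations
    of V-Greedy, each on a graph reduced around the covered nodes."""
    X, covered = [], set()
    for _ in range(k):
        H = G
        if covered:
            by_deg = sorted(covered, key=lambda u: G.degree(u, weight="weight"))
            drop = by_deg[: int((1 - alpha) * len(by_deg))]  # lowest vol(v) first
            H = G.copy()
            H.remove_nodes_from(drop)
        sub = v_greedy(H)  # from the earlier sketch
        X.append(set(sub.nodes()))
        covered |= set(sub.nodes())
    return X
```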
Complexity evaluation.
We denote by n (by m, respectively) the number of nodes (of edges, respectively) of the dual network. The first step requires the analysis of both the physical and the conceptual graph, and the construction of the alignment graph. This requires O(n²) time, plus the time to compute the edge weights. The calculation of the edge weights requires the computation of shortest paths among all the node pairs in the physical graph using the implementation of Chan (Chan 2012); therefore it requires O(n m_p) time (m_p is the number of edges of the physical graph).
As for Step 2, IWDS performs k iterations. Each iteration applies V-Greedy on G and requires O(mn log n) time, as for the Greedy algorithm. Iteration i, with 2 ≤ i ≤ k, first computes the set of covered nodes in order to find those nodes that have to be removed (or retained). For this purpose, we sort the nodes in C_{i−1} based on their weighted degree in O(n log n) time. Thus the overall time complexity of IWDS is O(kmn log n).
Experiments
In this section, we provide an experimental evaluation of IWDS on synthetic and real networks. The design of a strong evaluation scheme for our algorithm is not simple, since we have to face two main issues:

1. Existing methods for computing the top-k overlapping subgraphs (Galbrun et al. 2016) are defined for unweighted graphs and cannot be used on dual networks.
2. Existing network alignment algorithms do not aim to extract the top-k densest subgraphs.
Consequently, we cannot easily compare our approach with the existing state-of-the-art methods, and we designed an ad-hoc procedure for the evaluation of our method based on the following steps. First, we consider the performance of our approach on synthetic networks. In this way, we show that, in many of the cases we considered, IWDS can correctly recover the top-k weighted densest subgraphs. Then we apply our method to four real-world dual networks. The alignment algorithm described in the "Building of the alignment graph" section is implemented in Python 3.7 using the NetworkX package for managing networks (Hagberg et al. 2008). IWDS is implemented in MATLAB R2020a. We performed the experiments on a MacBook Pro (OS version 10.15.3) with a 2.9 GHz Intel Core i5 processor, 8 GB 2133 MHz LPDDR3 RAM, and Intel Iris Graphics 550 1536 MB.
Synthetic networks
In the first part of our experimental evaluation, we analyse the performance of IWDS in finding planted ground-truth subgraphs on synthetic datasets.
In Synthetic1, each planted dense subgraph contains 30 nodes and has edge weights randomly generated in the interval [0.8, 1]. In Synthetic3, each planted dense subgraph contains 20 nodes that are not shared with other planted subgraphs; the subgraphs are arranged in a cycle, where 5 nodes of each subgraph are shared with the subgraph on one side and 5 nodes with the subgraph on the other side. Edge weights are randomly generated in the interval [0.8, 1].
These cliques are then connected to a background subgraph of 100 nodes. We consider three different ways to generate the background subgraph: Erdős-Rényi with parameter p = 0.1, Erdős-Rényi with parameter p = 0.2, and Barabási-Albert with parameter equal to 10. Weights of the background graphs are randomly generated in the interval [0, 0.5]. Then 50 edges connecting the cliques and the background graph are randomly added (with weights randomly generated in the interval [0, 0.5]).
Based on this approach, we generate four different sets of synthetic networks, called Synthetic1, Synthetic2, Synthetic3 and Synthetic4. Synthetic1 (for the non-overlapping case) and Synthetic3 (for the overlapping case) are generated as described above. Synthetic2 and Synthetic4, respectively, are obtained by applying noise to the synthetic networks in Synthetic1 and Synthetic3. The noise is added by varying 5%, 10% and 15% of the node relations of each network. A set of pairs of nodes is chosen randomly: if the two nodes belong to the same clique, the weight of the edge connecting them is changed to a random value in the interval [0, 0.5]; otherwise, an edge connecting the two nodes is added (if not already in the network) and its weight is randomly assigned a value in the interval [0.8, 1].
Outcome. We present the results of our experimental evaluation, in particular the average running time, density, distance and F1-score, varying the parameter α. We recall that the F1-score is the harmonic mean of precision and recall, and, as in Galbrun et al. (2016), we consider this measure to evaluate the accuracy of our method in detecting the ground-truth subgraphs. Following Yang and Leskovec (2012), we consider the number of shared nodes between each ground-truth subgraph and each detected subgraph, so that we are able to define the best matching between ground-truth subgraphs and detected subgraphs. Then, we compute the F1[t/d] measure as the average F1-score of the best-matching ground-truth subgraph to each detected subgraph (truth to detected) and the F1[d/t] measure as the average F1-score of the best-matching detected subgraph to each ground-truth subgraph (detected to truth). Notice that in most of the cases considered, the running time of IWDS increases as α increases. Also, generally, the solutions returned by IWDS for larger values of α are denser than for small values, while the solutions for small values of α have a higher value of distance (hence the returned subgraphs have a smaller overlap).
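A small sketch of how these two scores can be computed, representing each subgraph by its node set (plain Python; the subgraph sets are assumed given):

```python
def f1(truth_set, detected_set):
    """F1 overlap between a ground-truth and a detected node set."""
    inter = len(truth_set & detected_set)
    if inter == 0:
        return 0.0
    precision = inter / len(detected_set)
    recall = inter / len(truth_set)
    return 2 * precision * recall / (precision + recall)

def f1_t_d(truth, detected):
    """F1[t/d]: best-matching ground-truth subgraph for each detected one."""
    return sum(max(f1(t, d) for t in truth) for d in detected) / len(detected)

def f1_d_t(truth, detected):
    """F1[d/t]: best-matching detected subgraph for each ground-truth one."""
    return sum(max(f1(t, d) for d in detected) for t in truth) / len(truth)
```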
Tables 1 and 3 report the average results of running time (in minutes), density, distance and F1 scores for the two noiseless datasets. Table 1 shows the experimental results for the noiseless Synthetic1 dataset, where the ground-truth subgraphs are disjoint. In this case, IWDS is able to detect the ground-truth subgraphs for all values of α, averaged over 300 examples. For Synthetic4, the added noise has a significant impact on the quality of the computed solutions, even for a noise value equal to 0.05. While increasing noise has a limited effect on IWDS for small values of α (α ≤ 0.25), higher values of α lead to a degradation in performance, in particular for F1[t/d].
Dual networks
We evaluate IWDS on four real-world dual network datasets, described as follows.

G-graphA. The G-graphA dataset is derived from the GoWalla social network, where users share their locations (expressed as GPS coordinates) by checking in on the web site (Cho et al. 2011). Each node represents a user and each edge links two friends in the network. We obtained the physical network by considering the friendship relation in the social network. We built the conceptual network by considering the distance among users. We then ran the first step of our algorithm and obtained the alignment graph G-graphA, containing 2,241,339 interactions and 9878 nodes (we set δ = 4). In this case a DCS represents a set of friends that share check-ins in nearby locations.
DBLP-graphA. The DBLP-graphA dataset is extracted from a computer science bibliography and represents interactions between authors. Nodes represent authors, and each edge in the physical network connects two authors who have co-authored at least one paper. Edges in the conceptual network represent the similarity of the research interests of the authors, calculated on the basis of all their publications. After running the first step of the algorithm (using δ = 4), we obtained an alignment graph, DBLP-graphA, containing 553,699 interactions and 18,954 nodes. In this case a DCS represents a set of co-authors that share some strong common research interests. The use of DNs is essential here, since the physical network shows only co-authorships, which do not imply many common interests, while the conceptual network connects authors with common interests who may not be co-authors.
HS-graphA. HS-graphA is a biological dataset taken from the STRING database (Szklarczyk et al. 2016). Each node represents a protein, and each edge takes into account the reliability of the interaction. We use two networks to model the database: a conceptual network representing the reliability values and a physical network storing the binary interactions. The HS-graphA dataset contains 5,879,727 interactions and 19,354 nodes (we set δ = 4).
Protein-Interaction. We extracted from the STRING database a subnetwork of proteins involved in the SARS-CoV-2 infection (Szklarczyk et al. 2016). The physical network contains interacting proteins, while the conceptual network contains the strength of the association among them. Protein-Interaction contains 192 nodes and 418 edges (Table 5).
Outcome. For these large datasets, we set the value of k to 20, following the approach in Galbrun et al. (2016). Table 6 reports the running time of IWDS, and the density and distance of the solutions returned by IWDS. As for the synthetic datasets, we consider six different values of α. As shown in Table 6, by increasing the value of α from 0.05 to 0.5, IWDS (except in one case, HS-graphA with α = 0.1) returns solutions that are denser but with lower distance. Table 6 also shows how the running time of IWDS is influenced by the size of the network and by the value of α. We put a bound of 20 h on the running time of IWDS, and the method was not able to return a solution for HS-graphA for α ≥ 0.5 within this time. The running time is influenced in particular by the number of edges of the input network. DBLP-graphA and HS-graphA have almost the same number of nodes, but HS-graphA is much denser than DBLP-graphA; IWDS is remarkably slower for HS-graphA than for DBLP-graphA (1.986 times slower for α = 0.05, 6.218 times slower for α = 0.25). The running time of IWDS is also considerably influenced by the value of the parameter α, since it increases as α increases. Indeed, by increasing the value of α, fewer nodes are removed by Case 1 and Case 2 of IWDS, hence in the iterations of IWDS V-Greedy is applied to larger subgraphs. This can be seen in particular for HS-graphA, for which IWDS failed to terminate within 20 h when α ≥ 0.5.

Table 6. Performance of IWDS on the real-world networks for k = 20, varying α from 0.05 to 0.9. For each network, we report the running time in minutes, the density and the distance.
Biological evaluation of results
For biological data, it is possible to evaluate the relevance of the results by considering the biological knowledge that the results may convey.
Biological data are usually annotated with terms extracted from ontologies, e.g. the Gene Ontology. Consequently, experiments analysing biological data may be evaluated in terms of the biological knowledge inferred from the analysis and in terms of the statistical relevance of the results themselves. For instance, given a DCS extracted from two biological networks, it is interesting to determine the biological meaning of the DCS and how relevant it is, i.e. how much biological relevance this DCS may convey with respect to a random one. Usually, subgraphs of biological networks represent groups of interacting proteins sharing some common functions or playing similar biological roles. Consequently, it is possible to evaluate the biological relevance of the obtained results by considering the roles of the proteins. Such information is stored and organised in biological ontologies such as the Gene Ontology (GO) (Harris et al. 2004). GO functional enrichment has been proposed to evaluate the significant presence of common roles or functions in a solution represented as a list of genes/proteins. It has been shown that the use of semantic similarities (SS) is a feasible and efficient way to quantify biological similarity among proteins. SS measures quantify the functional similarity of pairs of proteins/genes by comparing the GO terms that annotate them; therefore, proteins that share a biological role have high values of semantic similarity. As a consequence, genes/proteins that are found in the same solution should have a semantic similarity significantly higher than the random expectation. These considerations guided the design of the evaluation of our results, which we adapted from the evaluation scheme proposed in Mina and Guzzi (2014). Given a DCS DCS_k, we calculate its internal semantic similarity SS(DCS_k) as the average semantic similarity of all the node pairs of the DCS:

SS(DCS_k) = ( Σ_{n_i ∈ DCS_k} Σ_{n_j ∈ DCS_k, j ≠ i} SS(n_i, n_j) ) / ( |DCS_k| (|DCS_k| − 1) ).

We compare the DCSs extracted from the biological network against random ones, obtained by randomly sampling the input networks, to prove their statistical significance. Given a DCS DCS_i, we test the hypothesis that the average semantic similarity of the proteins internal to the DCS, SS(DCS_i), is higher than expected by chance, where the background distribution is estimated from the semantic similarity of random subgraphs RS_j taken from the alignment graph, SS(RS_j), using for instance 0.05 as the significance level.
Consequently, we designed this test as described in the following procedure:

• Let DCS_i be a given DCS;
• Let SS(DCS_i) be its internal semantic similarity;
• Let V_s = {RS_j}, j = 0, ..., 99, be a set of 100 random subgraphs of the same size;
• For each RS_j ∈ V_s, calculate SS(RS_j), the internal semantic similarity of the random solution;
• Compare SS(DCS_i) and all the SS(RS_j) using a non-parametric test;
• Accept or reject the hypothesis that SS(DCS_i) is significantly higher than SS(RS_j).

Consequently, for each graph in the solution we generated 100 random graphs of the same size by sampling the obtained alignment graph. For each graph we calculated its internal semantic similarity using the Resnik measure (Resnik 1999). The results, summarised in Table 7, demonstrate that our solutions are biologically relevant and that the relevance is higher than expected by chance.

Table 7. Comparison of the average semantic similarity for the two biological networks considered.

Solution type        Average semantic similarity
Random solutions     0.3 ± 0.1
DCS                  0.6 ± 0.1
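A sketch of this test in Python, assuming a user-supplied pairwise similarity function ss(a, b) (e.g., Resnik similarity over GO annotations); for simplicity, random node sets are sampled instead of random connected subgraphs, and an empirical p-value stands in for the non-parametric test:

```python
import random

def internal_ss(nodes, ss):
    """Average pairwise semantic similarity of a set of nodes."""
    nodes = list(nodes)
    pairs = [(a, b) for a in nodes for b in nodes if a != b]
    return sum(ss(a, b) for a, b in pairs) / len(pairs)

def significance_test(dcs_nodes, all_nodes, ss, n_random=100, level=0.05):
    """Compare SS(DCS) against 100 same-size random node sets sampled
    from the alignment graph; returns the empirical p-value."""
    observed = internal_ss(dcs_nodes, ss)
    pool = list(all_nodes)
    rand_scores = [internal_ss(random.sample(pool, len(dcs_nodes)), ss)
                   for _ in range(n_random)]
    p = sum(s >= observed for s in rand_scores) / n_random
    return observed, p, p < level
```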
Conclusion
DNs are used to model two kinds of relationships among the elements of the same scenario. A DN is a pair of networks that have the same set of nodes: one network has unweighted edges (the physical network), while the second one has weighted edges (the conceptual network). In this contribution, we introduced an approach that first integrates a physical and a conceptual network into an alignment graph. Then, we solved the Weighted-Top-k-Overlapping DCS problem on the alignment graph to find k dense connected subgraphs. These subgraphs represent subsets of nodes that are strongly related in the conceptual network and connected in the physical one. We presented a heuristic, called IWDS, for Weighted-Top-k-Overlapping DCS and an experimental evaluation of IWDS. As a proof of concept, we first assessed the ability of our algorithm to retrieve known densest subgraphs in synthetic networks. Then we tested the approach on four real networks to demonstrate its effectiveness. Future work will consider a possible high-performance implementation of our approach and the application of the IWDS algorithm to other scenarios (e.g. financial or marketing datasets).
Analysis of cancer-specific survival in patients with metastatic colorectal cancer: An evidence-based medicine study
BACKGROUND: Metastatic colorectal cancer (mCRC) is a common malignancy whose treatment remains a clinical challenge. Cancer-specific survival (CSS) plays a crucial role in assessing patient prognosis and treatment outcomes. However, there is still limited research on the factors affecting CSS in mCRC patients and their correlation.

AIM: To predict CSS, we developed a new nomogram model and a risk grading system to classify risk levels in patients with mCRC.

METHODS: Data were extracted from the United States Surveillance, Epidemiology, and End Results database from 2018 to 2023. All eligible patients were randomly divided into a training cohort and a validation cohort. The Cox proportional hazards model was used to investigate the independent risk factors for CSS. A new nomogram model was developed to predict CSS and was evaluated through internal and external validation.

RESULTS: A multivariate Cox proportional hazards model was used to identify independent risk factors for CSS, and a new CSS nomogram was developed based on these factors. The concordance index (C-index) of the nomogram was 0.718 (95%CI: 0.712-0.725) in the training cohort and 0.722 (95%CI: 0.711-0.732) in the validation cohort, indicating good discrimination ability and better performance than tumor-node-metastasis staging (C-index: 0.533, 95%CI: 0.525-0.540 for the training set; 0.524, 95%CI: 0.513-0.535 for the validation set). The calibration plots and clinical decision curves showed good agreement and good potential clinical validity. The risk grading system divided all patients into three groups, and the Kaplan-Meier curves showed good stratification and differentiation of CSS between the groups. The median CSS times in the low-risk, medium-risk, and high-risk groups were 36 months (95%CI: 34.987-37.013), 18 months (95%CI: 17.273-18.727), and 5 months (95%CI: 4.503-5.497), respectively.

CONCLUSION: Our study developed a new nomogram model to predict CSS in patients with synchronous mCRC. In addition, the risk grading system helps to accurately assess patient prognosis and guide treatment.
INTRODUCTION
Colorectal cancer (CRC) is one of the most common malignant neoplasms, ranking third in incidence (10.2%) and second in mortality (9.2%) [1][2][3]. In countries in Eastern Europe, Latin America, and Asia, the incidence and mortality of CRC are increasing annually [4]. There are no obvious signs or symptoms of CRC in the early stages, and more than one-fifth of patients have developed distant metastases at the time of diagnosis [5]. Among patients with CRC, those with synchronous metastases have lower survival rates than those with metachronous metastases [6]. The most common metastatic sites for CRC are the liver and lung, while bone metastases are rare, and brain metastases occur in only 1% of CRC patients [7]. Although metastatic CRC (mCRC) has the worst prognosis, there are large differences in survival outcomes between patients with different metastatic organs. The 1-year survival rate for patients with liver and lung metastases is greater than 80%, while the 1-year survival rates for patients with bone and brain metastases are 30% and 11%, respectively [8]. Therefore, accurate screening for different risk factors is critical for physicians to predict mCRC outcomes.
Currently, the American Joint Committee on Cancer (AJCC) staging system is the primary method for predicting survival outcomes in patients with mCRC [9]. However, the T stage, N stage, and M stage are the only factors used for distinguishing different prognoses, and this scheme is far from satisfactory in terms of prediction accuracy [10]. A nomogram is a visual tool used to predict the probability of an endpoint occurring and to quantify survival risk. According to the different regression coefficients, the nomogram can include significant factors to improve the prediction accuracy. To date, nomograms have been successfully used to predict the prognosis of patients with CRC but have rarely been used for patients with mCRC [11].
Therefore, our goal was to develop a new nomogram model to predict cancer-specific survival for patients with synchronous mCRC and to divide patients into different risk levels to accurately assess their prognosis.
Research subjects
This study obtained all data from the Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute using SEER*Stat software (version 8.3.6). The data were collected and reported using data items and codes recorded by the North American Association of Central Cancer Registries. The inclusion criteria for patients were as follows: (1) diagnosed with CRC between 2018 and 2023; (2) diagnosed with synchronous metastasis; and (3) histologically confirmed diagnosis. The exclusion criteria were as follows: (1) no distant metastasis; and (2) unknown or missing data, such as race, primary tumor site, T stage, N stage, carcinoembryonic antigen (CEA) status, surgical status, and survival time.
The following variables were collected: race, sex, age at diagnosis, primary site, grade, T stage, N stage, CEA status, distant metastatic status (liver, lung, bone, brain), surgery (primary tumor resection), chemotherapy, cancer-specific survival (CSS), and survival time. CSS was assessed by 1-, 2-, and 3-year survival rates and defined as the time from the date of diagnosis to the date of death due to CRC or the end of the study; staging followed the eighth edition of the AJCC tumor-node-metastasis (TNM) staging system.
Research method
All eligible patients were randomly divided into training and validation cohorts (at a ratio of 7:3). The Pearson chi-square test was used to examine demographic differences between the full cohort, the training cohort, and the validation cohort. A multivariate Cox proportional hazards model was used to explore independent risk factors for CSS, and a predictive nomogram model was built using the training cohort. The C-index, calibration curves, and decision curve analysis (DCA) were used for internal and external validation.
Nomogram analysis
X-tile software was used to determine the optimal critical values according to the total score of the nomogram to establish a risk grading system, and all patients were divided into low-, medium-, and high-risk groups. Kaplan-Meier (K-M) curves of CSS were constructed and compared with the log-rank test.
Statistical analysis
SPSS 23.0 statistical software was used for analysis. The χ² test was used for comparisons of count data, and the t test was used for comparisons of measurement data. The survival rate was calculated by the life-table method, survival curves were plotted by the K-M method, and comparisons were performed by the log-rank method. Multivariate analysis was performed with the Cox proportional hazards regression model, and P < 0.050 was considered to indicate statistical significance.
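As an illustration of this pipeline, a minimal sketch using the Python lifelines library; the file name and column names are hypothetical placeholders for a SEER extract, not the authors' actual variables:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical extract: one row per patient, follow-up time in months,
# event flag (1 = death from CRC), and numerically encoded covariates
df = pd.read_csv("seer_mcrc.csv")
covariates = ["age", "grade", "n_stage", "cea", "liver_met", "chemo", "surgery"]

# Multivariate Cox proportional hazards model for CSS
cph = CoxPHFitter()
cph.fit(df[covariates + ["survival_months", "css_event"]],
        duration_col="survival_months", event_col="css_event")
cph.print_summary()  # hazard ratios, 95% CIs, p-values

# Kaplan-Meier curve for one group, and a log-rank comparison of two groups
low = df[df["risk_group"] == "low"]
high = df[df["risk_group"] == "high"]
KaplanMeierFitter().fit(low["survival_months"], low["css_event"],
                        label="low risk").plot_survival_function()
result = logrank_test(low["survival_months"], high["survival_months"],
                      event_observed_A=low["css_event"],
                      event_observed_B=high["css_event"])
print(result.p_value)
```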
Baseline population information
According to the inclusion criteria, a total of 15838 eligible patients were included in this study, among whom 11088 (70.0%) were randomly assigned to the training cohort and 4750 (30.0%) to the validation cohort. The demographic characteristics of the study population are shown in Table 1.
Prediction factor determination
The Cox proportional hazards model was used to identify independent risk factors for CSS. Multivariate analysis revealed that the independent risk factors in the training cohort were race, age at diagnosis, primary site, tumor grade, N stage, CEA status, liver metastasis, lung metastasis, bone metastasis, brain metastasis, surgery, and chemotherapy (Table 2).
Based on the significant risk factors for CSS, a predictive nomogram model of CSS was established (Figure 1). The regression coefficients and estimates for the training cohort are shown in Table 3. The nomogram was evaluated with internal and external validation. The C-index of the nomogram was 0.718 (95%CI: 0.712-0.725) in the training set, and the C-index in the validation set was 0.722 (95%CI: 0.711-0.732), indicating good discrimination ability and better performance than TNM staging (C-index: training set, 0.533, 95%CI: 0.525-0.540; validation set, 0.524, 95%CI: 0.513-0.535). The calibration plots of CSS showed good agreement between the predicted and actual values in the training and validation cohorts, with 1000 bootstrap samples (Figure 2). The DCA curves showed a large net benefit across most threshold probabilities at different time points, indicating good potential clinical validity for predicting CSS (Figure 3).
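For reference, a hedged sketch of how such a C-index can be computed with lifelines, continuing the earlier sketch (df, cph, and covariates are the hypothetical objects defined there):

```python
from lifelines.utils import concordance_index

# concordance_index expects scores where larger = longer survival, so we
# negate the Cox partial hazard (a higher hazard implies shorter survival)
risk = cph.predict_partial_hazard(df[covariates])
c_index = concordance_index(df["survival_months"], -risk, df["css_event"])
print(f"C-index: {c_index:.3f}")
```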
Establishment of the risk classification system
In addition, X-tile software was used to determine the optimal cutoff values and establish a risk classification system (Figure 4). All patients were classified as low risk (5852/11088, 52.78%; score: 0-164), medium risk (3487/11088, 31.45%; score: 165-247) or high risk (1749/11088, 15.77%; score: 248-524). In theory, the total score ranges from 0 to 524. K-M curves showed that the risk grading system had good stratification and discrimination ability for the CSS of the different groups (Table 4, Figure 5).
DISCUSSION
The prognosis of mCRC patients is significantly worse than that of non-mCRC patients. mCRC mortality varies widely from patient to patient, suggesting the importance and necessity of reclassifying the exact risk level based on the AJCC staging system [12][13][14]. However, due to the limitations of the included factors, the existing prediction models lack individualization and comprehensive evaluation, and the sample sizes of most studies [15][16][17] are small, which also limits their universal applicability. In this study, we developed a new CSS predictive nomogram based on synchronous mCRC data from a large population cohort.
We identified predictors of CSS that were consistent with previous studies, including race, age at diagnosis, primary site, grade, N stage, CEA status, liver metastasis, lung metastasis, bone metastasis, brain metastasis, surgery, and chemotherapy [18]. For patients with mCRC, both surgery and chemotherapy are important for improving outcomes, as recommended by the United States National Comprehensive Cancer Network (NCCN) guidelines and the European Society for Medical Oncology guidelines [19]. Modest et al. suggested that the response rate of first-line systemic treatment is 38% to 65%, and the disease control rate is 81% to 90% [20]. Compared with earlier studies, this nomogram is the first to include chemotherapy status as a risk predictor for predicting CSS. The highest score of mCRC patients who did not receive chemotherapy was 100, which was greater than that of mCRC patients who did not receive surgery, indicating that the regression coefficient of the effect of chemotherapy on CSS was greater than that of surgery [21][22][23]. In addition, patients who refused chemotherapy and those who did not receive it were not separately recorded in the SEER database, a confounding factor in this study, which may reduce the actual regression coefficient of not receiving chemotherapy [24][25][26]. According to previous studies [27][28][29], chemotherapy is positively associated with survival benefits in patients with mCRC, and our study further highlights the unique advantages of chemotherapy in synchronous mCRC.
In addition to chemotherapy, our study revealed that primary tumor resection is also important for prognosis. Several studies [30][31][32] support this idea in mCRC, especially in patients with liver or lung metastases. The NCCN guidelines recommend that patients with mCRC be evaluated by a multidisciplinary team and, if possible, that the metastatic disease and the primary tumor be removed. Primary tumor resection remains controversial for mCRC patients whose metastases cannot be resected. Studies [33][34][35] have shown that primary tumor resection significantly extends overall survival (OS) in mCRC patients with unresectable metastases (median OS: 13.8 months vs 6.3 months, P = 0.0001). Another study [36] also supported the idea that primary tumor removal results in better survival for mCRC patients with unresectable metastases (2-year CSS: 50.2% vs 28.1%, P < 0.001). In conclusion, primary tumor resection has a positive impact on patient survival. As mentioned above, the liver and lungs are the most common sites of CRC metastasis, whereas bone and brain metastases are very rare. In addition, the prognostic significance of the different metastatic organs is inconsistent. The occurrence of brain metastases is often associated with the worst survival: studies [37][38][39] have reported that the median survival of CRC patients with brain metastases is 3 to 6 months, that of patients with bone metastases is 5 to 7 months, that of patients with liver metastases is 22.8 months, and that of patients with lung metastases is 36.2 to 49 months. Another study confirmed this idea, with brain metastasis having the largest impact coefficient among the four metastatic organs of CRC. Our study showed that the regression coefficients of CSS, in descending order, were brain metastasis, bone metastasis, liver metastasis, and lung metastasis. Due to the presence of the blood-brain barrier (BBB) and the blood-cerebrospinal fluid barrier, the brain is often the last organ to which CRC metastasizes, while other extracranial metastases occur in areas such as the liver and lungs. The BBB and the blood-cerebrospinal fluid barrier also hinder chemotherapy efficacy, which may be another reason for the poor prognosis.
On the basis of the multivariate regression analysis, we developed a new nomogram that integrates multiple predictors and helps accurately predict the survival of patients with synchronous mCRC. One study constructed a nomogram for predicting the survival of CRC patients. Another study also developed an OS nomogram model for mCRC with strong consistency. Compared with existing predictive models, our nomogram integrates more predictive variables, such as chemotherapy and surgery, to provide comprehensive predictions of CSS. In addition, through X-tile software, we established a risk classification system with optimal cutoff values that is more accurate and reliable. This approach helps to assess the risk level of patients with mCRC, allowing for individualized treatment and an accurate prognosis.
In addition, we provide estimated points for each important prognostic factor to improve clinical applicability [40].
There are several limitations to our study. First, this study is a retrospective analysis with inherent selection bias. Furthermore, the SEER database does not contain detailed information on chemotherapy regimens or targeted therapies, which hinders further subgroup analysis. Finally, the nomogram predictions were validated only with SEER data, and external validation with independent real-world data is lacking.
Figure 1. Nomogram for predicting the cancer-specific survival of patients with metastatic colorectal cancer. CEA: Carcinoembryonic antigen; CSS: Cancer-specific survival.
Figure 2. Calibration curves based on cancer-specific survival for metastatic colorectal cancer patients. A-C: Calibration curves based on 1-, 2-, and 3-year cancer-specific survival (CSS) of the training cohort; D-F: Calibration curves based on 1-, 2-, and 3-year CSS of the validation cohort.
Figure 3. Clinical decision curves of the nomogram model for predicting cancer-specific survival in metastatic colorectal cancer patients. A-C: Clinical decision curves based on 1-, 2-, and 3-year cancer-specific survival (CSS) in the training cohort; D-F: Clinical decision curves based on 1-, 2-, and 3-year CSS in the validation cohort.
Figure 4. X-tile software was used to calculate the optimal cutoff values and establish a risk classification system. A and B: The optimal cutoff values of the predicted total scores, including the low-risk group (score: 0-164), medium-risk group (score: 165-247) and high-risk group (score: 248-480); C: Kaplan-Meier curves for different risk levels according to the cancer-specific survival of the training cohort.
Hybrid Fusion Based Interpretable Multimodal Emotion Recognition with Insufficient Labelled Data
This paper proposes a multimodal emotion recognition system, VIsual Spoken Textual Additive Net (VISTA Net), to classify the emotions reflected by a multimodal input containing image, speech, and text into discrete classes. A new interpretability technique, K-Average Additive exPlanation (KAAP), has also been developed to identify the important visual, spoken, and textual features leading to the prediction of a particular emotion class. VISTA Net fuses the information from the image, speech, and text modalities using a hybrid of early and late fusion. It automatically adjusts the weights of their intermediate outputs while computing the weighted average, without human intervention. The KAAP technique computes the contribution of each modality and of the corresponding features toward predicting a particular emotion class. To mitigate the insufficiency of multimodal emotion datasets labeled with discrete emotion classes, we have constructed a large-scale IIT-R MMEmoRec dataset consisting of real-life images, corresponding speech and text, and emotion labels ('angry,' 'happy,' 'hate,' and 'sad'). VISTA Net has achieved 95.99% emotion recognition accuracy when considering the image, speech, and text modalities, which is better than the performance when considering inputs of any one or two modalities. CCS Concepts: • Information systems → Sentiment analysis; Multimedia and multimodal retrieval; • General and reference → Cross-computing tools and techniques; • Computing methodologies → Supervised learning by classification.
INTRODUCTION
Multimedia data has grown rapidly in the last few years, leading multimodal emotion analysis to emerge as an important research trend [2]. Research in this direction aims to help machines become empathetic, as emotion analysis is used in various applications such as cognitive psychology, automated identification, intelligent devices, and human-machine interfaces [49]. Humans portray different emotions through various modalities such as images, speech, and text [8]. Utilizing the multimodal information from them can increase the performance of emotion recognition [75].
(1) Bimodal emotion recognition has been performed for each combination of the speech, text, and image modalities using a hybrid of intermediate and late fusion. The SER, TER, and IER models proposed in the previous chapters have been utilized, and the modality weights for fusion have been computed using grid search.
(2) A hybrid-fusion-based novel interpretable multimodal emotion recognition system, VISTA Net, has been proposed to classify an input containing an image, corresponding speech, and text into discrete emotion classes.
(3) A novel interpretability technique, KAAP, has been developed to identify each modality's importance and the important image, speech, and text features contributing the most to recognizing emotions.
(4) A large-scale dataset, the 'IIT-R MMEmoRec dataset,' containing images, speech utterances, text transcripts, and emotion labels, has been constructed.
The rest of this paper is organized as follows: the related works are reviewed in Section 2. The proposed dataset, system, and interpretability technique are described in Section 3, along with the dataset construction procedure. Sections 4 and 5 discuss the experiments and results, and the paper is concluded in Section 6.
Unimodal emotion recognition
2.1.1 Speech emotion recognition. The traditional feature-based SER systems extract audio features such as cepstrum coefficients, voice tone, prosody, and pitch and use them for SER [13]. For instance, Rong et al. [55] worked on extracting the most important audio features from speech samples, whereas Lee et al. [30] used the extracted audio features to identify negative and positive emotions in speech samples. The feature-based SER systems depend on the polarity of the emotional features. The features of high-key classes (happiness and anger) have similar properties among themselves, and they are very different from those of the low-key classes (sadness and despair) [64]. In the context of machine learning-based SER, Support Vector Machine (SVM) based classifiers and Hidden Markov Model (HMM) based statistical techniques have also been explored [23,34]. However, they require manual crafting of acoustic features, and HMM-based models cannot always reliably estimate the parameters of global speech features [13]. Hence, it is challenging to develop an end-to-end SER system using them. The deep learning-based approaches using spectrogram features and attention mechanisms have shown state-of-the-art SER results [10]. In this context, Xu et al. [69] generated multiple attention maps, fused them, and used them for SER; they observed increased performance compared to non-fusion-based approaches. In another work, Mao et al. [41] used a Convolutional Neural Network (CNN) for spectrogram processing. On the other hand, Majumder et al. [38] performed Recurrent Neural Network (RNN) based SER; they determined speech embeddings and used them for speaker identification. In another work using attention maps, Mirsamadi et al. [43] implemented local attention to learn the emotion features automatically.
2.1.2 Text emotion recognition.
Deep learning-based approaches have shown state-of-the-art TER results. With the evolution of deep learning, it has become possible to convert text into vectors and use Deep Neural Networks (DNNs) to process them. Emotion recognition in conversation can be useful for mining opinions from conversational data on platforms such as YouTube, Facebook, Reddit, Twitter, and others [51]. More examples of deep learning-based emotion analysis include personality detection from text using document modeling [37] and text-based emotion recognition using YouTube comments [19]. In another work, Huang et al. [22] used a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model for emotion recognition in text dialogs. Shrivastava et al. [60] applied sequence-based CNNs, whereas Batbaatar et al. [3] utilized semantic and emotional information to train a deep TER model. Deep models focus on learning parameters such as features, sequence-related information, and contextual information from the text input.
2.1.3 Image emotion recognition. One of the most informative ways for machines to perceive emotions is through facial expressions in images and videos. Identifying human emotions from facial expressions is a relatively more saturated research area than emotion recognition in other modalities. Various techniques such as face localization, face registration, micro-expression analysis, landmark point tracking, shape feature analysis, eye gaze prediction, and face segmentation and detection have been developed for facial emotion recognition [9,31]. Image Emotion Recognition (IER) research is also an active domain. For instance, Kim et al. [?] built a deep feed-forward neural network to combine different levels of emotion features obtained by using the semantic information of the image. In another work, Rao et al. [53] prepared hierarchical notations for emotion recognition in the visual domain. Traditional feature-based IER analysis with low-level (shape, color, and edge) and mid-level (composition and optical balance) image features [20,24,36] used the semantic content of the images for emotion analysis. However, all the low- and mid-level features are difficult to accommodate with the handcrafted feature extraction techniques used in traditional and machine learning-based methods. On the other hand, deep learning-based IER approaches are capable of extracting high-level visual features, but they struggle to extract low- and mid-level features. Moreover, they require well-labeled large-scale datasets for training [53].
Multimodal emotion recognition
Different emotion representation methods are used in various modalities [18,71]. For example, emotion representation in faces and gestures utilizes feature tracking, sensitivity analysis, and heat maps. In real-life scenarios, emotions are portrayed through various modalities such as vision, speech, and text. Analysis of a single modality may not be able to capture the emotional context completely [75]. That fact has paved the way for researchers' attention to turn towards multimodal emotion analysis [21]. Moreover, the various modalities have different statistical properties associated with them. To correctly recognize complex human emotions portrayed through them, it is very important to consider their inter-relationships [49]. As discussed earlier in this chapter, various attempts have been made at emotion analysis using the visual, spoken, and textual modalities individually. However, multimodal emotion analysis considering the inter-relationships of these modalities is still an underexplored space [46]. The existing works in this direction are discussed as follows.
2.2.1 (Speech + text) emotion recognition.
In the context of recognizing emotions from speech utterances and corresponding text transcripts, Chuang et al. [7] worked on analyzing the overlapping emotional information in speech and text. A simple fusion of spoken and textual information has also been used for emotion recognition in some more works. For instance, Makuuchi et al. [39] performed separate acoustic and textual analyses and determined the emotional context based on their collective result. In another work, Yoon et al. [72] extracted the audio and text information using dual RNNs and then combined it to perform emotion recognition. On the other hand, some research attempts used textual information to improve the SER performance. For example, Tripathi et al. [65] performed emotion recognition on the Interactive EMOtional dyadic motion CAPture (IEMOCAP) dataset [5] using data from speech and text modalities and complementing it with hand movements and facial impressions data. In another work, Siriwardhana et al. [63] fine-tuned transformers-based models to improve the performance of multimodal speech emotion recognition.
2.2.2 (Text + image) emotion recognition. Several attempts have been made to recognize the emotional content portrayed in the visual and textual modalities. In this direction, Kahou et al. [25] developed a framework, EmoNets, for emotion recognition in video and text. Fortin et al. [46] implemented a multi-task architecture-based emotion recognition approach to perform predictions with one or two missing modalities by using a classifier for each combination of image, text, and tags. In another work, Xu et al. [70] modeled the interplay of visual and textual content for sentiment recognition using a co-memory-based network.
2.2.3 (Speech + image) emotion recognition.
Multimodal emotion analysis from audio-visual data has also started getting researchers' attention lately [21]. For instance, Aytar et al. [1] proposed SoundNet, which extracts emotional information through self-supervised learning of sound representations. In another work, Guanghui et al. [17] implemented a feature correlation analysis algorithm for multimodal emotion recognition: they extracted speech and visual features using two-dimensional and three-dimensional CNNs, fused the features, and used an SVM for emotion classification on the fused features.
2.2.4 (Speech + text + image) emotion recognition.
There have been several attempts at emotion recognition in more than two modalities simultaneously. For example, Poria [50] used information fusion techniques to combine the context from the audio, visual, and textual modalities for sentiment analysis; they found the fusion of the textual and spoken descriptions of the emotional information to aid the emotion analysis of the visual modality. In another work, Tzirakis et al. [66] extracted features from the speech, image, and text modalities, analyzed the correlation among them, and trained an end-to-end emotion recognition system using them jointly.
Explainable and interpretable emotion analysis
Explainability refers to the ability to describe an algorithm's mechanism that led to a particular output. In contrast, interpretability concerns understanding the context of a model's output, analyzing its functional design, and relating the design to the output [4,28]. Deep learning-based techniques act like a black box. The challenges involved in explaining and interpreting their internal workings have given rise to a new research area known as explainable AI [35]. Among the recent research carried out in this direction, Ribeiro et al. [54] pointed out the value of interpreting the internal workings of deep learning-based classifiers. They also designed a framework that computes the importance of each input towards a particular output and interprets a classifier's predictions. In other research, a method to determine the part of the input leading to a particular output was developed by Fazi et al. [14]. Research has also been done on tracing every neuron's contribution and understanding the output part by part [59]. The existing interpretability techniques can be divided into the following categories.
Attribution Interpretability techniques.
In these methods, attribution values denoting the relevance of inputs with respect to outputs are determined. A popular attribution value is the 'Shapley value' [58]. The attribution techniques are frequently used for local interpretability, which explains the impact of one instance instead of the overall model. Shapley values have been used by Lundberg et al. [35], who implemented an interpretability framework, SHAP (SHapley Additive exPlanations), which determines each feature's contribution by analyzing its Shapley values [40]. The exact computation of Shapley values is very expensive because 2^M models must be trained for a model with M features [6,35]. Different approximations have been used to speed up the computation of Shapley values, for instance, Shapley value sampling [6] and KernelSHAP [35]. The attribution techniques are further classified into perturbation- and backpropagation-based approaches, which are explained further below.
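A minimal sketch of the sampling approximation in plain Python; f is any black-box scoring function and baseline supplies values for 'absent' features (both are illustrative assumptions, not a specific library API):

```python
import random

def shapley_sampling(f, x, baseline, n_samples=200):
    """Monte Carlo approximation of Shapley values: average the marginal
    contribution of each feature over random feature orderings, instead
    of enumerating all 2^M coalitions."""
    m = len(x)
    phi = [0.0] * m
    for _ in range(n_samples):
        order = random.sample(range(m), m)  # a random permutation of features
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]          # add feature i to the coalition
            cur = f(z)
            phi[i] += cur - prev  # its marginal contribution in this ordering
            prev = cur
    return [p / n_samples for p in phi]
```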
Perturbation Interpretability techniques.
These techniques make a small change in the input and observe its impact. The insights thus obtained are used to interpret the model's working [15]. The most frequently used perturbation technique is Local Interpretable Model-agnostic Explanations (LIME) [54], which perturbs the given instance and synthesizes new data. The new data are weighted according to the closeness of each new instance to the original instance. A simple interpretable model is then trained on the perturbed data, labeled with the original model's predictions, and its weights denote the approximate contribution of each feature. LIME can be used with any machine learning model, though it is computationally expensive as it involves the generation of new data.
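A toy perturbation-based attribution in the same spirit, though simpler than LIME (single-feature occlusion instead of fitting a surrogate model; model and baseline are illustrative placeholders):

```python
import numpy as np

def occlusion_importance(model, x, baseline=0.0):
    """Replace one feature at a time with a baseline value and record
    how much the model's output drops; a larger drop means the feature
    was more important for this instance."""
    x = np.asarray(x, dtype=float)
    base_score = model(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        z = x.copy()
        z[i] = baseline
        scores[i] = base_score - model(z)
    return scores
```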
Backpropagation Interpretability techniques.
The backpropagation-based interpretability techniques calculate the attributions by backpropagating through the network, possibly multiple times. A popular backpropagation-based technique is the 'saliency map' [61], which uses the absolute gradient of the predicted label's output with respect to each input feature as the attribution. Another popular technique is the Gradient-weighted Class Activation Map (Grad-CAM), which assigns a score to each feature and computes an activation map using this score [57]. Grad-CAM backpropagates only up to the last convolutional layer instead of all the way back to the image. It generates a map that highlights the important features of the input image; however, if the input image is slightly changed, it generates an entirely different map [27].
As suggested by the above survey, LIME, SHAP, and Grad-CAM are the most frequently used interpretability techniques for machine learning models. LIME involves a very high computational cost, whereas Grad-CAM is incapable of withstanding small changes in the input image. The SHAP technique does not suffer from the aforementioned limitations. Furthermore, DNN interpretability has been applied to the visual modality, but it has not been fully explored for the speech and text modalities or for multimodal analysis. This inspired us to develop an interpretability technique for multimodal emotion recognition that explains the importance of each modality and identifies the important features of each modality that lead to the prediction of a particular class.

Table 1 shows some samples from the IIT-R MMEmoRec dataset, while the process to construct it is elaborated as follows. It contains generic (facial, human, non-human object) images (as opposed to only facial images/videos in other known trimodal emotion datasets, IEMOCAP [5] and MOSEI [74]), speech utterances, text transcripts, an emotion label ('angry,' 'happy,' 'hate,' and 'sad'), the probability of each emotion class given by each modality, and the probability of the final emotion class. The IIT-R MMEmoRec dataset has been constructed on top of the 'Balanced Twitter for Sentiment Analysis' (B-T4SA) dataset [67]. The B-T4SA dataset contains images, text, and sentiment ('positive,' 'negative,' 'neutral') labels, whereas the IIT-R MMEmoRec dataset has been compiled to have discrete emotion labels for the image, text, and speech modalities. The following steps have been followed to construct the IIT-R MMEmoRec dataset.

Table 1. A few samples from the IIT-R MMEmoRec dataset. Here, 'Img_Prob,' 'Sp_Prob,' 'Txt_Prob,' and 'Final_Prob' are the image, speech, text and final prediction probabilities, whereas the angry, happy, hate and sad emotion labels are denoted as 0, 1, 2 & 3, respectively.
Data compilation
• The text from the B-T4SA dataset is pre-processed by removing links, special characters, and tags, and then the cleaned text is converted to speech using the pre-trained state-of-the-art text-to-speech (TTS) model DeepSpeech3 [47]. The rationale for using a TTS model is governed by recent studies showing that TTS models generate high-quality speech signals that can be used as a valid approximation of natural speech signals [42,44,47].
• The image, speech, and text components are passed through pre-trained IER, SER, and TER models trained on the Flickr & Instagram (FI) [73], IEMOCAP [5], and ISEAR [56] datasets, respectively, and the prediction probabilities of each emotion class are obtained for each modality.
• The prediction probabilities are then averaged to obtain the ground-truth emotion of each data sample (see the sketch after this list). The averaging is done to ensure that the chosen ground truth is the one supported by the majority of the modalities. Fig. 1 shows an example of emotion label determination: the 'happy' class has an average prediction probability of 0.500, compared to 0.233 for 'angry,' 0.133 for 'hate,' and 0.133 for 'sad,' so the final emotion label for the sample is determined as 'happy.'
• The data are segregated according to class, and the samples having an average prediction probability of less than the threshold confidence value of 0.55 times the maximum probability for the corresponding class are discarded. The threshold confidence is determined in Section 4.3.2.
• The four emotion classes, 'angry,' 'happy,' 'hate,' and 'sad,' are common to the various datasets of different modalities considered in this work. The samples labeled as 'excitement' & 'disgust' have been re-labeled as 'happy' & 'hate' as per Plutchik's wheel of emotions [48]. The final dataset contains a total of 112,455 samples, with 53,317 labeled as 'angry,' 44,980 as 'happy,' and 10,327 & 3,831 as 'sad' and 'hate,' respectively, as described in Table 2.
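A minimal sketch of this label-determination step; the per-modality probability vectors below are hypothetical values chosen only to reproduce the Fig. 1 averages:

```python
import numpy as np

CLASSES = ["angry", "happy", "hate", "sad"]

def fuse_label(img_p, sp_p, txt_p):
    """Average the per-modality class probabilities and take the argmax,
    mirroring the ground-truth determination step."""
    avg = (np.asarray(img_p) + np.asarray(sp_p) + np.asarray(txt_p)) / 3.0
    return CLASSES[int(avg.argmax())], float(avg.max())

# Hypothetical modality outputs whose averages match the Fig. 1 example:
# 'happy' wins with average probability 0.500
label, conf = fuse_label([0.2, 0.5, 0.1, 0.2],
                         [0.3, 0.5, 0.1, 0.1],
                         [0.2, 0.5, 0.2, 0.1])
print(label, conf)  # happy 0.5
```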
3.1.1 Determining threshold confidence value for dataset construction. The original B-T4SA dataset contained 4.7M data samples labeled as 'positive,' 'negative,' and 'neutral.' While constructing the IIT-R MMEmoRec dataset with discrete emotion labels, i.e., 'angry,' 'happy,' 'hate,' and 'sad,' it was essential to retain only the samples having high confidence in the associated emotion label. After passing the image, speech, and text components of the inputs to the respective emotion recognition models as discussed in Section 3.2, we computed, for each data sample, the ratio of its average prediction probability to the maximum for its class; this ratio served as the confidence of the data sample in its class. To determine the appropriate threshold, we plotted possible threshold values against the ratio of each class present (the number of samples of each class divided by the total number of samples), as shown in Fig. 2.
The higher the threshold, the higher the confidence and the better the quality of the data. However, a higher threshold value also leads to two issues: i) a reduction in the size of the dataset, and ii) a disruption of the distribution of emotion classes compared to its original distribution. As seen in Fig. 2, the distribution of the classes at a threshold approaching 1 is very different from that obtained when all samples are retained at a threshold of 0. An appropriate threshold value needs to be chosen that provides a good trade-off between high confidence and an appropriate size and distribution of the dataset. Up to a threshold value of 0.33, the distribution is almost the same as the original, but this confidence is too low to be acceptable. The next suitable region lies above 0.5 but below 0.6; between these two values, the distribution of the various classes is almost unchanged, and the confidence is above 0.5, which is acceptable. Hence, the average value of 0.55 is chosen as the threshold confidence value.
3.1.2 Human evaluation. The MMEmoRec dataset has been evaluated by 8 human evaluators. Two human readers (one male and one female) spoke out and recorded the text components of the data samples. The evaluators listened to the machine-synthesized speech against the human speech recorded by the readers and scored the contextual similarity between them on a scale of 0 to 100. The evaluators also assessed whether the speech, image, and text components of the data samples agree with the annotated emotion, both individually and in combination. The samples were picked randomly, and the average of the evaluators' scores is reported in Table 3. In the table, one column gives the percentage of evaluators reporting the synthetic speech (ss) to be similar to the human speech (hs); two further columns give the percentages of synthetic and human speech components portraying the annotated emotion; two more give the agreement of the annotated emotion class with the image and text components; and the final two report the fraction of samples for which all three modalities agree with the annotated emotion class when considering synthetic and human speech, respectively.
The output of the two readers is referred to as human synthesized speech. 60.72% of evaluators found the synthetic speech to be contextually similar to the human synthesized speech. 74.49% of synthetic speech samples and 78.91% of human synthesized speech samples were found to portray the annotated emotion labels. Further, 69.26% of images and 78.81% of text components of the data samples correspond to the annotated emotion labels. Moreover, the evaluators reported that 72.99% of the samples were in line with the determined emotion label when considering machine synthesized speech along with the corresponding text and image, which is comparable to 76.74% when considering human synthesized speech along with the corresponding text and image.
VISTA Net
The architecture of the proposed system, VISTA Net, is shown in Fig. 3. The three modalities are fed into two types of networks: a pre-trained network and a simpler network. The intuition behind this approach is to build a fully automated multimodal emotion classifier by including the various modalities' information in all possible combinations and learning their weights during training without any human intervention. The proposed system thus contains a pre-trained and a simpler network for each of the image, speech, and text modalities. The input speech is converted to a log-mel spectrogram and reshaped to make it compatible with VGG16 before being fed into the network; it is then passed through the pre-trained and simpler speech networks, which have the same architectures as the corresponding image networks.
The text input is similarly passed through a pre-trained network containing BERT [12] and a simpler network consisting of an embedding layer and an LSTM layer with 64 units. Both text networks are followed by 512-dimensional dense layers. In the intermediate fusion, all pairs of pre-trained and simpler networks from different modalities are created by passing them through the weighted-addition layer that we have defined. This gives six such combinations, each passed through two dense layers of 1024 neurons, yielding a classification based on each pair. Eq. 1 shows all the possible pairs formed from the combination of pre-trained and simpler networks such that the two networks do not belong to the same modality.
Here, the six outputs of Eq. 1 are the classification outputs for the various pairs of pre-trained and simpler networks. The weighted-addition layer ensures that the weights of every weighted addition are learned during training via back-propagation, without any human intervention. Each weight in this layer is randomly initialized and then passed through a softmax layer, giving positive values that are used as the final weights and learned during training.
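The paper does not provide an implementation of the weighted-addition layer; the following is a minimal sketch of the described mechanism (learnable weights, randomly initialized, normalized by a softmax so they stay positive), written in PyTorch with illustrative names and a 512-dimensional branch output assumed.

```python
import torch
import torch.nn as nn

class WeightedAddition(nn.Module):
    """Learnable weighted addition of equally-shaped branch outputs.

    Raw weights are learned by back-propagation and passed through a
    softmax, so the effective mixing weights are positive and sum to one.
    """
    def __init__(self, num_inputs: int):
        super().__init__()
        self.raw_weights = nn.Parameter(torch.randn(num_inputs))  # random init

    def forward(self, inputs):
        # inputs: list of tensors with identical shape, e.g. (batch, 512)
        w = torch.softmax(self.raw_weights, dim=0)
        stacked = torch.stack(inputs, dim=0)            # (num_inputs, batch, 512)
        return (w.view(-1, 1, 1) * stacked).sum(dim=0)  # (batch, 512)

# Example: fuse a pre-trained image branch with a simpler text branch
fuse = WeightedAddition(num_inputs=2)
img_feat, txt_feat = torch.randn(8, 512), torch.randn(8, 512)
pair_feat = fuse([img_feat, txt_feat])                  # (8, 512)
```

The same module could in principle be reused for the late-fusion stage described next, by feeding it the six intermediate classification outputs of Eq. 1.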
Late Fusion Phase.
In this phase, the information from all possible pairs of modalities is combined in a hybrid manner. The intermediate classification outputs obtained from Eq. 1 are passed through another weighted-addition layer, which combines these outputs dynamically and gives the final output as depicted in Eq. 2. This output is passed through a dense layer with dimensions equal to the number of emotion classes, i.e., four.
Here, the final output in Eq. 2 is the weighted combination of the six intermediate classification outputs of Eq. 1.
KAAP
This Section proposes a novel multimodal interpretability technique, K-Average Additive exPlanation (KAAP), depicted in Fig. 4. It computes the importance of each modality and of its features while predicting a particular emotion class. The existing interpretability techniques do not apply to speech and multimodal emotion recognition. Moreover, the most frequently used and accepted interpretability technique for images and text is SHAP [35], which is an approximation of Shapley values [58]; computing such values exactly requires time exponential in the number of features, whereas KAAP requires time that grows exponentially only with the number of feature groups k, a given hyper-parameter no larger than the number of features. Moreover, KAAP applies to multimodal emotion analysis as well as to a single modality or any combination of two modalities.
The algorithm inputs are defined as follows: the DNN model, the image data, the text data, the speech spectrogram data, and the type of data whose perturbed prediction probability is required. Consider an input with features {x_1, x_2, . . . , x_n}. The 'marginal contribution' of an edge connecting two nodes is defined as the difference between the prediction probabilities obtained using their respective feature sets. For a given predicted label, the marginal contribution of a feature for the edge from Node 1 to Node 2 is calculated using Eq. 3, where the probability at Node 1 is calculated by keeping only that feature in the input and perturbing all other features to zero.
To calculate the overall importance of a feature, we compute the weighted average of all of its marginal contributions, as given by Eq. 4.
Here, w12 and w34 are the weights of the weighted addition. Two conditions are imposed on the weights: i) the weights sum to one, which normalizes them; and ii) the weight w34 must be (n − 1) times the weight w12. The second condition reflects the fact that the first marginal contribution measures the effect of adding the feature to an empty set of features, while the second measures the effect of adding it to a set already containing (n − 1) features. This results in Eq. 5.
The values of w12 and w34, shown in Eq. 6, are computed using Eq. 5.
The KP values shown in Eq. 7 are computed using Eq. 4 and Eq. 6.
3.3.2 Calculating KAAP values. This Section computes the KAAP values and uses them to determine the importance of each modality and of its features. The information of the image, text, and speech modalities is in the same data format, i.e., continuous. For an image, a single pixel cannot define an object that leads to a particular emotion, but a group of pixels can. For speech, the spectrogram at a single instant of time and frequency cannot define anything on its own, but a time interval can. Likewise, for text, a single letter may not define an emotion, but a word can. KAAP values are defined based on this observation: they are computed from the KP values of groups of features. First, the input of size n is divided into k parts, where k is a hyperparameter decided through an ablation study in Section 4.3. These parts correspond to the feature groups of the input. Then, for each feature group, KP values are computed for the given value of k using Eq. 6; they represent how a group of features performs compared to all remaining groups. However, these groups can vary in size: different values of k lead to different groups and thus to different KP values, which affects the importance attributed to the original features. To deal with this issue, the weighted average of the KP values is taken over values from 2 up to k, where the weights are equal to the number of features in each group, as given by Eq. 8. Note that the value 1 is ignored here, as treating the whole input as a single feature would not be meaningful.
For an input image and speech spectrogram, both of width 128 and height 128, the KP values for a given k are calculated by dividing the input into k parts along both axes. As both the image and the speech spectrogram are defined by a matrix, this gives k × k feature groups, and the equation for calculating the KAAP values for these two inputs is given by Eq. 9. It yields a matrix giving the importance of each pixel for a given image or speech input. For the image, this matrix directly represents the importance map. For the speech input, the values are averaged along the frequency axis to reduce the KAAP value matrix to the time axis, hence giving the importance of the speech at a given time.
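The sketch below illustrates the grouped-perturbation idea for an image or spectrogram: the input is split into k × k blocks, each block is perturbed to zero to measure the drop in the predicted probability of the target class, and the resulting maps are averaged over several values of k with weights proportional to group size. It is a simplified proxy that does not reproduce the exact marginal-contribution weighting of Eqs. 3-7, and `predict_fn` is an assumed callable returning class probabilities.

```python
import numpy as np

def group_importance_map(predict_fn, image, label, k):
    """Pixel-level map where each k x k block carries the drop in the
    predicted probability of `label` when that block is set to zero."""
    h, w = image.shape[:2]
    rows = np.linspace(0, h, k + 1, dtype=int)
    cols = np.linspace(0, w, k + 1, dtype=int)
    base = predict_fn(image)[label]
    pixel_map = np.zeros((h, w))
    for i in range(k):
        for j in range(k):
            perturbed = image.copy()
            perturbed[rows[i]:rows[i + 1], cols[j]:cols[j + 1]] = 0.0
            drop = base - predict_fn(perturbed)[label]
            pixel_map[rows[i]:rows[i + 1], cols[j]:cols[j + 1]] = drop
    return pixel_map

def kaap_like_map(predict_fn, image, label, k_max=7):
    """Weighted average of the per-k maps for k = 2..k_max, with weights
    proportional to the (average) number of features per group."""
    h, w = image.shape[:2]
    acc, norm = np.zeros((h, w)), 0.0
    for k in range(2, k_max + 1):
        weight = (h * w) / (k * k)
        acc += weight * group_importance_map(predict_fn, image, label, k)
        norm += weight
    return acc / norm
```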
For input text, the division is done such that each word is considered a feature, as an emotion can only be defined by a word, not a single letter, as discussed above. The text is then divided into k parts, and as the text can be represented by a linear array, the KAAP values are calculated using Eq. 8. The values of k used for the image, speech, and text modalities have been determined as 7, 7, and 5, respectively, in Section 4.3.2. Furthermore, the importance of the visual, spoken, and textual modalities is computed by treating image, speech, and text as three distinct features and calculating each modality's KAAP value for k = 3. While finding the importance of the features of a particular modality, all the other modalities are perturbed to zero. The KAAP technique is depicted in Algorithm 3, which uses Algorithm 2 to calculate the KAAP values for each data instance and Algorithm 1 for probability prediction.
Experimental setup
The network training for the proposed system has been carried out on an Nvidia Quadro P5000 GPU, whereas the testing and evaluation have been done on an Intel(R) Core(TM) i7-8700 Ubuntu machine with a 64-bit OS, a 3.70 GHz processor, and 16 GB RAM.
Training strategy and hyperparameter setting
The model training has been performed using a batch size of 64, a train-test split of 80-20, the Adam optimizer with a learning rate of 1×10⁻⁴, the ReLU activation function, and a ReduceLROnPlateau learning rate scheduler with a patience value of 2. The baseline and proposed models converged in terms of validation loss in 10 to 15 epochs. As a safe upper bound, the models have been trained for 50 epochs with EarlyStopping [52] and a patience value of 5. The loss function used is the average of the categorical focal loss [32] and the categorical cross-entropy loss. Accuracy, macro F1 [45], and Cohen's Kappa [68] have been analyzed for model evaluation.
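As a reference for the loss described above, here is a minimal sketch of an averaged focal/cross-entropy objective in PyTorch; the focusing parameter gamma is assumed, since the text does not report its value.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, gamma=2.0):
    """Average of categorical cross-entropy and categorical focal loss."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per sample
    p_t = torch.exp(-ce)                                     # prob. of the true class
    focal = (1.0 - p_t) ** gamma * ce
    return 0.5 * (ce.mean() + focal.mean())

# Example with a batch of 8 samples and 4 emotion classes
loss = combined_loss(torch.randn(8, 4), torch.randint(0, 4, (8,)))
```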
Ablation studies and models
The ablation studies have been performed to determine the threshold confidence value for dataset construction, the appropriate network configuration for VISTA Net, and suitable k values for KAAP. 4.3.1 Ablation study 1: Determining baselines and the proposed system's architecture. To begin with, emotion recognition has been performed for a single modality at a time, i.e., separate IER, SER, and TER using pre-trained VGG models [62] for image and speech and BERT [12] for text. The performance has been evaluated in terms of Accuracy, the Cohen's Kappa metric (CK), F1 score, Precision, and Recall, and is summarized in Table 4. The CK metric measures whether the distribution of the predicted classes is in line with the ground truth. Next, we moved on to combinations of two modalities. The two chosen modalities are fed into their respective pre-trained models and then passed through a dense layer of 512 neurons. The information from the two modalities is then combined using the weighted-addition layer defined in Section 3.2.1, and this output is passed through three dense layers of 1024, 1024, and 4 neurons, which classify the emotion. Image + text turns out to be the best combination, beating the remaining two combinations in both Accuracy and CK values.
Finally, the information from all three modalities is combined: each modality is fed into its respective pre-trained model and then passed through a dense layer of size 512, followed by a weighted-addition layer; the output of this layer is passed through three dense layers as in the two-modality combinations. Combining all three modalities performed better than the remaining models on all evaluation metrics. As observed during the experiments above, combining the information from complementary modalities leads to better emotion recognition performance. Hence, the baselines and the proposed model have been formulated to include all three modalities and various information fusion mechanisms, as described in Section 4.4.
4.3.2 Ablation study 2: Determining k values for KAAP. An in-depth ablation study has been conducted to decide the value of k used in Section 3.3.2. The dice coefficient [11] is used to determine the best k values. It measures the similarity of two data samples; a value of 1 denotes that the two compared samples are completely similar, whereas a value of 0 denotes complete dissimilarity. For each modality, KAAP values are calculated at k ∈ {2, 3, . . . , 10}. The dice coefficient is calculated for each pair of adjacent k values; for example, at k = 3, the KAAP values at k = 2 and k = 3 are used to calculate the dice coefficient. The above procedure has been performed for all three modalities, and the results are visualized in Fig. 6, where the effect of increasing k can be observed. For image and speech, the value converges to 1 at k = 7, while for text, the optimal value of k is 5.
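A possible form of the dice computation used in this ablation is sketched below. The paper does not state how the continuous KAAP maps are binarised before comparison, so thresholding at the map mean is an assumption made purely for illustration.

```python
import numpy as np

def dice_coefficient(map_a, map_b, eps=1e-8):
    """Dice similarity between two importance maps of identical shape,
    binarised at their respective means (assumed binarisation rule)."""
    a = (map_a > map_a.mean()).astype(float)
    b = (map_b > map_b.mean()).astype(float)
    intersection = (a * b).sum()
    return (2.0 * intersection + eps) / (a.sum() + b.sum() + eps)

# Example: compare KAAP maps computed at adjacent k values
# dice_k3 = dice_coefficient(kaap_map_k2, kaap_map_k3)
```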
Baselines and proposed models
The 'Image + Speech + Text' configuration described in Section 4.3.1 is considered as baseline 1, whereas the further baseline models' architectures have been formulated by incorporating successive improvements in the information fusion mechanisms.
The baseline models are built on a common idea, described below. First, all three modalities are fed into their pre-trained and simpler networks for image, speech, and text, as described in Section 3.2, and are then passed through a dense layer of 512 neurons, resulting in 512-dimensional outputs, which are then combined using weighted addition to give three outputs. The following strategy is followed for combining them: each pre-trained network must be combined with a simpler network, and at least one combination must pair networks from different modalities, because if every modality is combined only with itself, no information is exchanged. Thus, six such configurations are possible, as described in Eq. 10.
Configuration (#1) is discarded as it does not satisfy the condition that at least one combination must pair different modalities. Configurations (#2), (#3), and (#6) are partially complete, as one of their three outputs combines the pre-trained and simpler networks from the same modality. On the other hand, configurations (#4) and (#5) are complete.
Using the above strategy has two disadvantages: i) only two out of the five such baselines are complete, while the others are partially complete; ii) different datasets have different requirements. For example, a particular multimodal dataset may have better image and speech components, while another dataset may have better-quality text components. To generalize to any dataset and scenario, an automated multimodal emotion recognition system, VISTA Net, has been proposed, which combines the outputs of baselines 2-6, excluding any self-combination, and takes the weighted average of all the remaining ones. Hence, it automatically decides the weight of each combination according to the requirements of the problem statement and the dataset. The baselines' and the proposed system's results are summarized in the following Section in Table 5.
RESULTS AND DISCUSSION
The emotion classification results have been discussed in this Section, along with their interpretation and a comparison of sentiment classification results with existing methods.
Quantitative results
VISTA Net has achieved an emotion recognition accuracy of 95.99%. Its class-wise accuracies are shown in Fig. 7, and its results, along with the results of the baselines, are shown in Table 5. In the speech waveform visualizations of Fig. 8, yellow and blue correspond to the most and least important features, respectively. As observed from Fig. 8, speech and text were the most contributing modalities for the prediction of the 'angry' and 'hate' classes, whereas the image and text modalities contributed equally to the determination of the 'happy' and 'sad' classes.
Results comparison
The emotion recognition results have been reported in Section 5.1. The IIT-R MMEmoRec dataset has been constructed from the B-T4SA dataset in this paper; hence, there are no existing emotion recognition results for it. However, sentiment classification (into 'neutral,' 'negative,' and 'positive' classes) results on the B-T4SA dataset are available in the literature, and these have been compared with VISTA Net's sentiment classification results in Table 6.
Table 6. Results comparison for sentiment classification on the B-T4SA dataset with existing approaches. Here, 'V,' 'S,' and 'T' denote the visual, spoken, and textual modalities.
Performance for missing modalities
In real-life scenarios, some of the data samples in multimodal data may be missing the information of one of the modalities. VISTA Net has been evaluated for such scenarios. We formulated four use-cases with the image, speech, text, or no modality missing, respectively, and divided the test dataset into randomly selected equal parts accordingly. The information of the missing modality was then set to null, and VISTA Net was evaluated for emotion recognition. Table 7 summarizes the observed results. As seen in Table 7, the emotion recognition performance with no modality missing (i.e., having the information from all three modalities) is in line with the results observed in Section 5.2. Further, missing the image modality caused the smallest drop in performance. Moreover, the information from the speech and text modalities combined resulted in an emotion classification accuracy of 82.59%, whereas including all the modalities resulted in 95.90% accuracy. These observations are in line with the observations in Section 4.3.1, where IER performance was lower than TER and SER performance.
Discussion
Various research tasks may require a particular modality's information more than the others; for example, text and visual information may be secondary for multimodal speech recognition. Likewise, a multimodal emotion dataset might contain better quality information for a particular modality than other modalities. In such cases, it would require human intervention to decide which modality is more important for the analysis. However, the VISTA Net is capable of deciding that automatically. It considers all possible combinations of various modalities' information and weighs them accordingly.
As the proposed MMEmoRec dataset contains the information of complementary modalities, it enables deep learning models to learn contextually related representations of the underlying emotions. The ground truth is obtained by applying unimodal models: the final label is obtained by averaging the probability of each emotion given by each model and is considered the ground truth of the dataset. If the same unimodal emotion recognition models are used for emotion recognition during dataset construction, there will be a slight bias in the final performance; however, there will be no bias when developing and using a newer multimodal emotion recognition model. Furthermore, the human evaluators assessed the IIT-R MMEmoRec dataset for the consistency of the determined emotion labels and the appropriateness of the speech component, which was synthesized via text-to-speech.
The quantitative and qualitative results (Fig. 7 & 8 and Table 5 & 6) have affirmed the importance of utilizing the information from complementary modalities. As observed from Fig. 8, different modalities have played a key role in determining the overall emotion portrayed by the input data sample. In some cases, the information for a particular modality may be missing from some of the data samples. The proposed system, VISTA Net, has been evaluated for such cases with missing modality information, and the observations are in accordance with the previously observed results and the insights gained during the ablation studies.
The proposed interpretability technique, KAAP, computes the importance of each modality and the importance of its respective features towards the prediction of a particular emotion class. The existing interpretability techniques such as SHAP and LIME are not applicable to the speech modality, whereas KAAP is applicable to the image, text, and speech modalities alike. The proposed technique is expected to pave the way for growth in multimedia emotion analysis. We also hope that the IIT-R MMEmoRec dataset will inspire further advancements in this context.
CONCLUSIONS AND FUTURE WORK
The proposed system, VISTA Net, performs emotion recognition by considering the information from the image, speech & text modalities. It combines the information from these modalities in a hybrid manner of intermediate and late fusion and determines their weights automatically. Including the image, speech & text modalities has resulted in better performance than including only one or two of these modalities. The proposed interpretability technique, KAAP, identifies each modality's contribution and its important features towards predicting a particular emotion class. The future research plan includes working on transforming emotional content from one modality to another. We will also work on controllable emotion generation, where the output contains the desired emotional tone.
Using verbal autopsy to track epidemic dynamics: the case of HIV-related mortality in South Africa
Background: Verbal autopsy (VA) has often been used for point estimates of cause-specific mortality, but seldom to characterize long-term changes in epidemic patterns. Monitoring emerging causes of death involves practitioners' developing perceptions of diseases and demands consistent methods and practices. Here we retrospectively analyze HIV-related mortality in South Africa, using physician and modeled interpretation.
Methods: Between 1992 and 2005, 94% of 6,153 deaths which occurred in the Agincourt subdistrict had VAs completed and coded by two physicians and the InterVA model. The physician causes of death were consolidated into a single consensus underlying cause per case, with an additional physician arbitrating where different diagnoses persisted. HIV-related mortality rates and proportions of deaths coded as HIV-related by individual physicians, physician consensus, and the InterVA model were compared over time.
Results: Approximately 20% of deaths were HIV-related, ranging from early low levels to tenfold-higher later population rates (2.5 per 1,000 person-years). Rates were higher among children under 5 years and adults 20 to 64 years. Adult mortality shifted to older ages as the epidemic progressed, with a noticeable number of HIV-related deaths in the over-65 year age group latterly. Early InterVA results suggested slightly higher initial HIV-related mortality than physician consensus found. Overall, physician consensus and InterVA results characterized the epidemic very similarly. Individual physicians showed marked interobserver variation, with consensus findings generally reflecting slightly lower proportions of HIV-related deaths. Aggregated findings for first versus second physician did not differ appreciably.
Conclusions: VA effectively detected a very significant epidemic of HIV-related mortality. Using either physicians or InterVA gave closely comparable findings regarding the epidemic. The consistency between two physician coders per case (from a pool of 14) suggests that double coding may be unnecessary, although the consensus rate of HIV-related mortality was approximately 8% lower than by individual physicians. Consistency within and between individual physicians, individual perceptions of epidemic dynamics, and the inherent consistency of models are important considerations here. The ability of the InterVA model to track a more than tenfold increase in HIV-related mortality over time suggests that finely tuned "local" versions of models for VA interpretation are not necessary.
Background
Verbal autopsy (VA) has become a widely established approach for characterizing cause of death patterns in settings where individual deaths are not routinely certified as to cause, with a variety of methods being used for both interview and interpretation phases [1]. Most often, VA has been applied for particular times, or over relatively short periods, to obtain point estimates of cause-specific mortality. However, as archives of VA data accumulate over time, possibilities of studying epidemic dynamics using VA approaches emerge. This is of interest in terms of measuring potential newly emerging causes of death [2], as well as for monitoring the dynamics of epidemiological transition [3]. But it also raises new methodological challenges, for example around consistent interpretation of VA into causes of death over long periods of time and consequently around practitioners' developing perceptions of new situations. More generally, it raises the question of how effectively VA methods are able to detect newly emerging causes of death.
Over the past two decades, southern Africa has experienced a massive and rapidly developing epidemic of HIV infection and associated mortality [4][5][6]. However, large-scale modeled estimates provide a rather imperfect picture of the epidemic, given that most deaths in southern Africa are neither certified nor medically investigated [7]. Localized populations with intensive surveillance, such as member centers of the INDEPTH Network [8], provide opportunities to look at specific examples in detail [9][10][11], even if this may generate a subsequent debate as to generalizability. A number of studies elsewhere have established the validity of VA methods for attributing deaths to HIV/AIDS, particularly among adults [12][13][14][15][16]. Nevertheless, there remain some unresolved issues about how to best handle co-causes of mortality in cases of HIV-related death, and willingness to attribute deaths to HIV, whatever methods are used, may be influenced by nonmedical factors such as social stigmatization [17,18].
HIV-related deaths are complex to count, since HIV-positive individuals are frequently affected by other diseases as a result of being immunologically compromised, and it can be difficult from VA data, in the absence of HIV serology, to determine the relative significance of AIDS versus other diseases in the processes leading to death. The 10th version of the International Classification of Diseases (ICD-10) uses codes B20 to B24 as underlying causes representing HIV/AIDS in combination with other disease categories (B20 infectious and parasitic diseases, B21 malignant neoplasms, B22 other diseases including wasting, B23 other conditions, and B24 nonspecific AIDS) [19]. However, differentiating probable HIV-related deaths detected by VA into these subcategories may not be easy to achieve, particularly where there is no explicit evidence of HIV positivity.
The ability to interpret any VA interview reliably depends on several factors, including the quality and detail of information on signs and symptoms provided by the informant. In settings where stigma is high around a particular cause of death -as is often the case for HIV -sensitive information may be withheld from the interviewer. Extent of nondisclosure is likely to vary as an epidemic develops, starting from minimal levels when key symptoms are not yet widely known by informants, and when physicians may also not yet be attuned to a particular diagnosis. As a significant epidemic such as HIV/AIDS develops, stigma is likely to rise, together with nondisclosure of relevant details. In a mature epidemic -particularly in the case of HIV as antiretroviral treatments are rolled out -nondisclosure may wane. These patterns may have significant effects on the outcomes of VA interpretation.
The Agincourt Health and Socio-Demographic Surveillance Site in the rural northeast of South Africa has been documenting a geographically-defined population (around 70,000 people in 2005) since 1992, including registering deaths and following those up with VA interviews [20]. The start of this surveillance in 1992 coincided with the early stages of the HIV epidemic (at least in terms of HIV-related mortality) in this area, and hence the accumulated VA data enable a methodological exploration as to how the epidemic evolved. Our primary aim is to characterize the epidemic of HIV-related mortality in this population, comparing both physician-interpreted causes of death and probabilistically modeled causes of death from the same VA interview material. As subsidiary aims, we investigate (1) approaches for handling common co-causes of HIV-related mortality, such as tuberculosis, malnutrition, and chronic gastroenteritis, and (2) variations between different coding physicians' responses to the emerging epidemic. Although this paper deals specifically with an epidemic of HIV-related mortality, findings are discussed in terms of using VA for monitoring long-term dynamics in mortality patterns.
Methods
The analyses in this paper are based on the entire series of 6,153 deaths (among all ages) in the Agincourt population from 1992 to 2005, as previously described in terms of primary-care planning [21] and in a comparison between physician and modeled VA interpretation [22]. VA interviews were successfully completed for 5,794 deaths (94.2%), using a questionnaire developed before international standards were agreed upon. These VA interviews were subsequently coded by two independent physicians who attempted to reach consensus where their diagnoses differed, with a third reviewing and intervening in case of disagreement. If no consensus could be reached, the cause of death was recorded as "undetermined." During the period from 1992 to 2005, 14 physician reviewers were involved in VA interpretation during various subperiods. In 373 (6.4%) of VA reviews, it was not possible to trace the identities of the coding physicians. The InterVA model (http://www.interva.net) was also applied to the VA interview material, as described previously [22]. This public-domain model relates input indicators (history, signs, symptoms from VA interview material) to likely cause(s) of death using Bayesian probabilities. A standard grid of conditional prior probabilities was defined by an expert panel of physicians [23]. The model has subsequently been evaluated in a number of settings [22,24]. As a standard model designed for cause of death determination in low- and middle-income countries, it has the advantage of consistency over time and place [25].
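To illustrate the general idea of relating VA indicators to likely causes with Bayesian probabilities, the sketch below uses a naive-Bayes style update. All numbers, causes, and indicators are made up for illustration; they are not drawn from the InterVA probability base, and the real model's indicator set and probability grid are far larger.

```python
import numpy as np

# Hypothetical priors and conditional probabilities (illustrative only)
causes = ["HIV-related", "TB", "Malaria", "Other"]
prior = np.array([0.20, 0.15, 0.05, 0.60])      # e.g. "high HIV, low malaria" setting
p_ind_given_cause = {                            # P(indicator present | cause)
    "chronic diarrhoea":  np.array([0.50, 0.10, 0.05, 0.05]),
    "severe weight loss": np.array([0.60, 0.40, 0.05, 0.10]),
    "fever":              np.array([0.30, 0.40, 0.90, 0.20]),
}

def posterior(reported_indicators):
    """Multiply the prior by P(indicator | cause) for each reported
    indicator, then renormalise over causes."""
    post = prior.copy()
    for ind in reported_indicators:
        post *= p_ind_given_cause[ind]
    return post / post.sum()

print(dict(zip(causes, posterior(["chronic diarrhoea", "severe weight loss"]))))
```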
A dataset was compiled (using Microsoft FoxPro) containing the two independent physician interpretations (main cause, possible immediate and contributing causes with ICD-10 codes), the physicians' consensus finding as to underlying cause (based primarily on the individual physicians' main cause findings), and the InterVA version 3.2 results (up to three likely causes per case, each associated with a quantified likelihood). The HIV level for the InterVA model was set to "high" and malaria set to "low," based on existing knowledge of causes of death in this population, as discussed previously [22]. The concept behind this setting in the InterVA model is analogous to a coding physician knowing that HIV or malaria represent more-common or less-common public health problems in a particular population, irrespective of the details around any individual death or detailed prior knowledge of cause-specific mortality. Age groups were defined as under 1 year, 1 to 4 years, 5 to 19 years, 20 to 49 years, 50 to 64 years, and 65 years and over. Analyses used Stata 10.
Surveillance-based studies in the Agincourt subdistrict were reviewed and approved by the Committee for Research on Human Subjects (Medical) of the University of the Witwatersrand, Johannesburg, South Africa (protocol M960720). Informed consent was obtained at the individual and household levels at every follow-up visit, whereas community consent from civic and traditional leadership was secured at the start of surveillance and reaffirmed from time to time. Feedback on cause of death patterns is presented to local communities and health service providers annually.
Results
The evolving epidemic of HIV-related mortality
Figure 1 shows the evolution of HIV-related mortality, both overall and by age group, in the Agincourt population, calculated as the rates (per 1,000 person-years) of physician consensus underlying cause being coded as ICD-10 B20-B24 (1,136 deaths, 18.4%), or the rates of most likely cause from InterVA being HIV/AIDS-related death (1,146 deaths, 18.6%). Both approaches showed very similar patterns over time and within age groups, with a huge increase from no HIV-related deaths in 1992 to 2.5 per 1,000 person-years in 2005 according to physician coding, and correspondingly from 0.2 to 2.6 per 1,000 person-years according to InterVA. Table 1 shows numbers of deaths according to physicians and InterVA, by age, sex, and period.
Only 63/6,153 (1.0%) of the overall VA records explicitly mentioned HIV positivity in the interview material, so the overwhelming majority of conclusions on HIV-related deaths both by the physicians and the model reflected circumstantial findings. When data for the period from 1992 to 1994 were rerun with InterVA set to "low" HIV, the number of cases most likely due to HIV-related causes decreased from 57 to eight out of a total of 707 deaths (8.1% to 1.1%). Physician consensus findings for the same period recorded 13 cases (1.8%), although a total of 20 cases (2.8%) were HIV-related according to at least one physician. However, among the 51 cases rated as HIV-related by the model ("high" setting) but not by physician consensus for this period, the most common underlying cause attributed by physicians was malnutrition (nine cases, 17.6%). By contrast, overall physician consensus results for 1992-1994 recorded 4.2% for malnutrition, compared with 1.3% for 1995-2005.
Effects of different approaches for estimating HIV-related mortality
In addition to the physician consensus material on underlying causes of death that were identified as HIV-related, an additional 18 cases involved HIV as the physician consensus contributory cause. From this revised total of 1,154 HIV-related deaths, 693 (60.0%) were concluded in the physician consensus to have an infection (ICD B20), out of which 148 (12.7%) were specifically mentioned as tuberculosis. Ten cases (0.9%) had malignancies (B21), and 99 (8.6%) had chronic gastroenteritis or malnutrition (B22).
Using the alternative approach of the InterVA model, a total of 1,237 cases were rated as probably HIV-related, although in 91 of these HIV was not the most likely cause. Of the 1,237 cases, 156 (12.6%) were also identified as being associated with tuberculosis and 10 (0.8%) with other infections (B20), three (0.2%) with malignancies (B21), and 18 (1.5%) with chronic gastroenteritis or malnutrition (B22).
Interphysician variations in attributing HIV-related mortality
Of the 14 physicians coding this series of VAs, two completed very few (two and 16 cases respectively) and have been excluded from further consideration of interphysician variation. Of the 12 remaining, there were between two and five physicians coding VAs in any one year. No individual carried out work over the entire period. Figure 2 shows the overall proportions of physician consensus and InterVA HIV-related deaths by year, together with the proportions rated by the various physicians. In addition, the "low" HIV InterVA results for 1992-1994 are shown. Table 2 shows the proportions of HIV-related deaths as coded by first and second physician coders (irrespective of individual physician identity) compared with the revised physician consensus proportions, by year. The overall proportion of HIV-related mortality after achieving consensus was around 8% lower than single physician opinions (19.9% compared with 21.6%, ratio 0.92).
Discussion
It is clear that the progression of the epidemic of HIV-related mortality in this rural South African community, with population-based rates increasing more than tenfold over a 14-year period, was successfully detected and tracked by means of VA, in the absence of any more rigorous routine procedures for following up deaths and their causes. Although one might not argue for VA as the epidemiological method of choice for this purpose, the reality across much of the world is that there is no realistic alternative for the time being [26]. Even where deaths are supposed to be certified, there can be considerable difficulties in accurately capturing and recording deaths related to HIV/AIDS [27]. How VA material can best be interpreted into cause of death findings including HIV-related mortality is thus a very important issue, which can then form the basis of understandings of population health, for example patterns of social disparities [28].
The validity, reliability, and consistency with which VA data can be interpreted, particularly in terms of HIV-related mortality, are important issues. Both the physician-based and modeled approaches presented here yielded very similar results in terms of characterizing the epidemic. Intuitively plausible trends, such as the increasing age of HIV-related deaths observed as the epidemic developed (according to both approaches), presumably following developments in care and treatment, are encouraging. The InterVA model was not specifically designed to deliver ICD-10 codes, and so the major comparison here was equivalence at the B2* level rather than at the third digit level. As is usually the case where VA is used, there is no gold standard against which to absolutely compare these findings. Even if we knew the HIV serostatus for every death, there would still be difficulties in determining which deaths were actually attributable to HIV. However, it is very unlikely that the closely similar epidemic patterns shown for the two methods in Figure 1 would be similar entirely by chance, and in that sense both lend credence to the other. But, as we have noted previously [22], the physician approach was very time-consuming and expensive compared with probabilistic modelling, and the delays and expense involved in the physician process may be hard to justify from these results.
Since the "two physicians plus arbitrator" model of physician interpretation seems to have become a de facto (but not necessarily "gold") standard in much VA work, it is perhaps surprising that there have been few detailed analyses of individual physicians' opinions compared with physician consensus findings in VA studies using this method, with some exceptions [29,30]. It is also important in this context to remember that concurrent findings do not necessarily constitute "truth" [31]. In the particular setting of this epidemic, where the incidence of HIV-related deaths was changing at a rate that was not necessarily clear to physicians at the time, especially in the early stages of epidemic, it was particularly relevant to examine the ways in which individual physician interpreters responded to the changing situation, as well as the effect on consensus findings. It is also noteworthy that a relatively large number of individual physicians were involved in the process over the 14-year period; it would be surprising if this were not the case in most longer-term VA operations. It is worth noting that large studies using multiple physicians to interpret cause of death are difficult to interpret and understand if details about interobserver effects are not presented. It is also clear from the results in Figure 2 that, in general, consensus rates tended to be slightly lower than individual physician rates, particularly in the later years. This could have important implications in considering whether to use only a single coding physician per case, as has previously been suggested [32]. While there was generally good consistency between first and second physician findings (averaging over individual physicians) as shown in Table 2, the generally slightly lower rates of HIV-related mortality from the consensus process would probably result in slightly higher levels of "undetermined" cause of death in an all-cause analysis than might have resulted from using only a single physician coder.
Around the inception of this HIV-related mortality epidemic, the relationship between individual physicians, consensus results and the "low" and "high" HIV settings for the InterVA model is particularly interesting. The proportional differences in rates among the various approaches were greatest during the first three years, as is clear from Figure 2. Initial work on the InterVA model suggested that only causes likely to vary by an order of magnitude in terms of overall proportion needed to have an adjustment [23], with the crossover between "low" and "high" being at around 1% of total mortality. The "high" setting was therefore the appropriate one overall here. The analogous "setting" in physician coding is represented by a physician's awareness of how common HIV-related mortality is in a population, irrespective of the detailed circumstances of a particular case. Physician consensus rates gave the lowest measure of HIV in the early years, and it seems that in the uncertain early stages of the epidemic it was particularly difficult to achieve consensus, even though some deaths were considered as HIV-related by one physician. This supposition is indirectly supported by finding that the physicians' highest rates of malnutrition-related mortality were recorded during that period, probably representing a misclassification of deaths that were at least partly HIV-related. Thus the reality here is that the HIV-related mortality rates between 1992 and 1994 were probably somewhere in between the various estimates shown in Figure 2. Conversely, individual physicians recorded appreciably more HIV-related mortality in the later years, compared with both the consensus and modeled findings, possibly reflecting physicians' inflated views of HIV latterly. Additionally, nondisclosure of sensitive details in VA interviews at various stages of the epidemic may have compromised both the physicians' and model's findings. In the case of the model, it is important to note that the HIV rates over the period increased tenfold without any information being given to the model about a likely increase over time. This illustrates the relatively noncritical magnitudes of the cause-specific prior probabilities incorporated in the model, and supports the notion that a single model can be used for interpreting VA data over wide ranges of time and place, maximizing the benefits of consistency for comparative purposes over different settings.
Conclusions
VA was clearly able to identify the emergence and growth of a very significant epidemic of HIV-related mortality in this population, and using either physicians or probabilistic modeling to derive cause of death findings gave closely similar results. The evidence suggests that physicians were perhaps a little slow to recognize the early stages of the epidemic, while the model (at least when set to expect a "high" level of HIV mortality) may have slightly overestimated initially. However, the fact that a numerically constant model was able to characterize a greater-than-tenfold increase in HIV-related mortality over time is an important demonstration of the relative robustness of probabilistic modeling for VA interpretation. This suggests that there is no need for finely tuned "local" versions of models for VA interpretation, the proliferation of which would detract from the comparability of results over time and place.
The Economic Effect of Gaining a New Qualification Later in Life
Pursuing educational qualifications later in life is an increasingly common phenomenon within OECD countries, since technological change and automation continue to drive the evolution of skills needed in many professions. We focus on the causal impacts on economic returns of degrees completed later in life, where motivations and capabilities to acquire additional education may be distinct from education in early years. We find that completing an additional degree leads to more than $3000 (AUD, 2019) extra income per year compared to those who do not complete additional study. For outcomes, treatment and controls we use the extremely rich and nationally representative longitudinal data from the Household Income and Labour Dynamics Australia survey (HILDA). To take full advantage of the complexity and richness of this data we use a Machine Learning (ML) based methodology for causal effect estimation. We are also able to use ML to discover sources of heterogeneity in the effects of gaining additional qualifications. For example, those younger than 45 years of age when obtaining additional qualifications tend to reap more benefits (as much as $50 per week more) than others.
Introduction
Pursuing educational qualifications later in life is an increasingly common phenomenon within OECD countries (OECD, 2016). Technological change and automation continue to drive the evolution of skills needed in many professions, or to oust the human workforce in others. This is particularly true for middle-income workers performing routine tasks (Autor, Katz and Kearney, 2008, Acemoglu and Autor, 2011). Also at the lower end of the income distribution, such as among welfare recipients, governments are increasingly trying to promote the idea of life-long learning.
This paper contributes to understanding one efficacy dimension of these policy and individual choices by estimating the causal effects on earnings and by focusing on mature-age students. We add to previous work on the returns to education for 'younger students'. Previous research points to positive and significant wage premiums for younger cohorts with more education, ranging between 5 and 13% (Angrist and Krueger, 1991, Harmon, Oosterbeek and Walker, 2003, Machin, 2006), or even higher than 15% as in the case of Harmon and Walker (1995). The wage returns to education may be more uncertain for older students, as they face higher opportunity costs to study and need to navigate a more fragmented system in the post-secondary education setting.
We also add to the literature that investigates the economic returns for mature-age learners at community or training colleges (Jacobson, LaLonde and Sullivan, 2005, Chesters, 2015, Zeidenberg, Scott and Belfield, 2015, Polidano and Ryan, 2016, Xu and Trimble, 2016, Belfield and Bailey, 2017a, Dynarski, Jacob and Kreisman, 2016, 2018, Mountjoy, 2022). The evidence on the labour market returns to vocational and community college education is strong and positive, particularly for female students (Belfield and Bailey, 2017a, Zeidenberg, Scott and Belfield, 2015, Perales and Chesters, 2017). The results are even stronger once authors account for the different earnings-growth profiles of students and non-students before undertaking the degree (Dynarski, Jacob and Kreisman, 2016, 2018).
By focusing on one institutional setting (the community or training college), the results of such studies may not be generalisable to the entire mature-age education market, such as to students who seek different degree types or who study at different institutions (Belfield and Bailey, 2017b, Mountjoy, 2022). We add to this literature by estimating the returns across all formal degree-types (post-graduate degrees, training certificates, diplomas etc), spanning all subjects and institutions at which the study took place. This means we analyse the effects for a group of students with a larger span of demographic and socio-economic background characteristics. The broad remit of students that we analyse also allows our study to complement studies that evaluate government-run training programs, which tend to enrol low-productivity workers (Ashenfelter, 1978, Ashenfelter and Card, 1985, Bloom, 1990, Leigh, 1990, Raaum and Torp, 2002, Jacobson, LaLonde and Sullivan, 2005, Card, Kluve and Weber, 2018, Knaus, Lechner and Strittmatter, 2022). We contribute the first evidence in systematically identifying which groups of mature-age students tend to benefit more from further education. We also complement previous studies that already find significant heterogeneity by degree-type, institutional setting, and by the background characteristics of the student (Blanden et al., 2012, Zeidenberg, Scott and Belfield, 2015, Polidano and Ryan, 2016, Dorsett, Lui and Weale, 2016, Xu and Trimble, 2016, Belfield and Bailey, 2017a, Perales and Chesters, 2017, Böckerman, Haapanen and Jepsen, 2019). A benefit of a systematic, data-driven approach to heterogeneity analysis is that it can reduce the risk of overlooking important sub-populations compared to less data-driven approaches (Athey and Imbens, 2017, Knaus, Lechner and Strittmatter, 2021).
A key challenge in estimating the causal returns to later-life education is that factors that enable mature-age learners to pursue and complete a qualification may also be precursors to later-life success. Moreover, the drivers of degree completion may be numerous and related to other variables in complex, unknown ways. We use a machine learning (ML) based methodology in this work since it allows us to intensively control for many confounding factors, as well as discover sources of treatment heterogeneity. ML algorithms also automatically discover nonlinear relationships that may be unknown to the researcher. For high-dimensional and complex datasets such as we use in this research, these methodological abilities are crucial in reducing bias from model mis-specification and confounding (e.g. selection into treatment), and reducing variance from correlation/collinearity. We adapt ML tools for causal inference purposes. We recognise that, as with all statistical models, we make assumptions when we use ML techniques for causal inference, and these need to be tested. One key assumption is that the controls included in the ML models sufficiently account for selection into treatment. We propose to undertake a replication exercise where we compare the results of the ML model with that of baseline models, using Ordinary Least Squares (OLS) and Fixed Effects. We also contrast the selected control variables in the ML model with those that were manually selected in Chesters (2015), and comment on the potential biases from manual variable selection. We have chosen this published work because it uses the same data (HILDA) and examines the same topic.
The results show that an additional degree in later-life increases total future earnings by more than an average of $3,000 per year compared to those who do not complete any further study. We consistently estimate this causal effect using a selection-on-observables strategy based on T-learner, Doubly Robust and Bayesian models. The estimate is based on 19 years of detailed nationally representative Australian data from the Household Income and Labour Dynamics Australia (HILDA) survey. Two dimensions of these data are important. The first is that they contain a wealth of information about each respondent. For example, we begin with more than 3,400 variables per observation, including information about the respondents' demographic and socio-economic background, and on their attitudes and preferences. Access to this broad range of information means that by controlling for them, we can potentially proxy for unobservable differences between those who do and do not obtain a new qualification. Secondly, this dataset contains many variables that are highly correlated, so we require a systematic approach to reduce such information redundancy -something that ML models are adept at.
Our ML approach also identifies new sub-populations for which the treatment effects are different. We document that the starting home loan amount and employment aspirations are significant factors related to the extent of gain from further study. We also find that the starting levels of, and pre-study trends in, personal and household income are hugely important. Age and mental health variables also account for variation in estimated effects. All of these variables are consistently selected as being significant for prediction out of the 3,400 features within the HILDA data. This selection is consistent across different ML models (which include linear and non-linear model classes) and across numerous bootstrap draws of the original sample.
Previous studies have found that individuals who seek a further degree tend to have slower-growing earnings in the period before their study starts compared to similar individuals who do not seek further study (Jacobson, LaLonde and Sullivan, 2005, Dynarski, Jacob and Kreisman, 2016, 2018). By accounting for dynamic selection into obtaining a further degree, we can be confident that we compare the earnings paths of mature-age students to the paths of similar non-students who displayed the same earnings (and other) paths before study began. In this paper, we explicitly control for the trajectories of socioeconomic and demographic circumstances before study starts. Standard fixed effects estimation would miss these dynamic confounders. We find that our ML estimates are significantly smaller than the standard fixed effects results. We also estimate lower returns compared to Ordinary Least Squares (OLS) models. We document the additional confounder variables that we include in our models but are usually omitted from standard OLS specifications. These variables suggest there is significant selection into mature-age students who undertake a further degree.
We adapt ML models for the purpose of estimating causal effects. Standard off-the-shelf ML models are better suited to predictive purposes. When obtaining a prediction, off-the-shelf ML models can find generalisable patterns and minimise overfitting issues, through the use of cross-validation, because the true outcomes are observed. This means that we can optimise a goodness-of-fit criterion. Causal parameters, however, are not observed in the data, which means we cannot directly train and evaluate our models.
In this paper, we take the difference between two optimal outcome models, which can achieve the optimum bias-variance trade-off point for the conditional average treatment effect. Specifically, we model the response surfaces for two conditional mean equations - one using the treatment observations and another using the control observations. We estimate these equations with ML methods such as the T-learner and Doubly Robust learner. Here, we employ both linear (LASSO and Ridge) and non-linear (Gradient Boosted Regression) model classes. We compare and evaluate their comparative performance using nested cross-validation. We then test the statistical significance of our causal parameters by examining the distribution of the estimates through bootstrapping. Last, we use a variety of Bayesian ML models following the formulation presented in Hahn, Murray and Carvalho (2020), which reduces effect estimation bias within the Bayesian paradigm. These models have several desirable properties, such as the ability to directly parameterise heterogeneous prognostic and treatment models.
Context: Higher education and vocational study in Australia
Participation in mature-age education in Australia is among the highest in the world. In 2014, Australia's participation in vocational education by those aged 25-64 was the highest among OECD countries. The tertiary education rate for those aged 30-64 was the second highest (Perales and Chesters, 2017). Mature-age Australians are increasingly enrolling in university or college to change employers, change careers, gain extra skills, improve their promotion prospects and earning capability, or search for better work/life balance. Redundancy and unemployment have also been driving forces for individuals to return to education later in life (Coelli, Tabasso and Zakirova, 2012).
The increase in mature-age learners accessing higher education has in part been driven by government policy. In 2009, the Australian government adopted a national target of at least 40% of 25-34-year-olds having attained a qualification at bachelor level or above by 2025 (O'Shea, May and Stone, 2015). This was part of a policy that transitioned Australia to a demand-driven system (Universities Australia, 2020). The policy had a large effect on access to higher education, as it removed the cap on the number of university student places. By 2017, 39% of 25-34-year-olds had a bachelor's degree or higher (Caruso, 2018).
While the initial uptake of university places in the demand-driven system was strong, especially among mature-age students 1 (Universities Australia, 2019), growth in undergraduate enrolments has slowed since 2012. In 2018, mature-age enrolments even dropped below the previous year's level. The 40+ age group showed the largest decline, receding by 10%, while enrolments among the 25-29 and 30-39 age groups fell by around 4% (Universities Australia, 2020). The decline in enrolments coincided with the freezing of the Commonwealth Grant Scheme (CGS), which capped funding at 2017 levels, effectively ending the demand-driven system (Universities Australia, 2020).
Access to Commonwealth Supported Places (CSPs) has since been limited to 2017 levels, with cap raises from 2020 subject to performance measures (Universities Australia, n.d.). As a proportion of the working-age population, mature-age students also participated less in vocational education and training (VET) over the same period. It appears the introduction of the demand-driven system also increased VET participation between 2010 and 2012, before participation resumed its decline (Atkinson and Stanwick, 2016). Total VET enrolments have stabilised since 2018, with 2019 and 2020 enrolments slightly above 2018 levels 2 (NCVER DataBuilder, 2021). The impact of COVID-19 on 2021 enrolments is yet to be fully determined. So far, VET enrolments for the first half of 2021 are well above the previous 4 years across all age groups, with ∼1 million enrolments in 2021 compared to ∼870 thousand enrolments in 2017 3 (NCVER DataBuilder, 2021).
The cost of a bachelor's degree for domestic students in Australia is the sixth highest among OECD countries (Universities Australia, 2020). In 2018, the average annual cost of a bachelor's degree in Australia was around $5,000, about half the cost in the two most expensive countries, the US (around $9,000) and the UK (around $12,000) 4. VET and TAFE courses in Australia cost a minimum of $4,000 per year on average, while postgraduate courses cost a minimum of $20,000 per year on average 5 (Studies in Australia, 2018).
Mature-age students can cover the cost of further study themselves or they can receive support from the government. Students at university or approved higher education providers can access financial support from the Higher Education Loan Program (HELP) scheme, which provides income-contingent loans. This allows students to defer their tuition fees until their earnings reach the compulsory repayment threshold, upon which repayments are deducted from their pay throughout the year at a set rate. Postgraduate students can access the Commonwealth Supported Place (CSP) scheme, which subsidises tuition fees for those studying at public universities and some private higher education providers. However, most CSPs are for undergraduate study.
FEE-HELP is the HELP scheme available to full-fee-paying students who do not qualify for a CSP, i.e., postgraduate students. VET Student Loans (formerly VET FEE-HELP) are also part of the HELP scheme and are available to students undertaking vocational education and training (VET) courses outside of higher education (Universities Australia, 2020). CSPs and HELP loans are withdrawn from students who fail half of their subjects, assessed on a yearly or half-yearly basis depending on the level of study. 6
Data
We use data from the Household Income and Labour Dynamics Australia (HILDA) survey. These data are rich, and we exploit the full set of background information on individuals (beginning with more than 3,400 variables per observation).
HILDA covers a long time span of 19 years, starting in 2001. We use the 2019 release. This means we observe respondents annually from 2001 to 2019.
Sample exclusions
Our main analysis sample contains respondents who were 25 years or above in 2001. This allows us to focus on individuals who obtain a further education -beyond that acquired in their previous degree.
Our main analysis focuses on measuring the impact of further education using wave 19 outcomes. Here, the feature inputs to the models are taken from the individuals in 2001. We delete any individuals who were 'currently studying' in 2001. This ensures that our features, which are defined in 2001, are not contaminated by the impacts of studying but clearly precede the study spell of interest. These sample exclusions result in 7,359 respondents being dropped because they are below the age of 25 in 2001, and a further 1,387 respondents being dropped because they were studying in 2001.
We then restrict the sample to those who are present in both 2001 and 2019. This ensures that we observe base characteristics and outcomes for every person in our analysis sample. This results in a further 5,727 respondents being dropped from the sample. Our analysis sample has 5,441 observations. More details of our main analysis sample and data can be found in the Online Appendix Document 1. 7
Outcomes
We measure outcomes in 2019 across the groups of individuals who did and did not get re-educated. We use annual earnings to measure the economic returns to education. We also analyse outcomes related to the labour market such as employment, changes in earnings, changes in occupation, industry, and jobs. 8
Treatment
We define further education as an individual who obtains a further degree in a formal, structured educational program. These programs must be delivered by a certified training, teaching or research institution. Thus, we do not analyse informal on-line degrees (such as Coursera degrees). We also do not consider on-the-job training as obtaining further education.
Our treatment variable is a binary variable that takes the value of 1 if an individual obtained an additional degree anytime between wave 2 (2002) and wave 17 (2017). As we analyse outcomes in 2019, this means we calculate the average returns between 2 and 17 years after course completion. We delete any respondent who obtained a qualification after wave 17. This allows us to analyse outcomes at least two years after course completion. 7 For sensitivity analysis, a second sample of respondents is examined. They are slightly younger when they began study, their feature values are taken in the two years before study began, and their outcomes are measured four years after their study began. In this second sample, there are 1,814 individuals who started and completed a further educational degree, and 60,945 person-wave control observations who never completed a further degree. We detail our second approach in the Online Appendix Document 2.
8 A second approach is to use outcomes measured four years after the start of a study spell. For sensitivity analysis, we repeat our main estimations using this second approach. Here, as many individuals in our dataset never started a further degree i.e. they are in our control group, we assign a time stamp to them for every year the control person theoretically could have started to study. We do this for every year from 2003 to 2019. This implies that control group individuals can be duplicated multiple times in the dataset. We then measure the control individuals' outcomes 4 years after their theoretical time stamp.
HILDA documents formal degree attainment in two ways. The first is to ask respondents, in every wave, what their highest level of education is. The second is to ask respondents, in every wave, whether they have acquired an additional educational degree since the last time they were interviewed.
We utilise both of these questions to construct our measure of further education. Using the first question, we compare whether the highest level of education in 2019 differs from that in 2001. If there has been an upgrade in educational qualification between these two years, we set the treatment indicator to one, and zero otherwise. This question, however, only captures upgrades in education; it fails to capture additional qualifications that are at or below the level of the degree previously acquired by the respondent. We rely on the second survey question to fill this gap.
These two survey questions thus capture any additional qualification obtained from 2002 to 2017, inclusive. Additional qualifications refer to the following types of degrees: Trade certificates or apprenticeships; Teaching or nursing qualifications, Certificate I to IV, Associate degrees, Diplomas (2-year and 3-year fulltime), Graduate Certificates, Bachelor, Honours, Masters and Doctorate degrees.
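To make the construction concrete, a minimal pandas sketch of how the two survey questions could be combined is shown below; the column names are hypothetical stand-ins for the HILDA-derived variables, not the survey's actual identifiers.

```python
# Illustrative sketch only: column names are hypothetical, not HILDA's.
import pandas as pd

def build_treatment(df: pd.DataFrame) -> pd.Series:
    # Question 1: upgrade in highest qualification between 2001 and 2019.
    upgraded = df["highest_educ_2019"] > df["highest_educ_2001"]
    # Question 2: any additional qualification reported in waves 2-17.
    extra = df[[f"new_qual_w{w}" for w in range(2, 18)]].any(axis=1)
    return (upgraded | extra).astype(int)
```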
Covariates/features
We define our covariates, or features as they are known in machine learning parlance, using 2001 as the base year. Since we delete any respondents who were currently studying in 2001, we ensure that all features are defined before a respondent begins further study. 9 A unique aspect of our feature selection strategy is that we use all the information available to us from the HILDA survey in 2001. This means that we have more than 3,400 raw variables per observation. Before using the features in an ML model, we delete any features that are identifiers or otherwise deemed irrelevant for explaining the outcome.
In order to reduce the redundancy in this vast amount of information, we next apply a supervised machine learning model to predict outcomes 5 years ahead of 2001, i.e., in 2006. We then select the top 100 variables that are most predictive of the outcome in 2006. 10 These variables are listed in Table 1. 9 We also test the sensitivity of our results to using feature inputs that are taken from the individuals closer to the timing of their study, namely two years before study began. Here, we use both the year and the two years preceding the start of a study spell to define our features. This allows us to capture both level and growth values in the features.
10 Confounders are features that have an impact on both the outcome and the treatment. Chernozhukov et al. (2018) suggest including the union of features kept in the two structural equations (outcome on features and treatment on features). Here, we only include the features that predict the outcome equation, because including features that are only predictive of the treatment can erroneously pick up instrumental variables (see Pearl (2012) for a discussion of this issue).

Missing variables from the baseline model

As part of a replication exercise, we contrast the results from the ML model with published work using Ordinary Least Squares (OLS) and Fixed Effects models. We also contrast the features selected by the ML model with an approach that manually selects the variables, as in the case of Chesters (2015). We call this the 'baseline' model.
As a descriptive exercise, Table 2 presents the features that were 'missed' by the baseline model. In the baseline model, we included features such as age, gender, state of residence, household weekly earnings, highest level of education attained, and current work schedule. This collection of variables has been informed by theory or previous empirical results.
The data-driven model identifies more salient variables compared to the baseline model. Additional variables include employment conditions such as work schedule, casual employment, firm size, tenure or years unemployed; financial measures such as weekly wage, investment income and mortgage debt; health measures such as limited vigorous activity and tobacco expenses; and work-life preferences related to working hours and child care.
We identify variables as missing from the baseline model if those variables explain the residual variation in the outcome. Specifically, we regress the residuals from the baseline models (without the treatment included) on the features included in the data-driven model and train a LASSO model to highlight the salient variables that were missed. The variables that are chosen are listed in Table 2. We also document how these variables are correlated to the outcome and to the treatment in order to give us a sense of the direction of the bias their omission may induce.
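A sketch of this residual-regression step follows, assuming scikit-learn style model objects and a pandas feature matrix; baseline_model, X_baseline, X_datadriven and y are placeholders rather than objects from our actual code.

```python
# Illustrative residual check for omitted variables.
from sklearn.linear_model import LassoCV

resid = y - baseline_model.predict(X_baseline)   # baseline fit, treatment excluded
lasso = LassoCV(cv=5).fit(X_datadriven, resid)
missed = X_datadriven.columns[lasso.coef_ != 0]  # features explaining residual variation
```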
Most of the omitted variables bias the OLS estimates upwards. 11 The upward bias is consistent with the ML models estimating an economic return to obtaining a new qualification that is significantly smaller than the return from an OLS model or a Difference-in-Differences Fixed Effects (DD-FE) model. In the DD-FE model, we use the same 5,441 individuals as the other methods, but they are followed over two waves: 2001 and 2019 (i.e. there are 10,882 person-wave observations). We control for individual and wave fixed effects. Figure 10 displays the estimated returns from six different models. The first three bars show significantly higher returns based on the OLS (no controls), OLS (with controls) and DD-FE models compared to the last three bars, which are based on the ML models: Gradient Boosted Regression, Doubly Robust and Bayesian Causal Forest. We discuss these methods in more detail below.
11 Exceptions include casual employment status, the presence of a past doctorate qualification, years unemployed, parental child care, and dividend and business income.
It is important to highlight that our approach to identifying variables missing from the baseline model is a descriptive one. As previously mentioned, the ML algorithm selects among highly correlated variables effectively at random; thus we may have failed to report the labels of some important variables omitted from the baseline model.
Descriptive Figures and Tables
We calculate the average returns to degree completion for mature-age students who completed degrees between 2002 and 2017. The window in which study and degree-completion took place is noticeably large. However, sample size limitations with our survey data mean that it is not feasible to run an ML analysis, disaggregated by the timing-of-completion.
In order to obtain some insights into the potential heterogeneity over time, we present a series of descriptive graphs in this section. Here, our aim is not to present any causal analysis, but to describe which groups studied earlier in the time period (and thus had more time to accumulate returns). These graphs can also point to the potentially different factors driving study across the time period, and the different effects on earnings depending on how much time has elapsed since completion. Figure 3 presents the distribution of degree completion over time. There is a steep decline in degree-completion proportions over time. This is likely to reflect the aging profile of HILDA survey respondents and the fact that further study is disproportionately higher among the younger cohorts (25-44 year olds) (see Figure 4).
Over time, Figure 5 shows that the composition of degrees completed has shifted. Among those who completed a degree in later years, compared to those who completed a degree in the earlier period, a higher percentage completed a Certificate III or IV, Diploma or Advanced Diploma as opposed to a lower-level degree (Certificate I or II or below). In all years, the most frequently completed degrees are Cert 3 or 4, Associate degrees, Diplomas and Advanced Diplomas.
The predominance of Cert 3 or 4 degrees is common across genders, although Figure 6 shows that the distribution of degrees is more heavily skewed towards these degrees for men than for women. Figure 7 shows an increase in both average earnings and employment over time between 2002 and 2017. Despite the upward trajectory, these outcomes show more volatility after 2008. This is likely to reflect the smaller samples in the later years of the survey. In our main analysis we average the returns over time, as the samples within each year are inadequate to draw inference about heterogeneity across time.
Method
We aim to estimate the causal impact of obtaining a new qualification. Our empirical challenge is a missing data one, in the sense that we do not observe the counterfactual outcome for each person: what would their income have been if they had (or had not) obtained a new qualification?
We use capitalisation to denote random variables, where Y ∈ R + is the outcome variable, T ∈ {0, 1} is the binary treatment indicator, and X ∈ X are the conditioning variables (which can be a mix of continuous and categorical in type). Lower case is used to denote realisations of these random variables, e.g. y, t and x, and we may use a subscript for an individual realisation, e.g. y i for individual i from a sample of size n.
Under the potential outcomes framework of Imbens and Rubin (2015), Y(0) and Y(1) denote the outcomes we would have observed if treatment were set to zero (T = 0) or one (T = 1), respectively. In reality, we only observe the potential outcome that corresponds to the realised treatment, $Y = T\,Y(1) + (1 - T)\,Y(0)$. The missing data problem (or the lack of counterfactuals) is especially problematic when the treated group is different from the control group in ways that also affect outcomes. Such selection issues mean that we cannot simply take the difference in the averages of the non-missing values of Y(0) and Y(1).
To address the missing data problem, we turn to a range of ML-based techniques. Standard ML tools are purposed to predict, but our aim is to estimate the causal parameter. These are different aims, and so we have to adapt the ML tools. We may potentially bias our causal parameter of interest if we were to use the off-the-shelf tools. For example, if we were to select the important confounders using an ML model to predict the outcome Y , then we may undervalue the importance of variables that are highly correlated to the treatment T but only weakly predictive of Y (Chernozhukov et al., 2018).
We approach filling the missing data indirectly with three types of ML models that have been specially adapted to causal inference. They are: the T-Learner, Doubly Robust and Bayesian models. For all our models, we require the following identification assumptions.
Identification assumptions
To interpret the estimated parameter as a causal relationship, the following assumptions are needed:

1. Conditional independence (or conditional ignorability/exogeneity or conditional unconfoundedness) (Rubin, 1980): the treatment assignment is independent of the two potential outcomes, $(Y(0), Y(1)) \perp T \mid X$. Practically, this amounts to assuming that components of the observable characteristics available in our data, or flexible combinations of them, can proxy for unobservable characteristics. Otherwise, unobservable confounding bias remains.
A benefit of using all the features the HILDA dataset has to offer is that we may minimise unobserved confounding effects. Specifically, we rely on the 3,400 features, complex interactions between them, and flexible functional forms to proxy for components of this unobserved heterogeneity. For example, while we do not observe ability or aptitude directly, we may capture components of it with other measures that are observed in HILDA, such as past educational attainment or the long list of income variables and other income sources (see Table 1 for a list of the features).
The reader is likely to conceptualise other dimensions of unobserved heterogeneity that may not be captured in Table 1. There are two likely scenarios in this case. First, HILDA may not be exhaustive enough, even with its existing richness, to capture all dimensions of unobserved heterogeneity. As a result, our estimates may be biased.
Another potential scenario is that the source of unobserved heterogeneity in question (or some components of it) is still captured but modelled under the guise of another variable label. Variables that are highly correlated with each other are unlikely to be simultaneously included in the model. This is because the ML algorithm, in attempting to reduce the amount of information redundancy, may have randomly dropped one or more of those correlated variables.
2. Stable Unit Treatment Value Assumption (SUTVA) or counterfactual consistency: Assumption 2 ensures that there is no interference, no spill-over effects, and no hidden variation between treated and non-treated observations. SUTVA may be violated if individuals who complete further education influence the labour market outcomes of those who do not. For example, the former group may absorb resources that would otherwise be channelled to the latter group. Alternatively, the former group may be more competitive in the labour market and reduce the probability of promotions or job-finding for the latter group. As those who complete further education are a relatively small group, it is unlikely that these general equilibrium effects would occur.
3. Overlap Assumption (or common support or positivity): no subpopulation defined by X = x is entirely located in the treatment or control group; hence the treatment probability needs to be bounded away from zero and one, $0 < P(T = 1 \mid X = x) < 1$.
The overlap assumption is important because counterfactual extrapolation using the predictive models is likely to perform best for treatment and control subpopulations that have a large degree of overlap in X. If the treatment and control groups had no common support in X, we would be pushing our counterfactual estimators to predict into regions with no support in the training data, and we would therefore have no means by which to evaluate their performance.
This means the optimum bias-variance trade-off point for the conditional average treatment effect may not align with the optimum bias-variance trade-off points of the separate $\mu_1(x)$ and $\mu_0(x)$ models. Since, ultimately, we are interested in the CATEs (as opposed to the predictive accuracy of the individual conditional mean functions), this can mean that we obtain biased CATEs.
4. Exogeneity of covariates (features) -the features included in the conditioning set are not affected by the treatment.
To ensure this, we define all of our features at a time point before any individual started studying. Specifically, we use the first wave of HILDA (in 2001) to define our features. We only look at those individuals who completed further education in 2002 onwards. Furthermore, we delete any individuals who were currently studying in 2001 to ensure the features cannot reflect downstream effects of current study.
With the strong ignorability and overlap assumptions in place, treatment effect estimation reduces to estimating two response surfaces -one for treatment and one for control.
T-Learner model
The first adaptation of ML models for causal estimation is the T-learner approach. We aim to measure the amount by which the response Y would differ between hypothetical worlds in which the treatment was set to T = 1 versus T = 0, and to estimate this across subpopulations defined by attributes X.
The T-learner is a two-step approach where the conditional mean functions defined in Equations (2) and (3), $\mu_1(x) = \mathbb{E}[Y \mid X = x, T = 1]$ and $\mu_0(x) = \mathbb{E}[Y \mid X = x, T = 0]$, are estimated separately with any generic machine learning algorithm.
Machine learning methods are well suited to finding generalisable predictive patterns, and we employ a range of model classes including linear (LASSO and Ridge) and non-linear (Gradient Boosted Regression). Once we obtain the two conditional mean functions, we can predict, for each observation, the outcome under treatment and under control by plugging that observation into both functions. Taking the difference between the two predicted outcomes gives the Conditional Average Treatment Effect (CATE).
To show this, we define our parameter of interest, the CATE, formally as
$$\tau(x) = \mathbb{E}\left[Y(1) - Y(0) \mid X = x\right],$$
which, with the assumptions outlined previously, is equivalent to the difference between the two conditional mean functions:
$$\tau(x) = \mu_1(x) - \mu_0(x).$$
In this estimation, we are not interested in the coefficients from regressing Y on X. What we require is a good approximation of the function $\tau(x)$, and hence good estimates of $\mu_1(x)$ and $\mu_0(x)$, which is within the purview of machine learning methods.
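As a concrete illustration, a minimal T-learner sketch follows; it assumes scikit-learn style estimators, and the names X, y and t are placeholders for the feature matrix, outcome and treatment indicator rather than objects from our pipeline.

```python
# Minimal T-learner sketch (illustrative, not our production pipeline).
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, y, t):
    """Fit mu_1 on treated units and mu_0 on control units, then return
    plug-in CATE estimates tau_hat(x) = mu_1(x) - mu_0(x) for every row."""
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    return mu1.predict(X) - mu0.predict(X)

# ate = t_learner_cate(X, y, t).mean()   # average of the individual effects
```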
A benefit of our set-up is that when we take the difference between the two conditional mean functions, we coincidentally find the optimum bias-variance trade-off point for the conditional average treatment effect. This means that we have an indirect way to obtain the best prediction of the CATE through two predictive equations in which we observe the true outcomes (and are thus able to regularise).
In practice, however, this indirect way of minimising the mean squared error of each separate function, as a proxy for minimising the mean squared error of the treatment effect, can be problematic. See, for example, Künzel et al. (2019) and Kennedy (2020) for settings where the T-learner is not the optimal choice. One potential estimation problem arises when there are fewer treated individuals than control individuals and the individual regression functions are non-smooth. In this instance the response surfaces can be difficult to estimate in isolation, and the T-learner does not exploit the shared information between treatment and control observations. For example, if X relates to Y in the same fashion for treated and control observations, the T-learner cannot utilise this information.
As a result, the estimate $\hat\mu_1$ tends to oversmooth the function; in contrast, the estimate $\hat\mu_0$ regularises to a lesser degree because there are more control observations. This means that a naïve plug-in estimator of the CATE that simply takes the difference $\hat\mu_1 - \hat\mu_0$ will be a poor and overly complex estimator of the true difference. It will tend to overstate the presence of heterogeneous treatment effects. We turn to other ML models to address this potential problem.
Doubly Robust model
The second approach is the Doubly Robust learner (DR-learner). It is similar to the T-learner in that it separately models the treatment and control surfaces, but it uses additional information from a propensity score model. In this case the propensity score, $\rho(x) = P(T = 1 \mid X = x)$, is estimated with a probabilistic machine learning classifier that attempts to model the treatment assignment process. This allows information about the students' background, and the nature and complexity of the situation that may have led them to pursue further education, to be incorporated into the model. Thus, the doubly robust approach can improve upon the T-learner approach because it can reduce misspecification error either through a correctly specified propensity score model or through correctly specified outcome equations. Another feature of the Doubly Robust approach is that it places a higher weight on observations in the area where the relative count of treatment and control observations is more balanced (i.e. the area of overlap). This may allow better extrapolations of the predicted outcomes within the region of overlap.

The ATE is estimated from three separate estimators, $\hat\mu_1(x)$, $\hat\mu_0(x)$ and $\hat\rho(x)$:
$$\hat\tau_{DR} = \frac{1}{n}\sum_{i=1}^{n}\left[\hat\mu_1(x_i) - \hat\mu_0(x_i) + \frac{t_i\,(y_i - \hat\mu_1(x_i))}{\hat\rho(x_i)} - \frac{(1 - t_i)\,(y_i - \hat\mu_0(x_i))}{1 - \hat\rho(x_i)}\right].$$
Previously, with the T-learner, we were just estimating $\mu_0(x)$ and $\mu_1(x)$. With the DR-learner, we augment $\hat\mu_0(x)$ and $\hat\mu_1(x)$. For example, for the treated observations, we augment $\hat\mu_1(x)$ by multiplying the prediction error by the inverse propensity score. This up-weights those who get treated but who are statistically similar to the control observations. We then apply the same augmentation to $\hat\mu_0(x)$ for the control observations.
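The following sketch shows an AIPW-style doubly robust ATE estimate consistent with the augmentation just described; the propensity clipping is our own guard against near-violations of overlap rather than a step from our pipeline, and X, y, t are again placeholders.

```python
# Doubly robust (AIPW-style) ATE sketch; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

def doubly_robust_ate(X, y, t):
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0]).predict(X)
    rho = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    rho = np.clip(rho, 0.01, 0.99)  # guard the overlap assumption (our addition)
    # Augment each outcome model with inverse-propensity-weighted residuals.
    psi = (mu1 - mu0
           + t * (y - mu1) / rho
           - (1 - t) * (y - mu0) / (1 - rho))
    return psi.mean()
```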
Bayesian Models
The third approach is to use Bayesian models. We follow the general formulation presented by Hahn, Murray and Carvalho (2020), which suggests a predictive model of the following form,
$$y_i = \mu_0(x_i, \hat\rho(x_i)) + \tau(x_i)\, t_i + \varepsilon_i, \qquad (8)$$
where $\mu_0(\cdot)$ is known as the 'prognostic' effect and captures the impact of the control variates, X, on the outcome without the treatment. Then we are left with $\tau(x_i)$, which is the individual treatment effect. The average treatment effect is then simply estimated as
$$\hat\tau = \frac{1}{n}\sum_{i=1}^{n} \tau(x_i). \qquad (9)$$
The advantages of this approach are manifold. From a Bayesian perspective, it allows us to place explicit and separate priors on the prognostic and treatment components of the model. For example, it may be sensible to expect the prognostic component to be flexible and strongly predictive of the outcome, while we may expect the treatment component to be relatively simple and small in magnitude (Hahn, Murray and Carvalho, 2020). Furthermore, this separation of model components and the inclusion of the propensity score minimise bias in the form of regularisation-induced confounding (RIC), which is discussed in more detail in Hahn et al. (2018) and Hahn, Murray and Carvalho (2020). Finally, it is a very natural way to estimate heterogeneous treatment effects, since we can parameterise $\tau(x_i)$ directly as an additive effect on $\mu_0$, rather than having to separately parameterise control and treatment surfaces.
We explore three different model classes for $\mu_0$ and $\tau$: the first uses linear models for both the prognostic and treatment components, the next uses a Gaussian process (GP), and the last uses Bayesian additive regression trees (BART). We detail these models in the following sections.
Hierarchical Linear Model
The first Bayesian model uses linear prognostic and treatment components in Equation (8),
$$\mu_0(x_i) = x_i^\top w_{\mu_0}, \qquad \tau(x_i) = x_i^\top w_\tau + \tau_0.$$
We have used the following hierarchical priors,
$$w_{\mu_0} \sim \mathcal{N}(0, \lambda_{\mu_0}^2 I_d), \qquad w_\tau \sim \mathcal{N}(0, \lambda_\tau^2 I_d), \qquad \lambda_{\mu_0}, \lambda_\tau \sim \mathrm{Uniform}(0, \infty),$$
where $I_d$ is the identity matrix of dimension d, which is the number of control factors. The propensity score, $\rho(x_i)$, is obtained from a logistic regression model. We also tested a gradient boosted classifier (Friedman, 2001) for this using five-fold nested cross-validation. It did not seem to be more performant than the logistic model on held-out log-loss score.
For model inference, we use the No-U-Turn MCMC sampler (Hoffman and Gelman, 2014) in the numpyro software package (Bingham et al., 2019, Phan, Pradhan and Jankowiak, 2019). The choice of a uniform, improper, non-informative prior over the regression weight scales, $\lambda_*$, is motivated by the advice in Gelman (2006), where we desire a non-informative prior that admits large values. We choose a broader prior for the treatment component of the model to minimise bias, as suggested by Hahn, Murray and Carvalho (2020). We first burn in the Markov chain for 30,000 samples, then draw 1,000 samples from the posterior parameters to approximate the ATE,
$$\hat\tau = \frac{1}{S}\sum_{s=1}^{S}\frac{1}{n}\sum_{i=1}^{n}\tau^{(s)}(x_i),$$
where $(s)$ denotes that a sample from the posterior parameters has been used to construct a random realisation of the treatment model component, and $S = 1000$.
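A numpyro sketch of such a hierarchical linear model is given below. The HalfCauchy scale priors are proper stand-ins for the improper uniform priors described above, the remaining priors are our own illustrative choices, and X, t, y are placeholders.

```python
# Sketch of the linear prognostic + treatment model in numpyro; illustrative.
import jax.random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(X, t, y=None):
    d = X.shape[1]
    lam_mu = numpyro.sample("lam_mu", dist.HalfCauchy(10.0))    # prognostic weight scale
    lam_tau = numpyro.sample("lam_tau", dist.HalfCauchy(50.0))  # broader treatment scale
    w_mu = numpyro.sample("w_mu", dist.Normal(0.0, lam_mu).expand([d]))
    w_tau = numpyro.sample("w_tau", dist.Normal(0.0, lam_tau).expand([d]))
    tau0 = numpyro.sample("tau0", dist.Normal(0.0, 10.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(100.0))
    mu = X @ w_mu + (X @ w_tau + tau0) * t                      # Equation (8)
    numpyro.sample("y", dist.Normal(mu, sigma), obs=y)

mcmc = MCMC(NUTS(model), num_warmup=30_000, num_samples=1_000)
# mcmc.run(jax.random.PRNGKey(0), X, t, y)  # posterior draws approximate the ATE
```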
Gaussian Process Regression
Gaussian process (GP) regression can be viewed as a non-linear generalisation of Bayesian linear regression that makes use of the kernel trick (Williams and Rasmussen, 2006, Bishop, 2006). Another way of understanding a GP is that it parameterises a distribution over functions (response surfaces) directly, rather than over model weights, as is the case with Bayesian linear regression.
Given observations $y_i = f(x_i) + \epsilon_i$, a Gaussian process models the covariance of $f(x)$ directly using a kernel function,
$$\mathrm{Cov}[y_i, y_j] = k(x_i, x_j) + \sigma_n^2\,\delta_{ij},$$
where $\delta_{ij}$ is a Kronecker delta, which is one iff $i = j$ and otherwise zero. This formulation also assumes $\mathbb{E}[Y] = \mathbb{E}[f(x)] = 0$ for simplicity; it can be used directly if the outcomes are transformed to be zero mean, or we can model an additional mean function (see Williams and Rasmussen (2006) for details). The Gaussian process can then be written as
$$\mathbf{y} \sim \mathcal{N}\!\left(\mathbf{0},\; K + \sigma_n^2 I_n\right),$$
where $\mathbf{y} = [y_1, \ldots, y_i, \ldots, y_n]^\top$ is the vector of all outcome samples, $K$ is the covariance matrix with elements $K_{ij} = k(x_i, x_j)$, and $I_n$ is the n-dimensional identity matrix.
To implement the functional relationship in Equation (8) in a Gaussian process, we create a kernel function over $\langle x, t \rangle$ pairs,
$$k^*(\langle x_i, t_i \rangle, \langle x_j, t_j \rangle) = \sigma_{\mu_0}^2\, k_{\mu_0}(x_i, x_j) + \left(\sigma_\tau^2\, k_\tau(x_i, x_j) + \tau_0\right) t_i t_j.$$
Here $k_{\mu_0}$ and $k_\tau$ are the prognostic and treatment kernels respectively, $\sigma_{\mu_0}$ and $\sigma_\tau$ allow us to scale the contribution of these kernels to the functional relationships learned, and $\tau_0$ permits a constant treatment effect. This induces the functional relationship we want, $f(\langle x_i, t_i \rangle) = \mu_0(x_i) + \tau(x_i)\, t_i$. We use the same propensity model for $\rho(x_i)$ as in the linear model previously.
We have chosen isotropic Matérn 3/2 kernel functions for $k_{\mu_0}$ and $k_\tau$,
$$k(x_i, x_j) = \left(1 + \frac{\sqrt{3}\,\lVert x_i - x_j \rVert}{l}\right)\exp\!\left(-\frac{\sqrt{3}\,\lVert x_i - x_j \rVert}{l}\right),$$
where $l$ is the length-scale parameter and controls the width of the kernel function. Smaller length scales allow for more high-frequency variation in the resulting function $f(x_i)$. The Matérn kernel is stationary and isotropic, but does not make excessive smoothness assumptions about the functional forms it can learn; this kernel leads to a response surface that is at least once differentiable (Williams and Rasmussen, 2006). A Gaussian process with this kernel can learn non-linear and interaction-style relationships between input features and the outcome. Our composite kernel is not necessarily stationary, however, as we have included a non-stationary term, $t_i t_j$.
A priori, we expect reasonably smooth variation in $\mathbb{E}[y_i \cdot y_j]$, so we choose a long length scale for the prognostic kernel function, $l_{\mu_0} = 10$, and an amplitude $\sigma_{\mu_0}^2 = 1$. We expect an even smoother relationship with a smaller contribution for the treatment, and set the corresponding kernel parameters as $l_\tau = 50$, $\sigma_\tau^2 = 0.1$ and $\tau_0 = 0.001$. These parameters are then optimised using the maximum likelihood type-II procedure outlined in Section 5.4.1 of Williams and Rasmussen (2006).
The ATE is then approximated as
$$\hat\tau = \frac{1}{S}\sum_{s=1}^{S}\frac{1}{n}\sum_{i=1}^{n}\left[f^{(s)}(x_i, 1) - f^{(s)}(x_i, 0)\right],$$
where the $f^{(s)}$ are samples from the Gaussian process posterior predictive distribution 12 with kernel inputs $k^*(\langle x_i, t \rangle, \langle x_i, t \rangle)$, which is equivalent to sampling from the distribution over $\tau(\cdot)$. We use $S = 100$ samples.
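For concreteness, a small numpy sketch of the composite kernel and the resulting posterior mean follows; the hyperparameter defaults mirror the initial settings above, but the rest is a generic textbook GP computation rather than our exact implementation.

```python
# Composite prognostic + treatment kernel and GP posterior mean; illustrative.
import numpy as np
from scipy.spatial.distance import cdist

def matern32(X1, X2, l):
    r = cdist(X1, X2) / l
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def k_star(X1, t1, X2, t2, l_mu=10.0, s2_mu=1.0, l_tau=50.0, s2_tau=0.1, tau0=1e-3):
    # k* = s2_mu k_mu(x, x') + (s2_tau k_tau(x, x') + tau0) t t'
    return s2_mu * matern32(X1, X2, l_mu) + \
        (s2_tau * matern32(X1, X2, l_tau) + tau0) * np.outer(t1, t2)

def gp_posterior_mean(X, t, y, Xs, ts, noise_var=1.0):
    K = k_star(X, t, X, t) + noise_var * np.eye(len(y))
    return k_star(Xs, ts, X, t) @ np.linalg.solve(K, y)  # E[f(x*, t*) | y]

# CATE at the training points: predict under t* = 1 and t* = 0 and difference.
# ones, zeros = np.ones(len(X)), np.zeros(len(X))
# cate = gp_posterior_mean(X, t, y, X, ones) - gp_posterior_mean(X, t, y, X, zeros)
```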
Bayesian Causal Forests
The last Bayesian model we use is the Bayesian causal forest introduced in Hahn, Murray and Carvalho (2020). Broadly, it models the prognostic and treatment components of Equation (8) as Bayesian additive regression trees (BART),
$$\mu_0(\cdot) \sim \mathrm{BART}, \qquad \tau(\cdot) \sim \mathrm{BART}.$$
We use the accelerated BART (XBART) implementation of this algorithm detailed in Krantsevich, He and Hahn (2022). BART (Chipman, George and McCulloch, 2010) has been shown to be an effective and easily applicable non-parametric regression technique that requires few assumptions in order to capture complex relationships that can otherwise confound effect estimation. We follow Hahn, Murray and Carvalho (2020) in our choice of BART priors, under which the prior probability that a tree node at depth $d$ splits is $\alpha(1 + d)^{-\beta}$, with a smaller $\alpha$ and larger $\beta$ for the treatment trees than for the prognostic trees. This choice prefers a simpler treatment effect model, $\tau(x_i)$, that is less likely to branch and more likely to have shallower trees than the prognostic model. Similarly, we use 200 trees for the prognostic model and 50 for the treatment. We take 500 burn-in sweeps, and then 2,000 sweeps to estimate the posterior BART distributions.
The ATE is estimated in the same way as for the linear model in Equation (9), but with the BART posterior used for the treatment effect distribution.
Model selection and model evaluation
For the non-Bayesian models, we separate the evaluation of the model class and the estimation of the ATE and CATE parameters into two procedures. We evaluate the predictive capacity of each model class using nested cross-validation. The procedure is represented in Figure 1. Here, our aim is to compare the predictive performance of three model classes: LASSO, Ridge and Gradient Boosted Regression (GBR). Our second procedure is to estimate the ATE and CATE parameters. This procedure is represented in Figure 2. We use bootstrap sampling (with replacement) to generate uncertainty estimates for the parameters, which we obtain over several draws of the same model class, but with model parameter re-fitting.
Focusing on the first procedure, we apply nested cross-validation to evaluate which model class performs best. In a first step, as Figure 1 shows, we pre-process the full dataset (containing 3,400 variables) to generate a dataset with a smaller set of highly predictive features (containing 91 variables). We apply a supervised machine learning approach with a LASSO model to select our top 91 predictors of the outcome of interest using outcomes measured in 2006. Note that in our later estimations of the treatment effect, the outcome is measured in 2019. We implement this intermediary step in order to reduce the correlation between variables and eliminate redundant information.
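A sketch of this pre-processing step is shown below; the alpha value is an assumption, tuned until roughly 91 features retain non-zero coefficients, and X_full and y_2006 are placeholders.

```python
# LASSO pre-selection of the most predictive features; illustrative.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X_full)  # full set of ~3,400 features
lasso = Lasso(alpha=0.02).fit(X_std, y_2006)    # outcome measured in 2006
keep = np.flatnonzero(lasso.coef_)              # indices of surviving features
X_top = X_full[:, keep]                         # reduced design matrix
```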
We assume that the top 91 features 13 that are most predictive of the outcome in 2006 correlate with the features that would be most predictive of the outcome in 2019. By choosing to apply this pseudo-supervised ML approach to the same outcome variable, but measured at a different time point, we obtain a good indication of the features that are useful for a model to perform well. Improved model performance here will also mean that the selected features are likely to represent the important confounders. We have chosen 2006 to ensure there is no overlap with 2019 outcomes, to avoid overfitting issues with subsequent models. 14

Using the top 91 predictors, we then apply nested cross-validation to evaluate the predictive capacity of each model class (LASSO, Ridge, GBR). First, we split the data into train and test folds with an 80-20 split. Within the 80 percent train fold we perform 5-fold cross-validation in order to train and evaluate the performance of each configuration of hyperparameters. We do this separately for the outcome surface using the treated observations and the outcome surface using the control observations. From this, we select the models with the best mean predictive scores. We then evaluate the predictive performance of the selected model on the holdout test fold.
We repeat this process ten times (10 outer scores) for each model class. This allows us to evaluate the performance based on the mean and standard deviation of these scores. Note that thus far, we have not evaluated any particular configuration of the model, but rather the performance of the model class on random (without replacement) subsets of data. The nested cross-validation procedure protects us against overfitting when reporting predictive performance, as the model selection and validation happen on different data.

13 We were aiming for approximately 100 features, and 91 was the closest we could get the LASSO estimator to select by changing the value of its regularisation strength.
14 We do not compromise predictive performance when we use the selected subset of features as opposed to the full set of features. For example, the predictive performance of a Gradient Boosted Tree model that predicts earnings in 2006, using 5-fold nested cross-validation, is statistically similar between models that use the 91-feature set and the full 3,400-feature set (with Root Mean-Squared Errors (RMSEs) of 484.251 and 482.286, respectively). This is a negligible loss in predictive performance. There is a slightly larger associated loss between the restricted and full feature sets for models predicting earnings in 2019 (RMSEs of 843.548 and 831.931, respectively), but this is still not statistically significant.

Table 3 shows that the GBR is the best performing model class. It yields the highest out-of-sample R-squared and the lowest MSE. This is true for both of the outcome surfaces separately.
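A minimal scikit-learn sketch of the nested procedure follows; the parameter grid is illustrative, a 10-fold outer loop stands in for the repeated 80-20 splits described above, and X, y, t are placeholders.

```python
# Nested cross-validation sketch; fit separately on treated and control units.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

inner = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid={"max_depth": [2, 3, 5], "n_estimators": [100, 300]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),   # inner model selection
)
outer = KFold(n_splits=10, shuffle=True, random_state=1)  # outer evaluation
scores_treated = cross_val_score(inner, X[t == 1], y[t == 1], cv=outer)
scores_control = cross_val_score(inner, X[t == 0], y[t == 0], cv=outer)
# Compare model classes on the mean and standard deviation of the outer scores.
```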
As the DR-learner model relies on the same treatment and control outcome surfaces estimated in the T-learner, we do not repeat Table 3 for the DR results. A further component of the DR model, however, is the propensity score. Here, we implement a regularised logistic regression to predict the likelihood of being treated (obtaining a further degree). Specifically, we use cross-validation to fit the logistic regression and obtain predictions for the original sample. The holdout performance of the fitted logistic regression model yields an area under the ROC curve of 0.71.
Inference via bootstrapping
Once we have selected the best performing model class, we turn to the estimation of the parameters and their associated uncertainty. We use a bootstrapped validation procedure to capture the uncertainty arising from model hyperparameter selection, in addition to the uncertainty from estimating the parameters of a fixed model from noisy, finite data.
A common approach to inference in the causal machine learning literature is to use cross-fitting (Chernozhukov et al., 2018) or sample splitting (Athey and Wager, 2019). These methods ensure that the standard errors of the estimators are not underestimated, because they avoid using the same data point both to select the hyperparameters of the model and to estimate the parameters of the outcome or effect surfaces. The result of using the same data for model selection and effect estimation is that our standard errors would suffer from pre-test bias, since the model may overfit. Sample splitting and cross-fitting are appropriate when the sample size is large. An issue with studies that rely on survey-based data is that sample sizes are often not large enough to use these methods efficiently. For example, there may not be enough data to split the dataset into separate train and test sets for each model such that each of these splits would cover all the common and uncommon values of the X features observed in the full sample. Consequently, the ML models may not find representative functional forms for $\mu_0(x)$ and $\mu_1(x)$. As a result, our estimated treatment effects are likely to have a large degree of uncertainty.
A suitable alternative procedure is to use bootstrapping. Bootstrap resampling allows us to estimate the variation in the point model parameter estimates. In this way, we side-step the need to rely on the assumption of asymptotic normality, and it is more efficient than sample splitting for generating standard errors. In our bootstrapping procedure, we ensure that the standard errors reflect the sources of uncertainty stemming from both the selection of the model and the estimation of the model. As a result, we generate standard errors that avoid any potential pre-test issues.
As a first step we obtain the 91 top predictors from the initial pre-processing of the full dataset, shown in Figure 2. That is, we train a supervised machine learning LASSO model to extract the features that best predict earnings in 2006.
The second step involves training our models, using the 91 top predictors, on a bootstrapped sample, s, to select the best models for $\mu_1^{(s)}(x)$ and $\mu_0^{(s)}(x)$. Within this bootstrap sample, we divide the dataset into five folds and perform cross-validation to select the best model configuration. As in the cross-validation description above, our model configuration is trained on subsets of the data, and then evaluated on holdout samples. We modify the 5-fold cross-validation to ensure that bootstrap-replicated training data do not simultaneously appear in the training and validation sets. We perform this model selection step within the bootstrapping procedure to capture the uncertainty coming from the selection of hyperparameters. If we simply re-estimated the same model with a given set of hyperparameters in each bootstrap sample, then the uncertainty would be only over the model parameters, and not the model choice (e.g. the GBR tree depth).
Third, once we have these predicted outcome surfaces, $\hat\mu_1^{(s)}(x)$ and $\hat\mu_0^{(s)}(x)$, we compute the bootstrapped effect model $\tau^{(s)}(x_i) = \hat\mu_1^{(s)}(x_i) - \hat\mu_0^{(s)}(x_i)$. We can obtain a sample mean, $\bar\tau^{(s)} = \frac{1}{n}\sum_{i=1}^{n} \tau^{(s)}(x_i)$, by averaging over the bootstrapped effect model. We repeat this procedure over $S = 100$ bootstrap samples. This provides an empirical distribution of $\bar\tau$ and $\tau(x_i)$. The grand mean over the bootstrap sample means, $\bar\tau_G = \frac{1}{S}\sum_{s=1}^{S} \bar\tau^{(s)}$, will converge to the sample treatment effect mean. We use $\bar\tau_G$ as an estimate of the ATE, and $\frac{1}{S}\sum_{s=1}^{S} \tau^{(s)}(x_i)$ as an estimate of the individual CATE. The bootstrap resample is the same size as the original sample because the variation of the ATE depends on the size of the sample. Thus, to approximate this variation, we need to use resamples of the same size.
To obtain confidence intervals for the ATE and CATE estimates we use standard empirical bootstrap confidence interval estimators (Efron and Tibshirani, 1986).
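A compact sketch of the bootstrap loop for the T-learner follows; as in the text, models are trained on each resample and effects are predicted at the original sample points, though the estimator settings here are illustrative and X, y, t remain placeholders.

```python
# Bootstrap ATE/CATE sketch; illustrative settings only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
S, n = 100, len(y)
cate_draws = np.empty((S, n))
for s in range(S):
    idx = rng.integers(0, n, size=n)  # resample of the same size as the original
    Xb, yb, tb = X[idx], y[idx], t[idx]
    mu1 = GradientBoostingRegressor().fit(Xb[tb == 1], yb[tb == 1])
    mu0 = GradientBoostingRegressor().fit(Xb[tb == 0], yb[tb == 0])
    cate_draws[s] = mu1.predict(X) - mu0.predict(X)  # effects at original points
ate_draws = cate_draws.mean(axis=1)                  # tau_bar^(s) per resample
ate_hat = ate_draws.mean()                           # grand mean tau_bar_G
ci_95 = np.percentile(ate_draws, [2.5, 97.5])        # empirical 95% interval
```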
For the DR-learner, similarly to the T-learner, we train the $\mu_1(x)$ and $\mu_0(x)$ models across 100 bootstrap samples and weight these outcome surfaces by the propensity score model, $\rho(x)$, which is estimated using logistic regression (as described previously).
Inference for the Bayesian models
The inference process for the Bayesian models is a little different, since the hyper-parameters of the models are either fixed or selected automatically by the learning algorithm (maximum likelihood type-II or MCMC). Bayesian inference procedures tend to afford some protection against over-fitting, since they are parsimonious in choosing posterior distributions over model parameters that vary from their prior distributions, which induces a natural model complexity penalty 15 . As such, we use all the available data to learn the model posterior distributions, which we then sample from to form empirical estimates of the (C)ATE, as outlined in the previous section.
Results
There are clear economic benefits to gaining an additional qualification in later life (at 25 years or older). The effects remain strong up to a decade-and-a-half after course completion. Table 4 displays the estimated effects. The effect sizes from the GBR model are smaller than those of the two linear models. GBR better captures non-linearities. For example, age is likely to exhibit a highly non-linear relationship with earnings in 2019. Those who were aged 46 or above in 2001 will be aged 64 or above in 2019. This means they are more likely to have retired by 2019 compared to those who were aged below 46 in 2001. As a result, we may expect a shift down in earnings at age 46.
Age fixed effects alone are unlikely to capture differential age effects across other variables, such as occupation, gender and earnings. The linear ML models include age fixed effects. However, they do not include interactions between age and other variables, whereas GBR does include them.
To illustrate how GBR adequately captures non-linearities, we re-estimated our results focusing on those who were aged 25-45 in 2001. This is akin to interacting a binary variable (for age 25-45) with every other feature in the model. In Appendix Figure 13, we see that the results across the models are now more similar than when we use the full sample.
The Doubly Robust (DR) models estimate smaller effects compared to the T-learner models. Table 4 displays a gain of approximately $62-69 per week in gross earnings across the DR approaches. The estimated effect sizes are statistically different from zero. The confidence intervals for the DR estimates also exclude the point estimates from the T-learner approach.
One reason the DR approach differs from the T-learner approach is that the former uses additional information from the propensity score (i.e. we estimate machine learning models to gain a better understanding of the treatment assignment process, the students' background, and the nature and complexity of their situation that may have led them to pursue further education). Thus, the doubly robust approach can improve upon the T-learner approach because it can reduce misspecification error either through a correctly specified propensity score model or through correctly specified outcome equations. Another feature of the Doubly Robust approach is that it places a higher weight on observations in the area where the relative count of treatment and control observations is more balanced (i.e. the area of overlap). A benefit of this is that it can also provide better extrapolations of the predicted outcomes.
The Bayesian models estimate effects of similar size to the DR models for the most part. However, they tend to have more uncertainty associated with their estimates. They all remain significant, with the 95% confidence intervals remaining above $0. The hierarchical linear model and the Gaussian process both estimate a gain of approximately $61-$63 per week in gross earnings, with the Gaussian process being more certain in its estimate. Interestingly, the Gaussian process prefers a much smoother and smaller treatment effect component compared to its prognostic component: the treatment kernel length scale is long, and the kernel has a small amplitude and offset ($l_\tau = 243$, $\sigma_\tau^2 = 0.0517^2$, and $\tau_0 = 0.0312^2$), whereas the prognostic kernel parameters stay relatively close to their initial settings ($l_{\mu_0} = 16$ and $\sigma_{\mu_0}^2 = 1.42^2$). The Bayesian causal forest estimates a slightly higher gain of $84.50 per week in gross earnings, which is more in line with the GBR T-learner. This suggests that the tree ensemble methods may be able to capture non-linear relationships more easily than the other models.
Proportionate changes in earnings can be measured by taking the log of the earnings measures. In Appendix Figure 14, we see that the proportionate change in earnings is large, at 50 percent. This is likely to be driven by people entering the labour market as a result of the new qualification. We find that a new qualification increases the likelihood of employment by approximately 8 percentage points (see Figure 11).
As previously mentioned, the ML models estimate smaller returns than those estimated by DD-FE or cross-sectional models (OLS with and without controls) in which features have been selected based on theory or previous empirical findings. For example, the 'OLS Baseline model' uses the features from the models estimated in Chesters (2015). The DD-FE model eliminates all selection effects that are fixed over time. Figure 10 displays the estimated returns from the six different approaches.
A potential reason for the smaller results estimated in the ML models is that the additional features included, as well as the non-linear specifications of the features, more effectively account for selection into treatment. The smaller results suggest individuals positively select into further study i.e. the characteristics that lead one to complete further study are positively correlated to future earnings. Once we control for this upward selection bias, we thus estimate smaller returns to further education.
The smaller estimated results relative to the DD-FE model are likely to stem from the inclusion of key time-varying variables such as the 'change in total gross income' in the ML models, as well as other non-linear specifications. For example, the ML models allow the treatment effects to vary in a highly flexible fashion across different parts of the feature distributions rather than making linear extrapolations.
This points to a benefit of using ML models, compared to conventional models, because they can more effectively identify confounders. We show evidence of the types of confounders missed in conventional models in Table 2, as well as the direction of the bias stemming from their omission.
In addition, we show evidence that models which allow for more flexible functional-form specifications lead to differences in the ATE. Within our ML models, the GBR tree ensemble tended to perform better (in terms of the nested CV results) compared to the linear models. It yielded a slightly smaller ATE compared to the LASSO and Ridge results, for example, and this was also consistent with the results from the Bayesian Causal Forest.
Sub-group analysis
Qualification advancements may not benefit individuals in the same way. In this section we analyse if there is heterogeneity in the treatment impacts. We use a data-driven approach to select the sub-groups.
Specifically, we identify the important variables for which we expect to see the largest changes in the treatment effects. This involves using a Permutation Importance procedure.
Permutation importance feature selection method
We use a permutation importance selection method (Breiman, 2001, Molnar, 2020) to evaluate the relative importance of individual features. Our aim here is to understand where the heterogeneous treatment effects are most pronounced. In other words, we aim to identify the sub-groups for which the treatment effects differ most significantly. In selecting the important features, our objective is to understand how to partition the data by the treatment effects, as opposed to predicting the outcomes themselves.
The permutation importance procedure involves testing the performance of a model after permuting the order of the samples of each individual feature, thereby keeping the underlying distribution of that feature intact but breaking the predictive relationship learned by the model with that feature. The model performance we are interested in, as previously mentioned, is that of the model that maps the features to the individual treatment effects.
Following the approach described above, we compute the individual treatment effects. Note that we train the model on the bootstrapped sample but estimate the individual treatment effects using the feature values for individuals from the original sample. Thus, for every individual we have a distribution of values of their individual treatment effects.
After obtaining the individual treatment effects, we train another model that maps the features to the individual treatment effects. We use cross-validation to select our hyperparameters and obtain the optimal model. Using the original data, we take a single column among the features and permute the order of the data and calculate a new set of individual treatment effects. We compare the new and original individual treatment effects (based on the permuted data and those from the non-permuted data) and calculate the Mean Squared Errors (MSE).
We repeat this for all the features, permuting them individually and evaluating how they change the prediction of the individual treatment effect target. Features that yield the largest MSEs are likely to be more important than those features with lower MSEs since permuting those features breaks the most informative predictive relationships.
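A minimal sketch of this permutation loop is given below; cate_model stands for the fitted model mapping features to individual treatment effects, and all names are illustrative.

```python
# Permutation importance sketch for the CATE-mapping model; illustrative.
import numpy as np

def permutation_importance_mse(cate_model, X, rng):
    base = cate_model.predict(X)  # individual treatment effects, unpermuted
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's learned relationship
        scores[j] = np.mean((cate_model.predict(Xp) - base) ** 2)
    return scores  # larger MSE => more important feature
```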
We then repeat the above steps across all the bootstrap samples. Note that a different bootstrap sample will change the value of the individual treatment effects, since we train different outcome surfaces $\mu_1^{(s)}(x)$ and $\mu_0^{(s)}(x)$ in each sample. We embed the permutation importance selection method in a bootstrapping procedure in order to capture hyperparameter uncertainty. For example, a different 'tree depth' could be chosen between different bootstrap samples. This would affect the type of nonlinear/interaction relationships captured by the models, which in turn would affect which features turn out to be important.
Finally, we obtain an average MSE for each feature, averaged across all bootstrap samples. This average value allows us to rank the features by their importance. Again, those with the largest average MSE values are the most important. We can also evaluate the uncertainty of this estimate, since we obtain a distribution of MSE values across the different bootstrap samples. Figure 8 displays the top ten features (based on the permutation importance procedure described above) and a residual category for all other features. The features that are most important are weekly gross wages on the main job and income- or wealth-related variables. Together, this class of income/wealth variables accounts for 40% of the importance of all variables. We focus on these selected features since our nested CV approach pointed to the better predictive performance of the GBR model over the linear models.
Other important features include those related to employment, including occupational status, employment expectations, and employment history. The demographic background of the individual, namely their age, is also important. The results from the T-learner model (using GBR) tell a similar story to the results from the permutation importance procedure using the DR model. Overall, as Appendix Figure 15 shows, income- and employment-related variables are the most salient in explaining treatment effect heterogeneity.
Continuing to focus on the results from the Doubly Robust model, Figure 12 shows that there is heterogeneity in the treatment impacts. We have identified the features that were considered most important according to the permutation procedure. For each feature, we divide the sample into two groups. For continuous variables, we take the median value and divide the sample into those who are above and below this median value.
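The median-split comparison can be sketched as follows, assuming a matrix ite_draws of bootstrap ITE estimates (one row per bootstrap draw, one column per individual) and the baseline values of the feature being examined; both names are placeholders, and the percentile interval is one simple way to summarise the bootstrap spread.

```python
# Median-split sub-group comparison: average the estimated ITEs within the
# below-median and above-median halves of a continuous feature, using the
# bootstrap draws to form simple percentile intervals. Inputs are placeholders.
import numpy as np

def median_split_ate(feature_values, ite_draws):
    below = feature_values <= np.median(feature_values)
    ate_below = ite_draws[:, below].mean(axis=1)    # one sub-group ATE per bootstrap draw
    ate_above = ite_draws[:, ~below].mean(axis=1)
    return {
        "below_median": (ate_below.mean(), np.percentile(ate_below, [2.5, 97.5])),
        "above_median": (ate_above.mean(), np.percentile(ate_above, [2.5, 97.5])),
    }
```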
Weekly personal income has a large impact on the effect size. Those with below-median income in 2001 derive more benefits than those with above-median income, possibly because high income earners hit an earnings ceiling. Younger people in 2001 also derive greater returns, as they may have had more time to accumulate them. This result aligns with findings from previous studies (Polidano and Ryan, 2016; Dorsett, Lui and Weale, 2016; Perales and Chesters, 2017). Weekly personal income and age are likely to be highly correlated, with older individuals tending to earn a higher personal income. We cannot say which variable is the main driver of the heterogeneous treatment effects, and there may also be interaction effects between them.
We also investigate whether there are heterogeneous treatment effects according to commonly used variables in Figure 12. Females reap slightly higher returns than males, although this difference is not statistically significant. Similar treatment effects apply to those with and without resident children, although the effect sizes widen in favour of parents with older children in the household.
Acquiring an additional qualification may increase earnings through a number of potential mechanisms. We find evidence that, in Figure 11 for example, it increases the chance that individuals move from being unemployed or out of the labour force to being employed. The increase in employment is approximately 8 percentage points and is statistically significant. We also find evidence pointing to workers switching occupations or industries. This suggests that further education in later life can support the economic goals of a larger workforce as well as a more mobile one.
Sensitivity Analysis
For sensitivity analysis, we repeated the T-learner estimations using feature input values taken from individuals two years before they began study. Thus, we examine whether our main results are sensitive to changes in the mapping equations for the treatment and control outcome equations when features are measured closer to the event of study, compared to taking input values in 2001. We also measured outcomes four years after study began. This means that the timing between when the feature input values are measured, when a further degree commenced and was completed, and when the outcomes are measured are all closer together. This necessarily leads us to estimate the short-term returns of obtaining a further degree.
Our results from the sensitivity analysis are similar to those of the main analysis. Specifically, the gains in gross earnings from a further degree in the sensitivity analysis are: $74 per week (Ridge), $117 per week (LASSO) and $93 per week (GBR). The key take-away from these results is that the average treatment effects in the main analysis are not sensitive to whether our features use 2001 as the input year or the two years before study.
Furthermore, the main results are not sensitive to when outcomes are measured i.e. the returns measured four years after the start of a study spell are comparable to the returns averaged over 2 to 17 years after study completion. This may point to the fact that the returns to further study are accrued in the immediate years following the completion of the degree. It also suggests the returns may not atrophy over time, especially since the majority of people who did complete a degree in the main analysis did so in the earlier years of the survey ( Figure 5). Unfortunately, our sample sizes are not sufficient to explore heterogeneity in treatment effects by the year of completion.
The importance of employment-related features such as earnings (individual and household), wages, and hours worked is reiterated in the sensitivity analysis using the panel structure of the data. Namely, when we define our outcomes 4 years after the start of a study spell and define features two years before study started, we also see results similar to the main results. However, in Figure 16, it is clear that the 'trend' or 'growth' in the values of features such as individual earnings, hours worked, and household income are also important. This finding of dynamic selection is echoed in the literature (Jacobson, LaLonde and Sullivan, 2005; Dynarski, Jacob and Kreisman, 2016, 2018).
In Figure 16, the feature mental health is also selected. This result may reflect the fact that the timing of the measurement of features, treatment, and outcomes are all closer together compared to the main results. This means that mental health is an important factor in explaining the heterogeneity in relatively 'short-term' treatment effects.
Conclusions
Using a machine learning based methodology and data from the rich and representative Household, Income and Labour Dynamics in Australia (HILDA) survey, we have shown that completing an additional degree later in life can add $60-80 (AUD, 2019) per week to an individual's gross earnings. This represents roughly 7-8 percent of the weekly gross earnings of the average worker in Australia. Our machine learning methodology has also uncovered sources of heterogeneity in this effect.
Our methodology has allowed us to exploit the full set of background information on individuals from the HILDA survey, beginning with more than 3,400 variables, to control our analysis. We find that our automated feature selection method selects a set of controls/features that includes those that have theoretical foundations and/or align with those chosen in past empirical studies. However, it also selects features that have traditionally been overlooked. These include variables such as household debt, wealth, housing, and geographic mobility variables. Other important predictors include the ages of both resident and non-resident children: non-resident children aged 15 or above matter, and resident children aged 0-4 are important.
Qualification advancements do not benefit Australian workers in the same way: those with lower weekly earnings appear to benefit more from later-life study than those with higher earnings. One possible reason is that ceiling effects limit the potential returns from additional education. We also find that younger Australians (less than 45 years of age) benefit more than their older counterparts. Again, a ceiling effect phenomenon may apply since age is highly correlated to weekly earnings.
Acquiring an additional qualification may increase earnings through a number of potential mechanisms. We find evidence that it increases the chance that individuals move from being unemployed or out of the labour force to being employed. We also find evidence pointing to workers switching occupations or industries. This suggests that further education in later life can support the economic goals of a larger workforce as well as a more mobile one.

Tables and Figures

Uncertainty from model parameters

Weekly earnings from all jobs in 2019 (w19 earning) records the weekly earnings from all jobs for the individual in 2019.

Working hours in 2019 (w19 wkhr) records the total number of hours the individual works in all jobs in a week on average. Working hours are set to 0 for those not working.
• Masters
• Other

Re-education completion based on both highest attainment and detailed qualifications (redufl) records whether the individual has completed re-education based on both the variables reduhl and redudl. When either of these variables has a value of 1, this variable will take on the value of 1.
B.2.3 Input Variables
For each variable, missing values (if any) have been set to zero and a new binary variable has been generated to indicate the observations that are missing.
Demographics
Female (p fem) records whether the individual is female.
Remoteness records whether the individual lives in:
• A major city (p urdg1)
• An inner region (p urdg2)
• Outer and remote areas or migratory in nature (p urdg3)
Marital status in 2001 records whether in 2001 the individual was:
• Married (p mar1)
• De facto (p mar2)
• Single and never been married (p mar6)
Parental Status
Number of dependents in 2001 (p noch) records the number of dependent children the individual had in 2001.
Physical Health
Severity of health conditions in 2001 records whether the individual had:
• No health conditions (p ddeg1)
• A mild condition (p ddeg2)
• A moderate condition (p ddeg3)
• A severe condition (p ddeg4)
Labour Force Variables
Labour market status in 2001 records whether the individual was:
• Employed (p lfs1)
• Unemployed (p lfs2)
• Not in the labour market (p lfs3)

Extent of working hour match with preferences in 2001 records whether the match between the individual's total weekly working hours across all jobs and their preferred number of working hours made them:
• Not working (p whp1)
• Underemployed by at least 4 hours a week (p whp2)
• Roughly Matched: Preferred and Actual Hours Worked differ by less than 4 hours a week (p whp3)

• Secondary (p medu4)
• Post-secondary, non-university (p medu5)
• Post-secondary, university (p medu6)

Father undertaken post-school qualification through employer or non-tertiary means (p fpsm) records whether the individual's father had undertaken his highest qualification through employers or through channels other than tertiary education, as reported in 2005.
Mother undertaken post-school qualification through employer or non-tertiary means (p mpsm) records whether the individual's mother had undertaken her highest qualification through employers or through channels other than tertiary education, as reported in 2005.
Father's Employment at age 14 records whether the individual's father was working when they were aged 14, in the following categories:
• Father deceased or not living with respondent (p femp1)
• Father not employed (p femp2)
• Father employed (p femp3)

Mother's Employment at age 14 (p memp) records whether the individual's mother was working when they were aged 14, in the following categories:
• Mother deceased or not living with respondent (p memp1)
• Mother not employed (p memp2)
• Mother employed (p memp3)

Father substantially unemployed growing up records whether the individual's father had been unemployed for 6 months or more when they were growing up, in the following categories:
• Father not living with respondent (p fsue1)
• Father not substantially unemployed (p fsue2)
• Father substantially unemployed (p fsue3)

Father's Occupation records whether at age 14 the individual's father was last known working as:
• Father not in household (p focc1)

Attitude towards having job in 2001 (p jbwk) records the average score of attitude towards having a job reported by the individual in 2001 across two items (p jadnm and p jahpj), on a scale ranging from 1 to 7, with a higher score indicating a more favourable attitude towards having a job.
Enjoy job without needing money in 2001 (p jadnm) records the extent to which the individual agreed with the statement that they would enjoy having a job even if they did not need the money, as reported in 2001, on a scale ranging from 1 to 7, with a higher score indicating more agreement.

Completed re-education after 2017 based on detailed qualifications (redllt) records whether the individual has completed a further qualification since last interviewed, between 2018 and 2019.

Year of re-education completion records whether the individual completed their re-education in:
• 2008 (p rcom7)
• 2009 (p rcom8)
• 2010 (p rcom9)
• 2011 (p rcom10)
• 2012 (p rcom11)
• 2013 (p rcom12)
• 2014 (p rcom13)
• 2015 (p rcom14)
• 2016 (p rcom15)
• 2017 (p rcom16)
• 2018 (p rcom17)
• 2019 (p rcom18)

Locus of control in 2003 (p cotrl) records the transformed composite score for the locus of control items reported by the individual in 2003, the first year in HILDA for which this information is available. The transformation results in a variable that ranges between 7 and 49. Locus of control measures the degree to which individuals attribute outcomes to internal versus external factors, or the extent to which their welfare is in their own control compared to external circumstances. A higher score indicates having a more external locus of control, which is considered a favourable personality trait.

Treated sample: For any person in HILDA who ever reported starting a degree (determined by taking a person who switches from reporting "not currently studying" in one wave to "currently studying" in the next wave) and/or completing a degree, we select their first study event as a treatment observation if it satisfies the following conditions.
They are: (1) they were at least 21 years old in the starting year of study, (2) they were present in the two years before the start of study (in order to have information on their feature values), (3) they were not currently studying in either of the two years before the starting year of further study (to avoid reverse-causation issues), (4) they completed their further degree, and (5) they were present in the survey and had a non-missing outcome 4 years after the start of study.
If a study event does not satisfy these conditions, we look to the next study event that satisfies these conditions or (if unavailable) delete the person from our sample completely.
Conditions (3) and (5) together mean that we analyse a sample of individuals who started their degrees anytime between 2003 and 2015.
In our treated group, 1,814 individuals started and completed a further educational degree.
Control sample: These are those who had never started re-education throughout HILDA. From these control observations, we assign a time stamp to them for the year the control person theoretically started to study. We do this for every year from 2003 to 2019. This implies that never re-educated individuals can be duplicated and used multiple times. For example, if a control individual is observed throughout the years 2001 to 2016, then they will be a control for the separate treated individuals that started re-education in 2003, in 2004, 2005 and up to 2017 i.e. the control individual will be duplicated 15 times.
There are 60,945 control observations i.e. individuals who never completed a further degree. However, as described above, these are non-unique observations in the sense that a control individual can be duplicated up to 15 times.
Sexual orientation (p1 lgtb) records that the individual's sexual orientation is not heterosexual. The variable is constructed from the Sexual Identity question that is only asked in waves 12 and 16. We combine answers from both waves to create a binary indicator for the individual ever reporting a sexual identity that is not heterosexual, treating sexual orientation as a fixed trait for a given individual.
Parental Status
Number of dependents (p1 totalkids) records the number of children under 15 the individual had in the household in the year prior to re-education start.
Having children (p1 anykid) records whether the individual had any dependents in the household in the year prior to re-education start.
Children under 5 (p1 kidu5) records whether the individual had children under 5 in the household in the year prior to re-education start.
Age of youngest (p1 rcyng) records the age of the youngest child living with the respondent in the year prior to re-education start (including adult children).
Physical Health
Severity of health conditions (p1 disdeg) records whether, in the year prior to re-education start, the individual had:
• No health conditions (value=0)
• A mild condition (value=1)
• A moderate condition (value=2)
• A severe condition (value=3)
Labour Force Variables
Labour market status (p1 lfs) records whether the individual was:
• Employed (value=1)
• Unemployed (value=2)
• Not in the labour market (value=3)

Extent of working hour match with preferences (p1 whpref) records whether, in the year prior to re-education start, the match between the individual's total weekly working hours across all jobs and their preferred number of working hours made them:
• Underemployed by at least 4 hours a week (value=1)
• Roughly Matched: Preferred and Actual Hours Worked differ by less than 4 hours a week (value=2)
• Overemployed by at least 4 hours a week (value=3)

Employee type (p1 emptype) records whether, in the year prior to re-education start, the individual was:
• An employee (value=1)
• An employee of own business (value=2)
• Self Employed (value=3)
• Unpaid family worker (value=4)

Contract type (p1 contype) records whether, in the year prior to re-education start, the individual was:
• On a fixed term contract (value=1)
• On a casual contract (value=2)
• On a permanent contract (value=3)
• On other types of contracts (value=4)

Occupation (p1 occ) records whether, in the year prior to re-education start, the individual was working as:
• Armed forces (value=0)

On Ab/Austudy (p1 onsdy) records whether the individual was on Ab/Austudy in the year prior to starting re-education.

On Bereavement Allowance (p1 onba) records whether the individual was on Bereavement Allowance in the year prior to starting re-education.

On Sickness Allowance/Special Benefits (p1 onsab) records whether the individual was on Sickness Allowance/Special Benefits in the year prior to starting re-education.

On Partner Allowance (p1 onpa) records whether the individual was on Partner Allowance in the year prior to starting re-education.

On Parenting Payments (p1 onpp) records whether the individual was on Parenting Payments in the year prior to starting re-education.
Housing situation
Mortgage balance (p1 hsmgowe) records the amount still owing on the mortgage that the individual had in the year prior to re-education start. For those without a mortgage or who are not home owners, the mortgage balance is set to 0.
Non home owners (p1 renter ) records whether the individual was renting or not living in their own homes in the year prior to re-education start.
Prior Year Outcomes
Weekly income from all jobs (p1 earning) records the weekly earnings from all jobs for the individual in the year prior to the individual starting their re-education.
Weekly income from main job (p1 wscmei ) records the weekly earnings from the main job for the individual in the year prior to the individual starting their re-education.
Weekly working hours (p1 wkhr) records the total number of hours the individual works in all jobs in a week on average in the year prior to the individual starting their re-education. Working hours are set to 0 for those not working.
Real hourly wage (p1 rlwage) records the real hourly wage of the individual in the year prior to the individual starting their re-education, indexed at 2012 price levels. Hourly wages are set to 0 for those not working and set to missing for those reporting working more than 100 hours a week. All wages have then been adjusted up by $1 to preserve sample size for the logarithm transformation.
Mental health (p1 ghmh) records the transformed mental health score from the aggregation of the mental health items of the SF-36 Health Survey, as reported by the individual in the year prior to the individual starting their re-education. It ranges from 0 to 100, with higher scores indicating better mental health.
Life satisfaction (p1 losat) records the life satisfaction score reported by the individual in the year prior to the individual starting their re-education. It ranges from 0 to 10, with higher scores indicating higher life satisfaction.
Delta variables
For all the variables described in the preceding section titled Characteristics in the Year
• Creative arts
• Food, hospitality and personal services
• Other
Study duration (fsddur ) records the total number of waves an individual had spent studying from the start of their first study event counted in our sample.
Starting Study intensity (csftsd ) records whether the individual was studying full time or not when they started their re-education.
Finishing Study intensity (fsftsd ) records whether the individual was studying full time or not when they completed their re-education.
Other variables
Number of waves in HILDA (numwave) records the number of waves in which the respondent has submitted a valid response for the HILDA survey.
C.2.4 Variables that are not included in the model
The unique person identifier (xwaveid)
Wave started re-education (icswave)
Wave completed re-education (ifswave)
Control group indicator (control )
Started but did not complete re-education between 2003-2017 (ncomp)

Starting year of re-education imputed (impute) is a binary indicator for individuals for whom we observe their re-education completion but who never reported starting re-education, so we had to impute a starting wave for these individuals.
Started re-education in wave 2018/19 (latestart) is an indicator for those individuals who had started their re-education in 2018 or 2019.
Interleukin-17A: The Key Cytokine in Neurodegenerative Diseases
Neurodegenerative diseases are characterized by the loss of neurons and/or myelin sheath, which deteriorate over time and cause dysfunction. Interleukin 17A is the signature cytokine of a subset of CD4+ helper T cells known as Th17 cells, and the IL-17 cytokine family contains six cytokines and five receptors. Recently, several studies have suggested a pivotal role for the interleukin-17A (IL-17A) cytokine family in human inflammatory or autoimmune diseases and neurodegenerative diseases, including psoriasis, rheumatoid arthritis (RA), Alzheimer’s disease (AD), Parkinson’s disease (PD), multiple sclerosis (MS), amyotrophic lateral sclerosis (ALS), and glaucoma. Studies in recent years have shown that the mechanism of action of IL-17A is more subtle than simply causing inflammation. Although the specific mechanism of IL-17A in neurodegenerative diseases is still controversial, it is generally accepted now that IL-17A causes diseases by activating glial cells. In this review article, we will focus on the function of IL-17A, in particular the proposed roles of IL-17A, in the pathogenesis of neurodegenerative diseases.
Neurodegenerative diseases are characterized by progressive loss of selectively vulnerable populations of neurons, which progressively worsens over time and eventually leads to dysfunction (Hammond et al., 2019). There is a convincing body of evidence that protein aggregation, neuronal loss, and immune pathway dysregulation are common features of neurodegeneration (Hammond et al., 2019). These diseases include AD, PD, dementia with Lewy Bodies (DLB), multiple sclerosis (MS), and glaucoma. Glaucoma is characterized by visual field loss and progressive damage to the optic nerve axon and retinal ganglion cells (RGCs; Tian et al., 2015). The elevated intraocular pressure (IOP) is thought to be a major risk factor (Quigley and Broman, 2006;Wei and Cho, 2019). Studies have shown that IL-17A is involved in the pathogenesis of CNS neurodegenerative diseases. Levels of IL-17A in cerebrospinal fluid (CSF) and plasma are significantly increased in patients with MS, AD, and PD, and the expression levels are related to the severity and progress of diseases (Gu et al., 2013;Zhang et al., 2013;Kostic et al., 2014). Although the function of IL-17A in CNS neurodegenerative diseases is less understood and remains somewhat controversial, IL-17A is described to induce the occurrence and development of diseases by activating glial cells (especially microglia; Gu et al., 2013;Kolbinger et al., 2016). Therefore, for this review article, we mainly focused on recent studies on IL-17A and its role in neurodegenerative diseases.
IL-17A AND IL-17 FAMILY CYTOKINES
There are six cytokines and five receptors in the IL-17 family (Gaffen, 2011b). The cytokines include IL-17A to IL-17F, and the receptors include IL-17RA to IL-17RE. These cytokines are dimeric molecules, and they contain 163-202 amino acids with molecular weights ranging from 23 to 36 kDa (Gaffen, 2011b). The structures of these cytokines are similar to those of platelet-derived growth factor (PDGF) and nerve growth factor (NGF), which involve a special cystine knot fold architecture (Hymowitz et al., 2001). In the IL-17 family, IL-17A is the most studied cytokine, and it has 57% sequence homology with open reading frame 13 (ORF13) of Herpesvirus saimiri, a T cell-tropic herpesvirus that causes a lymphoproliferative syndrome (Gaffen, 2011b). Although they have a similar ORF13 sequence, the 3′ UTR of IL-17A has an adenylate-uridylate-rich (AU-rich) instability sequence, a common characteristic of growth factor and cytokine genes, and IL-17A can induce cytokine secretion in certain cells (Gaffen, 2011b). Thus, IL-17A is considered a cytokine (McGeachy et al., 2019). It has been shown that IL-17A exerts functions in the processes of immune inflammation, neovascularization, and tumor development (Zhu et al., 2016; Kuwabara et al., 2017). IL-17B through IL-17F were discovered when researchers screened for genes homologous to IL-17A. IL-17B has been reported to play an important role in cancer and inflammation. The proliferation and migration of gastric carcinoma cells are facilitated by IL-17B through activating mesenchymal stem cells in vitro (Bie et al., 2017b). Resistance to paclitaxel treatment in breast cancer is promoted by IL-17B through activation of the extracellular regulated protein kinases 1/2 (ERK1/2) pathway (Laprevotte et al., 2017). IL-17B exerts a dual function in the development and progression of inflammation. In mucosal inflammation, IL-17B plays an anti-inflammatory role (Reynolds et al., 2015). In rat models with indomethacin-induced intestinal inflammation, however, IL-17RB levels are increased, and intraperitoneal injection of IL-17B promotes the migration of neutrophils in normal mice, indicating that IL-17B has a pro-inflammatory function (Shi et al., 2000; Bie et al., 2017a). The source of IL-17C differs from that of IL-17A, as IL-17C is produced by different cells, such as epithelial cells (Ramirez-Carrozzi et al., 2011). A recent study has shown that peripheral nerve neurons are protected by IL-17C, which acts as a neurotrophic cytokine, during Herpes simplex virus reactivation. Also, through the expression of antimicrobial peptides, chemokines, and pro-inflammatory cytokines, epithelial inflammatory responses are stimulated by IL-17C. Although IL-17C plays a proinflammatory role in a skin inflammation model induced by imiquimod, it has a protective function in colitis elicited by dextran sodium sulfate (Ramirez-Carrozzi et al., 2011). IL-17D is preferentially expressed in some tissues, such as adipose tissue and skeletal muscle, as well as some organs, including the lung, heart, and pancreas (Starnes et al., 2002). IL-17D has effects during inflammation, tumors, and viral infection. Stimulation of endothelial cells with IL-17D induces a classic pro-inflammatory cytokine response, including granulocyte-macrophage colony-stimulating factor (GM-CSF), IL-6, and IL-8, and the increased expression of IL-8 is nuclear factor κB (NF-κB)-dependent (Starnes et al., 2002).
Compared to wild-type animals, IL-17D (−/−) mice showed a higher incidence of cancer and exacerbated viral infections, indicating that the expression of IL-17D after viral infection and tumors is essential for the protection of the host (Saddawi-Konefka et al., 2016). Moreover, IL-17D plays a role in tumors and virus surveillance mediated by NK-cells (Saddawi-Konefka et al., 2016). There are some differences between IL-17E (now called IL-25) and other family members of IL-17. IL-25 is associated with type 2 immune response marked by increased serum Immunoglobin E (IgE), IgG, and IgA levels as well as pathological changes in the gastrointestinal tract and lungs. In the digestive tract, IL-25 limits chronic inflammation and regulates type 2 immune response (Owyang et al., 2006). IL-25 induces IL-4, IL-5, and IL-13 gene expression (Fort et al., 2001). An early study indicated that IL-25 exerts an opposite function in the pathogenesis of organ-specific autoimmunity compared to IL-17A (Kleinschek et al., 2007). IL-17F and IL-17A are similar in terms of function and source. These two cytokines are not only the result of gene replication, as they are located next to each other on the same chromosome, but are also co-produced by Th17 cells (Waisman et al., 2015). Similar to IL-17A, IL-17F contributes to inflammatory responses and barrier surface protection (Puel et al., 2011).
RECEPTORS AND SIGNALING PATHWAYS OF IL-17
There are five receptors (IL-17RA to IL-17RE) in the IL-17 receptor family, and these receptors are composed of two chains (Waisman et al., 2015). Among these receptors, IL-17A and IL-17F bind to the same receptor, which is a heterodimer composed of IL-17RA and IL-17RC (Ely et al., 2009; Hu Y. et al., 2010). The heterodimeric receptor composed of IL-17RA and IL-17RC is expressed in CNS resident cells, such as microglia and astrocytes, as well as CNS endothelial cells (Kebir et al., 2007; Das Sarma et al., 2009). However, whether the IL-17 receptor is expressed on neurons remains controversial. Early studies have shown that rat dorsal root ganglion neurons and mouse neural stem cells express IL-17 receptors (Segond von Banchet et al., 2013). Recently, human PD induced pluripotent stem cell (iPSC)-derived midbrain neurons (MBNs) have been described to express IL-17 receptors (Kawanokuchi et al., 2008; Sommer et al., 2018).
The IL-17 receptor family has one thing in common, namely, a shared cytoplasmic motif termed "SEFIR" (SEF/IL-17 receptor; Novatchkova et al., 2003). After IL-17 family cytokines engage the IL-17R complex, Act1 (an adaptor protein) is recruited to the SEFIR domain of the receptor complex (Qian et al., 2007; Liu et al., 2011; Waisman et al., 2015). The intracellular SEFIR domain interacts with a corresponding SEFIR motif on the Act1 adaptor. Act1 then rapidly ubiquitinates another E3 ubiquitin ligase, namely TNF-receptor associated factor 6 (TRAF6; Schwandner et al., 2000; Qian et al., 2007). Ultimately, IL-17 signaling triggers the activation of the canonical NF-κB cascade response (Qian et al., 2007). Collectively, transcriptional induction of target genes is triggered by these factors (McGeachy et al., 2019; Figure 1). When the NF-κB cascade response is activated, IL-17-NF-κB signaling induces several positive and negative feedback circuits to control the related physiological functions. NF-κB upregulates the expression of B cell lymphoma 3-encoded protein (Bcl3), which, in turn, facilitates the expression of multiple IL-17-NF-κB-driven anti-microbial and proinflammatory genes (Ruddy et al., 2004; Karlsen et al., 2010). However, IL-17-NF-κB signaling also induces several negative feedback circuits that restrain the activation of NF-κB, such as deubiquitination (Garg et al., 2013; Cruz et al., 2017). Among the above signaling pathways, Act1 is an essential activator. The absence of the Act1 gene has been shown to cause complete failure of the cellular response to IL-17 (Qian et al., 2007; Liu et al., 2011).
FUNCTION OF IL-17A
Induction of the expression of chemokines, such as chemokine (C-X-C motif) ligand 1 (CXCL1), CXCL2, and CXCL8, is an important function of IL-17A. These chemokines can attract myeloid cells to injured or infected tissues (Onishi and Gaffen, 2010). IL-17A also induces the expression of IL-6 and granulocyte colony-stimulating factor (G-CSF), which promotes myeloid-driven innate inflammation (Gaffen et al., 2014). When encountering acute microbial invasion, IL-17A induces responses to protect the host. Overwhelming data suggest that IL-17A has a specific function in the prevention of Candida albicans. Antifungal immunity is regulated by IL-17A through upregulating antimicrobial peptides (e.g., defensins) and proinflammatory cytokines (e.g., CXCL1 and CXCL5; Conti and Gaffen, 2015;Drummond and Lionakis, 2019). The increased expression of proinflammatory cytokines and antimicrobial peptides has a synergistic effect on limiting fungal overgrowth (Conti and Gaffen, 2015;Drummond and Lionakis, 2019).
In injured psoriatic skin tissue, dysregulated IL-17 signaling promotes pathogenic inflammation. A phase 2 clinical trial has shown that inhibitory treatment of IL-17A is effective, indicating the pathogenic role of IL-17A in mediating important inflammatory pathways in psoriasis (Chiricozzi and Krueger, 2013). In ankylosing spondylitis (AS), another autoimmune disease, IL-17A has been shown to contribute to pathogenic inflammation. Two double-blinded phase-3 trials have reported that the use of secukinumab (an anti-IL-17A monoclonal antibody) to treat AS is effective (Baeten et al., 2015). However, researchers have failed to identify evidence of meaningful clinical efficacy with brodalumab (a human anti-IL-17A monoclonal antibody) treatment in rheumatoid arthritis (RA) at least when compared to treatment with methotrexate (Pavelka et al., 2015). Taken together, these data indicate that further studies are required to clearly understand the role of IL-17A in the pathogenesis of autoimmune diseases.
In healthy skin, commensal microflora induces the production of IL-17A, which provides anti-fungal protection (McGeachy et al., 2019). When injury destroys the epithelial barrier of the skin, IL-17A promotes epithelial-cell proliferation and can clear the pathogenic agents (Naik et al., 2015). Production of IL-17A from the local epithelium is driven by the microbiota, resulting in the anti-microbial function. Colonization with the segmented filamentous bacterium (SFB), a single commensal microbe, is sufficient to induce the production of IL-17A in the lamina propria of the small intestine of mice. SFB and Th17 cells mediate the protection from pathogenic microorganisms (Ivanov et al., 2009). A previous study has suggested that IL-17A is beneficial in controlling dysbiosis and maintaining a homeostatic balance in the gut. The predisposition to neuroinflammation is enhanced by abolishing the intestinal IL-17RA pathway, thus confirming the crucial role of the IL-17R pathway in mediating the protection of epithelial surface and interaction of host and microbiome (Ivanov et al., 2009;Kumar et al., 2016).
IL-17A promotes the repair of tissue. A crucial part of wound repair is the proliferation of epithelial keratinocytes. In keratinocytes, the expression of regenerating islet-derived protein 3-alpha (REG3A), an intestinal anti-microbial protein, is increased during psoriasis. IL-17A induces keratinocytes to express REG3A, and this process promotes the proliferation of keratinocytes after injury in psoriasis (Lai et al., 2012).
FIGURE 1 | Signaling pathway of Interleukin-17A (IL-17A). The heterodimeric receptor consists of two subunits, IL-17RA and IL-17RC, which bind the IL-17A, IL-17F, and IL-17AF ligands. The intracellular SEF/IL-17 receptor (SEFIR) domains interact with a corresponding SEFIR motif on the Act1 adaptor (Novatchkova et al., 2003). TNF-receptor associated factor 6 (TRAF6) and TRAF2/5 proteins bind to the TRAF-binding site in Act1. After binding to Act1, TRAF6 mediates the activation of the classical nuclear factor-κB (NF-κB) and MAPK/AP-1 pathways. Collectively, these pathways trigger the transcriptional induction of target genes (Qian et al., 2007). In the IL-17 signaling pathway, post-transcriptional mRNA stabilization is promoted through the recruitment of TRAF2 and TRAF5 by Act1 (Schwandner et al., 2000). This physiological process is achieved by controlling multiple RNA-binding proteins, such as HuR and Arid5a.

IL-17A and transcription factors that regulate adipocyte differentiation have been reported to act in concert to contribute
to the suppression of adipogenesis . Mice with deficiency of both IL-17A and IL-17RA gain increased fat with age, and IL-17A suppresses the maturation of cells with adipogenic potential, indicating that IL-17A inhibits adipogenesis . In a healthy state, IL-17A directly influences the metabolic function of adipocytes. IL-17A produced by γδT cells controls the homeostasis of regulatory T cells and adaptive thermogenesis in adipose tissue (Kohlgruber et al., 2018).
These abovementioned findings show that IL-17A is not just an inflammatory factor. IL-17A usually protects the body during the acute injury, but when a wound takes a long time to heal and turns to a chronic injury, the effect of IL-17A may turn into erosion or hyperproliferation of the wound, ultimately leading to the loss of function (McGeachy et al., 2019).
ROLE OF IL-17A IN NEURODEGENERATIVE DISEASES
There are several divergent and shared pathological and clinical features of age-related CNS neurodegenerative diseases, such as diverse protein aggregation and selective vulnerability of the brain, that impact the clinical presentation and immune responses of these diseases (Hammond et al., 2019). Neurodegenerative diseases lead to impairments of a person's memory and cognitive ability, and some of these diseases affect patients' ability to speak, move, and breathe. Neurodegenerative disease is multifactorial, involving aging, mitochondrial defects, dysfunctions in autophagic lysosomal pathways, neurovascular toxicity, synaptic toxicity, accumulation of misfolded proteins, and liquid-phase transitions in pathological protein aggregation (Focus on Neurodegenerative Disease, 2018).
Neuroinflammation contributes, in part, to the occurrence of neurodegenerative diseases. Neuroinflammation in diseases such as PD, AD, and ALS is characterized by a reactive morphology of glial cells and increased levels of inflammatory mediators in the parenchyma (Ransohoff, 2016). To date, most evidence points to a pathogenic role for IL-17A in CNS neurodegenerative diseases. IL-17A acts on multiple CNS resident cells to potentiate inflammation (Qian et al., 2007; Stromnes et al., 2008; Kang et al., 2010; Ji et al., 2013; Kang Z. et al., 2013; Liu et al., 2015; Rodgers et al., 2015; Liu Z. et al., 2019; Figure 2). It has been reported that IL-17A acts as a regulatory factor in the induced cytokine network rather than playing a direct role in mediating tissue damage during neuroinflammation (Zimmermann et al., 2013). Also, several studies have reported the impact of medicinal plants on the level of IL-17A in neurodegenerative diseases (Table 1).
FIGURE 2 | How glial cells respond to IL-17A. In central nervous system (CNS) neurodegenerative diseases, IL-17A binds to its receptor on the surface of microglia and activates them. Activated microglia secrete cytokines, exacerbating the loss of dopaminergic neurons. Astrocytes respond to IL-17A by generating chemokines that promote the recruitment of inflammatory cells, such as macrophages and neutrophils. IL-17A reduces the ability of astrocytes to absorb and transform glutamate and enhances the excitotoxicity of glutamate. IL-17A inhibits the maturation of oligodendrocyte lineage cells (OPCs) and exacerbates TNF-α-induced oligodendrocyte apoptosis (Qian et al., 2007; Stromnes et al., 2008; Kang et al., 2010; Ji et al., 2013; Kang Z. et al., 2013; Liu et al., 2015; Rodgers et al., 2015; Liu Z. et al., 2019).

AD is the most common type of late-onset dementia, and it is a complex molecular and genetic disease. The features of AD are neuronal loss and extensive synaptic loss, which lead to brain volume loss. Subsequently, the pathological changes in brain structure lead to a decline in patients' memory and cognitive function that results in an inability to take care of themselves in daily life (Hammond et al., 2019). In recent years, the understanding of the pathological mechanism of AD has constantly improved. The important pathological features of AD include intracellular neurofibrillary tangles resulting from the aggregation of hyperphosphorylated tau and deposition of extracellular neurotoxic plaques primarily composed of amyloid-β (Aβ; Holtzman et al., 2011). The aggregation of amyloid and tau eventually impacts the hippocampal, entorhinal cortex, and neocortical regions (Montine et al., 2012). Furthermore, Bussian et al. (2018) found that senescent microglia and astroglia influence neurofibrillary tangle formation and intraneuronal tau phosphorylation. IL-17A may play a significant role in the pathogenesis of AD. In terms of clinical manifestations, elevated levels of IL-17A in plasma and CSF have been reported in patients with AD. For example, Chen et al. (2014) showed that the serum level of IL-17A is increased in Chinese patients, and Hu W. T. et al. (2010) reported that the CSF level of IL-17A is increased in patients. Also, Behairi et al. (2015) found that the baseline level of IL-17A is markedly higher in AD patients compared to controls. At the cellular and genetic level, there is also evidence of a correlation between IL-17A and the pathogenesis of AD. It has been reported that Th17 cell differentiation and activation, as well as the associated transcription factors, are increased in patients with AD (Saresella et al., 2011). The induction and expression of IL-17A may be due to polymorphisms of Th17-related genes (Zota et al., 2009). BACE1 is a transmembrane aspartyl protease that plays a role in forming plaques in AD (Vassar, 2004). BACE1-deficient T cells have reduced IL-17A expression under Th17 conditions in AD mouse models (Hernandez-Mir et al., 2019). However, the effect of IL-17A on the pathogenesis of AD is controversial. Yang et al. (2017) generated an AD mouse model with IL-17A overexpression and reported that IL-17A does not exacerbate neuroinflammation; they also demonstrated that IL-17A overexpression decreases the level of soluble Aβ in the CSF and hippocampus as well as improves the metabolism of glucose (Yang et al., 2017).
Two distinct human cohort studies have reported that the IL-17A level is decreased in AD patients compared to healthy controls (Doecke et al., 2012;Hu et al., 2013).
PD is the second most frequent form of neurodegenerative disease. PD is characterized by motor symptoms, including tremor, rigidity, and bradykinesia (Moustafa et al., 2016). A key pathological finding in PD is the aggregation of hallmark proteins (known as Lewy bodies), primarily composed of the protein α-synuclein (Hamilton, 2000). There are certain routes for the degeneration and aggregation of proteins, generally spreading from the brain stem to the substantia nigra and other midbrain regions and then to the neocortex (Braak et al., 2004). Before the onset of symptoms, there is a massive loss of dopamine-producing neurons in the substantia nigra (Cheng et al., 2010). Recently, Sommer et al. (2018) reported that T lymphocytes increase cell death in PD iPSC-derived MBNs in an IL-17A-mediated manner, indicating that IL-17A may be involved in PD pathogenesis. Liu et al. (2017) have demonstrated that Th17 cells infiltrate the brain parenchyma through the disrupted blood-brain barrier (BBB) in PD. It has also been confirmed in animal experiments that IL-17A plays a role in the development of PD. Dopaminergic neurodegeneration, motor impairment, and BBB disruption are alleviated in mice with a deficiency of IL-17A (Liu Z. et al., 2019). However, a decreased plasma level of IL-17A has been found in PD patients compared to controls (Rocha et al., 2018).
MS is an inflammatory demyelinating disorder of the CNS (Kostic et al., 2014), and susceptible genes and environmental factors are involved in disease pathogenesis. MS is characterized by the onset of recurring clinical symptoms followed by partial or total recovery. After 10-15 years of the disease, progressive deterioration is observed in up to 50% of untreated patients (Kolbinger et al., 2016). In approximately 15% of MS patients, the disease deteriorates from its onset (Gold et al., 2010). At present, the pathophysiology of MS has not been elucidated. MS may be primarily a neurodegenerative disease in which inflammation occurs as a secondary response that amplifies the state of progression (Kassmann et al., 2007). Compared to other neurodegenerative diseases, IL-17A has been mostly studied in MS. A possible pathogenic function of IL-17A in the pathogenesis of MS has been suggested. Kostic et al. (2014) demonstrated that the IL-17A level is increased in MS patients. In human MS brain tissue, the IL-17A-producing cells have been found but not in noninflamed brain tissue or normal white matter (Tzartos et al., 2008). In human MS plaques samples, an increase of IL-17A mRNA has been detected (Lock et al., 2002), and it has been reported that IL-17A content is related to BBB disruption and neutrophil expansion in CSF (Kostic et al., 2017). In terms of pathogenesis, Th17 cells may utilize the excitotoxicity of glutamate as an effector mechanism in MS. In MS, IL-17A is directly related to glutamate levels and may stimulate the Ca 2+ -dependent release of glutamate (Kostic et al., 2017). In the experimental autoimmune encephalomyelitis (an animal model of MS, EAE), IL-17RA expression is significantly increased in the CNS (Das Sarma et al., 2009). After binding to the IL-17RA complex in the CNS, IL-17A participates in the pathogenesis of EAE by promoting CD4 cell migration and secreting chemokines (Liu G. et al., 2014).
Amyotrophic lateral sclerosis (ALS) is another chronic neurodegenerative disease of the CNS. The defining feature of ALS is the loss of upper and lower motor neurons, leading to motor and extra-motor symptoms. The neuropathological feature of this disease is the aggregation and accumulation of ubiquitylated proteinaceous inclusions in the motor neurons. In most subtypes of ALS, TAR DNA-binding protein 43 (TDP43) is the main component of these inclusions, but other abnormal protein aggregates are also present, including neurofilamentous hyaline conglomerates and misfolded superoxide dismutase (SOD1; Hardiman et al., 2017). The potential culprits of this disease may be the high-molecular-weight complexes that appear before protein aggregation, and these high-molecular-weight proteins might contribute to the cell-to-cell spread of disease (Marino et al., 2015). Microglia have some impact on ALS. In the SOD1 mouse model of ALS, microglia have been found to contribute to the severity and progression of the disease. In contrast, microglia play a neuroprotective role in the TDP43-dependent mouse model of ALS (Spiller et al., 2018). Together, these data indicate that microglia may exert different roles depending on the specific animal model and stimulus in ALS. In ALS, the IL-17A-mediated pathway may play a critical role. It has been reported that IL-17A serum concentrations in sporadic ALS and familial ALS patients are significantly higher than in control subjects without autoimmune disorders (Fiala et al., 2010; Rentzos et al., 2010).
Glaucoma is also known as a neurodegenerative disorder characterized by RGCs death and axonal damage of the optic nerve, and ultimately leading to irreversible blindness (Levin et al., 2017). Glaucoma is considered as a disease caused by multiple factors, including high IOP mechanical injury, neurotrophic factor deprivation, ischemia/reperfusion injury, oxidative stress injury, excitatory glutamate toxicity, and abnormal immune-inflammatory response (Burgoyne, 2011;Rieck, 2013;Križaj et al., 2014). Studies have shown that immune dysfunctions, such as changes in cytokine signaling, immune cell proliferation, migration, and phagocytosis, as well as reactive gliosis, are common features of neurodegenerative diseases (Hammond et al., 2019). Autoimmunity is related to the pathogenesis of glaucoma as evidenced by large amounts of serum autoantibodies in glaucoma patients and animal models (Wax et al., 2008;Bell et al., 2013). In glaucoma, the elevated IOP is thought to be a major risk factor (Wei and Cho, 2019). However, increasing pieces of evidence have shown that the immune response plays a part in the pathogenesis of glaucoma. In recent years, some researchers have studied the IL-17A levels in patients with glaucoma. Yang et al. (2019) reported that the plasma levels of IL-17A are comparable in glaucoma patients and healthy people, and they demonstrated that the average frequencies of Th17 cells in patients with glaucoma is not significantly higher than that in the control group. In another study, however, researchers have demonstrated that the frequency of IL-17A-secreting cells and IL-17A + CD4 T cells is significantly higher in patients with glaucoma than in controls (Ren et al., 2019). Using a retinal ischemia-reperfusion (IR) mouse model caused by acute elevated IOP, researchers have reported that elevated IOP increases the expression of IL-17A . Because these studies measured IL-17A in peripheral blood in human patients and glaucoma is a complex disease whose pathogenesis has not been fully understood, further studies are needed to understand the role of IL-17A in glaucoma.
FIGURE 3 | The relationship between IL-17A and astrocytes. In astrocytes, IL-17A induces the expression of macrophage inflammatory protein-1α (MIP-1α) through Src/MAPK/PI3K/NF-κB pathways (Yi et al., 2014). IL-17A enhances the excitotoxicity of glutamate by reducing the ability of astrocytes to absorb and transform glutamate (Kostic et al., 2017). In EAE mice, IL-17A triggers the downregulation of miR-497, thereby upregulating the hypoxia-inducible factor-1α (HIF-1α) transcription factor in astrocytes as well as IL-1β and IL-6 secretion by astrocytes. MiR-409-3p and miR-1896 are involved in the IL-17A-mediated secretion of inflammatory cytokines by astrocytes by targeting the SOCS3/STAT3 signaling pathway in EAE mice. Under IL-17A stimulation, miR-873 participates in inflammatory cytokine production in astrocytes through the A20/NF-κB pathway in EAE mice (Shan et al., 2017; Liu X. et al., 2019). In EAE mice, proinflammatory gene expression induced by IL-17A is diminished through the abrogation of p38α in astrocytes, via the defective activation of MAPK-activated protein kinase 2.
THE ROLE OF IL-17A IN DIFFERENT MODEL SYSTEMS OF NEURODEGENERATIVE DISEASES
Some animal models are used for the research of neurodegenerative diseases. In APP/PS1 mice, a transgenic mouse model of AD that overexpresses amyloid precursor protein (APP) with the Swedish mutation and exon-9-deleted presenilin 1, IL-17A is reported to play a key role in the induction and development of AD (Browne et al., 2013). IL-17A-producing T cells infiltrate the brains of APP/PS1 mice, which enhances the activation of glial cells and exacerbates neurodegeneration (McManus et al., 2014; Ahuja et al., 2017). IL-17A overexpression in this transgenic mouse model reduces soluble Aβ levels, decreases cerebral amyloid angiopathy, and improves glucose metabolism (Yang et al., 2017). IL-17A does not exacerbate neuroinflammation and significantly improves learning and anxiety deficits in these IL-17A-overexpressing APP/PS1 mice (Yang et al., 2017). In 5XFAD mice, another animal model of AD, the production of IL-17A is decreased in the gut-residing immune cells (Saksida et al., 2018). In other animal models of AD, including hAPP mice, 3xTg-AD mice, Mo/Hu APPswe PS1dE9 mice, and the Aβ1-42-induced AD rat model, the level of IL-17A shows a significant upregulation compared with wild-type animals (Jin et al., 2008; Zhang et al., 2013; Chen et al., 2015, 2019; Yang et al., 2015; St-Amour et al., 2019). In MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine)-treated mice and MPP+ (1-methyl-4-phenylpyridinium)-treated rats (animal models of PD), the levels of IL-17A are upregulated in the substantia nigra, spleen, serum, and mesenteric lymph nodes. Furthermore, IL-17A promotes neurodegeneration in PD in a manner dependent on microglial activation and partly on TNF-α release (Huang et al., 2014; Dutta et al., 2019; Liu Z. et al., 2019). The pathological role of IL-17A in EAE, an animal model of MS, has been demonstrated. A monoclonal anti-IL-17A antibody (MM17F3) autovaccination is reported to prevent histological and clinical manifestations of EAE (Uyttenhove et al., 2007). In the TnC−/− EAE mouse model, a reduced ability of Th17 cells to produce IL-17 is observed in the spleen (Momčilović et al., 2017). Also, IL-17A mRNA and protein levels increase in the cuprizone-induced mouse model of MS (Sanadgol et al., 2017). For ALS, the SOD1G93A mouse model has been established, and in this model IL-17A is found to gradually increase with aging (Noh et al., 2014).
THE RELATIONSHIP BETWEEN IL-17A AND GLIAL CELLS IN NEURODEGENERATIVE DISEASES
Microglial cells are an important part of the glial population of the CNS; they account for approximately 10% of the total number of cells and constitute the largest population of mononuclear phagocytes in the CNS (Colonna and Butovsky, 2017). Microglial cells play a key role in CNS development and in the maintenance of CNS homeostasis. During CNS injury, microglial cells play a neuroprotective role by changing morphology, proliferating, and migrating to the damaged site to phagocytose and eliminate microbes, protein aggregates, and dead cells (Colonna and Butovsky, 2017). Also, microglial cells secrete many soluble factors, such as neurotrophic factors, which are involved in the immune response of the CNS. Aberrations in the normal phenotype or functions of microglial cells may lead to excessive synapse loss, contributing to the pathogenic mechanisms of neurodegenerative diseases of the CNS. For example, microglial cells are induced to engulf neurons by recognizing phosphatidylserine exposed on tau-laden neurons, produce nitric oxide, and release the MFGE8 opsonin; MFGE8 is required for the engulfment and uptake of neurons (Brelstaff et al., 2018). Molecules expressed on the surface of microglial cells, such as LRRC33 and TREM2, affect relevant cellular pathways by binding to specific proteins (Qin et al., 2018; Li et al., 2019). These biological processes play a role in the pathogenesis of neurodegenerative diseases.
Although the specific mechanism of IL-17A in neurodegenerative diseases is still controversial, it is generally accepted that IL-17A causes diseases by activating glial cells (especially microglia). In a PD model, IL-17A activates microglia in vitro and accelerates the death of dopamine neurons through activating microglial cells (Liu Z. et al., 2019). Consistently, the IL-17A effect is abrogated after inhibition of the IL-17RA signaling pathway in microglia (Liu Z. et al., 2019), confirming the pathogenic relevance of microglial cells in mediating neurodegeneration in PD. Compared to controls, the expression of IL-17RA is increased in microglial cells of the CNS in EAE mice, which may be due to Toll-like receptors (TLR) signaling inducing IL-17RA expression in neuroglial cells (Liu G. et al., 2014). In EAE, IL-17A treatment induces the upregulation of chemokine secretion by microglial cells (Das Sarma et al., 2009). In aged rats, IL-17A participates in the process of neuroinflammation and cognitive impairment induced by lipopolysaccharide (LPS) through microglia activation . In acute glaucoma mouse models, inhibition of microglial activation reduces the secretion of IL-17A . However, the relationship between IL-17A and microglia in neurodegenerative diseases has not been elucidated. Yang et al. (2017) reported that IL-17A overexpression in the mouse brain does not promote activation of microglia in AD mouse models. The evidence above suggests that further studies are needed.
Astrocytes play a central role in maintaining CNS homeostasis, including regulating synapse formation and maintenance, preserving neurological function, supplying energy to neurons, and maintaining the function of the BBB (Pellerin and Magistretti, 2004; Molofsky and Deneen, 2015; Almad and Maragakis, 2018). Astrocytes are not homogeneous but can be specialized according to the different regions of the CNS in which they reside (Pekny and Pekna, 2016). These glial cells affect the structure and function of surrounding neurons. In the tripartite synapse, astrocytes can modulate synaptic activity by gliotransmission (Haydon, 2016). In the CNS, astrocytes communicate with surrounding neurons, microglia, and oligodendrocytes through hemichannels, which act in concert to maintain normal CNS function (Almad and Maragakis, 2018). During CNS disease, phenotypic conversion of microglial cells is induced by signals from astrocytes (Locatelli et al., 2018). Conversely, astrocytes that lose their normal functions (termed A1 astrocytes) are induced by activated neuroinflammatory microglia (Liddelow et al., 2017). Taken together, these data highlight the crucial role of astrocyte–microglia communication in neurodegenerative diseases of the CNS.
The relationship between astrocytes and IL-17A has mainly been studied in MS. One of the pathological features of MS is increased astrogliosis-associated neuroinflammation. Astrogliosis is a process by which astrocytes activate, proliferate, and upregulate the expression of glial fibrillary acidic protein, and it is the main cause of MS plaque formation (Yi et al., 2014). By reducing the ability of astrocytes to take up and transform glutamate, IL-17A enhances glutamate excitotoxicity (Kostic et al., 2017). Thus, astrocytes may act as a potential target for neuroprotective treatment in MS. In the CNS of EAE mice, the expression of IL-17RA is increased in astrocytes (Das Sarma et al., 2009; Colombo et al., 2014). Macrophage inflammatory protein-1α (MIP-1α) is a β-chemokine that induces the directed migration of eosinophils, T lymphocytes, and monocytes, and it contributes to the pathogenesis of EAE. In primary astrocytes, IL-17A induces the expression of MIP-1α (Yi et al., 2014). Furthermore, miRNAs are involved in the pathogenesis mediated by IL-17A-expressing astrocytes in EAE (Liu X. et al., 2019; Shan et al., 2017). Under IL-17A stimulation, miRNAs participate in inflammatory cytokine production in astrocytes and, in turn, aggravate EAE development. Collectively, these findings suggest a pathogenic role of the IL-17A–miRNA–astrocyte axis in EAE and may indicate a therapeutic target for treating MS. IL-17A blockade by Act1 ablation in astrocytes inhibits the induction of EAE and has a therapeutic effect (Kang et al., 2010; Yan et al., 2012). Also, in a mouse model of MS, proinflammatory gene expression induced by IL-17A is diminished through the abrogation of p38α in astrocytes (Figure 3). In vitro studies have shown that IL-17A secreted by activated astrocytes plays a neuroprotective role in acute neuroinflammation (Hu et al., 2014).
Oligodendrocytes are a group of glial cells in the CNS that support neuron migration, terminal differentiation, axon wrapping, axon recognition, myelin production, and myelin maintenance (Cai and Xiao, 2016). The main function of oligodendrocytes is the formation of myelin, which benefits nerve repair by maintaining myelin restoration (Bradl and Lassmann, 2010). Oligodendrocytes perform their physiological functions by communicating with neighboring astrocytes and neurons (Orthmann-Murphy et al., 2007). The loss or dysfunction of oligodendrocytes contributes to the vulnerability of the human brain to neurodegenerative diseases. For example, in AD and Huntington's disease (HD), disturbances of myelin integrity are exacerbated compared to normal controls (Bartzokis et al., 2004, 2007). Increased numbers of oligodendrocyte progenitor cells (OPCs) are observed in ALS patients, indicating the failure of myelin regeneration (Kang S. H. et al., 2013). Recently, it has been reported that oligodendrocyte heterogeneity in human MS brain tissue may contribute to disease progression (Jäkel et al., 2019). IL-17A plays a role in the development of oligodendrocyte lineage cells. IL-17A inhibits the maturation of oligodendrocyte lineage cells in vitro (Kang Z. et al., 2013) and exacerbates TNF-α-induced oligodendrocyte apoptosis (Paintlia et al., 2011). In EAE, mature oligodendrocytes and OPCs have different effects on the progression of the disease. Kang Z. et al. (2013) reported that the deletion of Act1, the adaptor protein required for IL-17 signaling, from mature oligodendrocytes does not affect the course of EAE. However, elimination of IL-17A signaling in OPCs (referred to as NG2 glia) reduces EAE severity.
CONCLUSION
IL-17A is a signature cytokine of a key T helper cell population, and evidence suggests a crucial role for IL-17A in the pathogenesis of autoimmune and neurodegenerative diseases. The function of IL-17A has proven to be varied, as it not only contributes to pathogenic inflammation but also induces innate-like acute immune defenses. Thus, IL-17A is not simply an inflammatory factor. Although the specific mechanism of IL-17A in neurodegenerative diseases is still controversial, it is generally accepted that IL-17A causes disease by activating glial cells. The functions of IL-17A have proven to be more adaptable and diverse than initially discovered. IL-17A may also play a key role in tissue damage. Our understanding of these processes is still lacking, particularly regarding the role of IL-17A in the pathogenesis of glaucoma. Also, we are still in the early stages of understanding how IL-17A interacts with different cytokines and how IL-17A signals are transmitted in response to microbial stimuli. Understanding how IL-17A interacts with different cells and cytokines is important. Uncovering the underlying molecular pathways may allow the identification of better targets to modulate these cellular processes, and novel therapeutic strategies may be discovered by such studies.
AUTHOR CONTRIBUTIONS
JC wrote and edited the manuscript. XL and YZ edited the manuscript. All authors read and approved the final manuscript. All authors contributed to the article and approved the submitted version.
Probing new physics with tau-leptons at the LHC
We discuss new physics that can show up in the $\tau^+\tau^-$ production process at the LHC but not in the dimuon or the dielectron channels. We consider three different generic possibilities: a new resonance in the Drell-Yan process in the form of a non-universal $Z^\prime$; a new non-resonant contribution to $q\bar{q}\to \tau^+\tau^-$ in the form of leptoquarks; and contributions from gluon fusion due to effective lepton gluonic couplings. We emphasize the use of the charge asymmetry both to discover new physics and to distinguish between different possibilities
New physics searches in the pp → τ+τ− process are well underway in both ATLAS and CMS. From the perspective of physics beyond the SM, this channel provides a window into scenarios in which the third generation is preferred, and we will discuss three such possibilities. These possibilities reinforce the need to explore τ+τ− production at the LHC regardless of limits from the dimuon or dielectron channels.
A very valuable tool for measuring electroweak couplings and constraining new physics at LEP was the forward-backward asymmetry. As of now, there remain some discrepancies from the SM expectations in both $A^b_{FB}$ as measured at LEP and $A_{FB}$ in $t\bar{t}$ production as measured at the Tevatron [1]. This leads us to consider $A^\tau_{FB}$ as well. Of course, there was no measurable deviation from the SM in $A^\tau_{FB}$ as measured at LEP, all the way up to a CM energy of 210 GeV [2], and this places significant constraints on new physics affecting the τ-lepton that could show up at the LHC, leaving mostly the high τ+τ− invariant-mass region to explore.
The LHC is a symmetric pp collider, so one cannot define the forward-backward asymmetry in the usual way. However, it is well known from corresponding studies for heavy quarks that the information present in $A_{FB}$ can be recovered in the form of a charge asymmetry [3]. Conceptually, the simplest possibility is the reconstruction of the $q\bar{q}$ parton CM frame. If this is possible, one also knows that the direction of the quark is correlated with the direction of the boost, permitting a definition of $A_{FB}$ which has already been used to measure $A_{FB}$ for muons and electrons [4], confirming SM expectations. For τ-leptons it is harder to reconstruct the parton CM frame, and we choose to carry out our discussion in terms of the charge asymmetry. Our numerical studies show that it is better to work with an integrated charge asymmetry $A_c(y_c)$. The largest charge asymmetry is obtained for a value $y_c \sim 0.5$, as can be seen in Figure 1 [5]. We also integrate $A_c(y_c)$ over the τ+τ− invariant mass from a minimum $m_{\tau\tau}^{min}$, chosen at first to exclude the Z region and later on to optimize the sensitivity to new physics. Figure 1 is for the SM and includes basic cuts $p_{T\tau} > 20$ GeV, $|\eta_\tau| < 2.5$, and $\Delta R_{\tau\tau} > 0.4$, but we found that $y_c$ is not very sensitive to any of this. As seen in the figure, $A_c$ increases with $m_{\tau\tau}^{min}$, as this has the effect of including more events with larger boosts. This comes at the price of lost statistics. An important observation is that the τ-leptons at the LHC are highly boosted, so their decay products travel in essentially the same direction as the parent in the lab frame, and this allows us to construct the asymmetry using the direction of the decay product (muon, electron, or jet).
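As a rough illustration of how such an integrated charge asymmetry can be estimated from generator-level events, the toy snippet below computes a cut-based asymmetry from per-event τ⁻ and τ⁺ rapidities. The counting convention used here is an assumption made only for illustration and is not necessarily the exact definition adopted in the text, and the Gaussian pseudo-data merely stand in for events from a generator such as Madgraph.

```python
import numpy as np

def integrated_charge_asymmetry(y_tau_minus, y_tau_plus, y_c=0.5):
    """Toy cut-based charge asymmetry: compare how often the tau- versus the
    tau+ falls in the central region |y| < y_c (one possible convention)."""
    n_minus = np.sum(np.abs(y_tau_minus) < y_c)
    n_plus = np.sum(np.abs(y_tau_plus) < y_c)
    a_c = (n_minus - n_plus) / (n_minus + n_plus)
    err = np.sqrt((1.0 - a_c**2) / (n_minus + n_plus))   # 1-sigma statistical error
    return a_c, err

# Gaussian pseudo-data standing in for per-event tau rapidities.
rng = np.random.default_rng(0)
y_minus = rng.normal(0.0, 1.2, size=100_000)
y_plus = rng.normal(0.0, 1.0, size=100_000)
print(integrated_charge_asymmetry(y_minus, y_plus, y_c=0.5))
```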
We investigate two new physics scenarios in connection with the usefulness of A c . Of course we are interested in new physics scenarios that are compatible with LEP and not ruled out by measurements with muon or electron pairs so we choose the models accordingly.
An example of resonant new physics of this type is a non-universal Z′ that prefers the third generation [6,7] (or just the τ-lepton [8]). The main feature of such a Z′ is that it has couplings to the third generation that are enhanced by a factor $g_R/g_L$ and couplings to the first two generations suppressed by the inverse of the same factor. In this way, processes that involve only fermions from the first two generations can be suppressed below existing limits; processes involving one pair of third-generation fermions, such as e+e− → τ+τ− or pp → τ+τ−, receive corrections of electroweak strength; and processes with four third-generation fermions can be significantly enhanced. Until very recently LEP2 provided the best direct bounds on this kind of resonance, but the LHC has now entered the picture. CMS, for example, can exclude a Z′ in the relevant pp → τ+τ− channel up to about 1 TeV [9] (although the analysis has only been done using universal Z′ models). For comparison, the models [10] analyzed by CMS are already excluded up to the 2–3 TeV range by their dimuon and dielectron analyses. In Figure 2 [5] we show the usefulness of the charge asymmetry to discriminate between two different non-universal Z′ bosons of the same mass. The generic couplings we consider have only $c^\tau_R$ and $c^u_R$ non-zero, with $c^u_R\, c^\tau_R = 1/3$. The curve labeled 'Model 1' corresponds to the Z′ of Ref. [7] and involves contributions from $u\bar{u}$ and $d\bar{d}$ as well as strange and charm. In both cases we use a mass of 600 GeV for the Z′.
The events in Figure 2 were generated using Madgraph [11], and the error bars correspond to 1σ statistical errors for 10 fb⁻¹ at 14 TeV.
As an example of non-resonant new physics that would affect τ-pair production in the Drell-Yan process, we next consider the exchange of leptoquarks (LQ). The generic couplings of vector LQ are given in [12]. LQ that would affect primarily this process are, for example, Pati-Salam vector LQ with a strong coupling and with quantum numbers that couple the first-generation quarks to the third-generation fermions [13]. These would contribute to pp → τ+τ− via a t-channel $d\bar{d} \to \tau^+\tau^-$ diagram. Since protons contain more up quarks than down quarks, we consider instead a variation of this model in which the LQ has charge 5/3 and contributes via a t-channel $u\bar{u} \to \tau^+\tau^-$ diagram; we call this model LQ2. In Figure 3 we show how the charge asymmetry could easily differentiate between the SM and LQ2 for a leptoquark mass of 1 TeV, which would be hard to detect in the lepton-pair invariant-mass distribution. The plot is for 14 TeV with cuts $p_T > 6$ GeV, $|\eta| < 2.5$ and $\Delta R_{ik} < 0.4$, $i, k = \ell, j$. We show the $\tau_e\tau_\mu$ channel, which has less background. The $\tau_e\tau_e$ and $\tau_\mu\tau_\mu$ modes can also be used with the additional requirement of a minimum missing $E_T$ (we used 10 GeV) that removes the direct Drell-Yan production of dimuons or dielectrons. In those dilepton modes the asymmetry is again shown for $y_c = 0.5$ and is integrated over the dilepton invariant mass above a minimal cut, shown in the range 130–230 GeV; the lower end is chosen to remove the Z resonance, where the SM cross section peaks, and the upper end is chosen simply for illustration. In all cases the charge asymmetry is constructed using the direction of the decay lepton, and the error bars correspond to 1σ statistical errors for 10 fb⁻¹ at 14 TeV [5].
The analysis can also be carried out in the $\tau_e\tau_h$ and $\tau_\mu\tau_h$ modes, as we have shown in Ref. [5]. For $\tau_h$ we included only the one-pion and one-rho modes, and used a 0.3% probability of a QCD jet in W+jet events faking a τ. Figure 4 uses the same cuts described above and includes background from W pairs, Z pairs, and W+jet events. It shows that it is possible to discriminate between LQ2 and the SM using these decay channels. The two plots are different because the $W^+$+jet background is larger than the $W^-$+jet background, and they also have different charge asymmetries (−6.3% and 11% with our set of cuts).
Finally, we turn our attention to the possibility of new physics contributing to lepton-pair production via gluon fusion. This would be a very interesting possibility since the LHC is a 'gluon collider', and it is not as exotic as it first seems. An example of new physics that connects gluons to leptons is a new heavy neutral Higgs, which has already been studied at the LHC [14,15]. We are more interested in a 'lepton gluonic coupling': an effective coupling between gluons and leptons away from a new resonance. The natural formalism to describe this scenario is the effective Lagrangian [16], in which one writes corrections to the SM in the form of higher-dimension operators suppressed by powers of the scale of new physics Λ. For processes at an energy scale E, the higher-dimension operators contribute amplitudes suppressed by increasing powers of E/Λ, and for this reason one usually limits these studies to the lowest dimension, usually six. At the LHC, however, the large parton luminosity for gluon-gluon interactions distorts this power counting, enhancing gluon-fusion-initiated processes. This makes the 'lepton gluonic couplings' that first appear at dimension eight possibly competitive with other dimension-six operators. There are two such operators (and their hermitian conjugates); ignoring CP-violating phases, the two operators affect the parton cross sections in an identical manner. The lepton-flavor structure of the operators can be arbitrary, including lepton-flavor violating, but here we concentrate on the case of τ-flavor. In the top panel of Figure 5 we compare the effect of the lepton gluonic coupling operator with that of a dimension-six operator of the form $a\, g^2/\Lambda^2\, \bar{u}u\,\bar{\tau}\tau$ (chosen so that it does not interfere with the SM) at the Tevatron [17]. The figure confirms our expectation of dominance by the dimension-six operator. At the LHC, however, the situation is much different due to the enhancement of gluon fusion, and we illustrate this in the bottom panel of Figure 5 [17].
One example of a model that could induce these lepton gluonic couplings involves a heavy scalar with couplings to fermions as in a 2HDM with large tan β. With a heavy fourth generation one could get $c \sim m \tan^2\beta/v$ with $\Lambda^4 \sim (4\pi v)^2 M_S^2$. Another possibility would be a model with a vector LQ of mass $M_X$ coupling heavy quarks to tau leptons. In this case one would find $c \sim \pi\alpha_s$ and $\Lambda^4 \sim (4\pi v)^2 M_X^2$. Our numerical simulation with the aid of Madgraph [11] indicates that the LHC has a 3σ statistical sensitivity to $c \gtrsim 4.3$ for Λ = 2 TeV and 10 fb⁻¹ at 14 TeV. This compares to the Tevatron's 3σ statistical sensitivity to $c \gtrsim 75$, LEP2's $c \gtrsim 80$, and the partial-wave unitarity constraint $c \lesssim 80$.
To summarize, we have investigated three different scenarios that illustrate new physics that can be constrained at the LHC by studying τ-lepton pair final states: (i) As an example of a new resonance in the Drell-Yan process we considered a non-universal Z′. We showed how the charge asymmetry can distinguish between different possibilities of new resonances with the same mass. (ii) As an example of non-resonant contributions to Drell-Yan, we considered vector leptoquarks. In this case we saw that the charge asymmetry can signal the presence of new physics even when it is not visible as a bump in an invariant-mass distribution.
(iii) Finally we considered the case of lepton gluonic couplings which first occur at dimension eight and which can be tested for the first time at LHC. (iv) It is worth emphasizing that the lepton gluonic couplings can occur in any dilepton channel, including those that violate lepton flavor, and that all of them should be investigated.
Canine induced pluripotent stem cells efficiently differentiate into definitive endoderm in 3D cell culture conditions using high-dose activin A
Introduction Endoderm-derived organs support indispensable functions in the body. Pluripotent stem cells can generate endoderm-derived cells or tissues and have excellent therapeutic potential to replace the functions of endodermal tissues. However, there is no viable method to induce endodermal precursor cells, definitive endoderm (DE), from canine induced pluripotent stem cells (ciPSCs). Methods A ciPSC line was used in this study. In order to induce DE, ciPSCs were cultured with high dose activin A and fetal bovine serum. We considered the optimal differentiation period and starting cell density. Next, to reduce the remaining undifferentiated cells and improve the DE induction efficiency, DE was induced from 3D cell aggregates with knockout serum replacement instead of fetal bovine serum. Finally, hepatic and pancreatic induction were performed to investigate whether DE could differentiate into downstream lineages. Results After differentiation, some cells expressed the DE markers FOXA2 and SOX17. DE induction period and starting cell density were found to be important for efficient DE induction. However, some cells remained undifferentiated even after optimization of cell density and culture period. Cell differentiation under 3D culture conditions reduced undifferentiated cells and the replacement of fetal bovine serum with knockout serum replacement improved the DE induction efficiency. After hepatic and pancreatic induction, cells expressed some early hepatic and pancreatic markers. Conclusions A ciPSC line was successfully differentiated to DE efficiently using a high dose of activin A with knockout serum replacement under 3D cell culture conditions. We believe that this study will be fundamental to achieving the generation of canine endodermal tissues from ciPSCs.
Introduction
Endoderm-derived organs such as the liver, lung, pancreas, intestine, thymus, and thyroid support indispensable functions such as gaseous exchange during respiration, mechanical and chemical digestion, and blood glucose homeostasis and detoxification. Therefore, cell transplantation aimed at replacing these functions has great therapeutic potential [1,2]. Induced pluripotent stem cells (iPSCs) are generated from somatic cells through cell reprogramming and can self-renew and differentiate into three germ layers [3]. Based on these characteristics, human PSC-derived endodermal cells are instrumental resources for regenerative medicine.
Worldwide, dogs are largely recognized as companion animals; thus, advancements in dog-related medicine are required. Accordingly, veterinary regenerative medicine using canine iPSCs (ciPSCs) also have great therapeutic potential [4]. Similar to humans, dogs also develop, in their endoderm-derived organs, conditions such as diabetes [5], cirrhosis [6], and inflammatory bowel disease [7]. Thus, an efficient method to obtain pancreatic, hepatic, and intestinal cells or tissues from ciPSCs could offer a new cell transplantation-based treatment for dogs with diseases in endoderm-derived organs.
Additionally, ciPSCs can also benefit human medicine by providing a model for validating the efficacy and safety of regenerative therapies. Although rodent animal models are usually used to examine the efficacy and safety, they do not reproduce in full human disease [8]. In contrary, canines share a variety of biochemical and physiological characteristics with humans, live longer, and are outbred. Because they are also exposed to external and environmental factors that contribute to diseases such as obesity, diabetes, and cancer, naturally occurring disease in dogs might be a valuable preclinical model for humans [9].
During embryonic development, endodermal tissues develop from the same origin, the definitive endoderm (DE) [10]. Human PSCs (hPSCs) and mouse PSCs differentiate into endoderm-derived tissues via DE induction, mimicking embryogenesis [11,12]. A high dose of activin A is widely used for the differentiation of DE cells from hPSCs [13,14]. In addition, some reports have suggested that 3D cell culture conditions and the replacement of fetal bovine serum (FBS) with knockout serum replacement (KSR) improved DE induction efficiency from hPSCs [15e17]. Because the gastrointestinal and respiratory tracts of dogs might also develop from the same endodermal progenitors [18], it should be possible to generate endoderm-derived tissues from ciPSC-derived DE. Considering the potential clinical applications, we previously reported the generation of ciPSCs from not only embryonic cells [19] but also adult cells including peripheral blood mononuclear cells (PBMCs) [20e22]. However, the optimal method and mechanism of DE induction from ciPSCs remains unknown.
We hypothesized that a high dose of activin A induces ciPSCs into endodermal lineages, and 3D cell culture and KSR supplementation improves DE induction efficiency as is the case with hPSCs. In order to obtain DE efficiently in this study, we studied the DE induction conditions based on a high dose of activin A using one ciPSC line.
Cell culture
All experiments were performed using one ciPSC line, OPUiD05-A, which was previously established in our laboratory from canine PBMCs using a Sendai virus vector encoding the human KLF4, OCT3/ 4, SOX2, and C-MYC genes [22]. ciPSCs were maintained on a laminin-511 E8 fragment (iMatrix511; Nippi, Inc., Tokyo, Japan) using StemFit AK02N (StemFit; Ajinomoto, Tokyo, Japan). They were passaged mechanically using a glass Pasteur pipette. ciPSCs from passages 40 to 65 were used for differentiation studies.
Assessment of DE induction
DE differentiation was analyzed using immunocytochemistry and flow cytometry (FCM). For immunocytochemistry, cells were washed in phosphate-buffered saline (PBS) (À), fixed in 4% paraformaldehyde (Sigma-Aldrich) for 5 min, and permeabilized with 0.1% Tween 20 in PBS(À) for 5 min at around 25 C. The cells were incubated with 10% bovine serum albumin (Fujifilm Wako Pure Chemical Corporation, Osaka, Japan) for 30 min, followed by overnight incubation at 4 C in the presence of primary antibodies against OCT3/4, SOX17, and FOXA2. The negative control cells were incubated in PBS(À) without primary antibodies. The next day, the cells were washed with PBS(À) and incubated for 1 h at around 25 C with the secondary antibodies. The cells were washed with PBS(À), labeled with DNA using ProLong Gold Antifade Reagent 4 0 ,6-diamidino-2-phenylindole (DAPI; Thermo Fisher Scientific), and observed using confocal laser microscopy (FV3000; Olympus, Tokyo, Japan). For FCM, the cells were dissociated using 0.25% trypsin-EDTA. Cell pellets were resuspended in FACS buffer (PBS(À) containing 2% FBS and 1 mg/ml sodium azide [Fujifilm Wako Pure Chemical Corporation, Osaka, Japan]) and labeled with CXCR4-PE and c-kit-FITC on ice for 30 min. Negative control cells were incubated with the isotype control. The cells were washed in FACS buffer and analyzed using FCM (CytoFLEX; Beckman Coulter, Brea, CA, USA). All the antibodies used in this study are listed in Supplementary Table 1.
After differentiation, quantitative reverse transcription polymerase chain reaction (qRT-PCR) was performed. For qRT-PCR, total RNA was extracted using the RNeasy Micro Kit (Qiagen, Hilden, Germany). RT was performed using random primers and ReverTra Ace (Toyobo, Osaka, Japan). Quantitative PCR was carried out in triplicate using the Plus One System (Thermo Fisher Scientific) with PowerUp™ SYBR™ Green Master Mix (Thermo Fisher Scientific) according to the manufacturer's instructions. The cDNA of liver and pancreas from a 12-year-old male beagle was preserved in our laboratory and used as the positive controls. The animal experiment was approved by the Institutional Animal Experimental Committee of Osaka Prefecture University (Permission number; 21e67). All the primers used are listed in Supplementary Table 2.
Statistical analysis
The experiments for FCM were performed using one well per condition. The qRT-PCR was performed in triplicate, using one well per condition. These experiments were repeated three times. Data are expressed as the mean ± standard deviation. Statistical significance was determined using the Tukey–Kramer multiple comparison procedure or Student's t-test using Statcel software (Statcel 3; OMS Ltd., Tokyo, Japan). A p value < 0.05 was considered statistically significant.
Determining an appropriate DE induction period and starting cell density
To determine the DE induction period, ciPSCs were seeded at a density of 1.0 × 10⁴/cm², and FCM was performed every day from day 0 (Fig. 2A). Daily FCM analysis revealed that the DE differentiation efficiency was highest on day 4 (Fig. 2B). Next, we considered the initial cell density for DE induction. After ciPSCs were seeded at a density of 2.5 × 10³, 5.0 × 10³, or 1.0 × 10⁴/cm², DE was induced for four days from day 0 (Fig. 3A). After DE differentiation, the fewer ciPSCs were seeded, the fewer PSC-like cells were observed (Fig. 3B). FCM analysis revealed that DE induction efficiency was highest when ciPSCs were seeded at 5.0 × 10³/cm² (Fig. 3C). Under conditions of 5.0 × 10³/cm² seeding and four days of differentiation, the DE differentiation efficiency was 41.26 ± 6.25%. However, immunocytochemistry revealed that OCT3/4-positive undifferentiated cells were present among the differentiated DE cells (Fig. 3D).
Improvement of DE induction from ciPSCs
After DE induction under the conditions described above (2D conditions), undifferentiated cells remained. Therefore, we attempted to induce DE under 3D conditions to reduce the number of remaining undifferentiated cells. This scheme is illustrated in Fig. 4A. After ciPSCs were seeded at 5.0 × 10³/cm² and cultured in Matrigel, ciPSCs formed 3D cell aggregates (Fig. 4B). After four days of DE induction, the number of PSC-like cells decreased in the 3D condition (Fig. 4B). Immunocytochemistry also showed that OCT3/4-positive and SOX17-negative undifferentiated cells decreased under 3D conditions (Fig. 4C). By contrast, the number of OCT3/4 and SOX17 double-negative cells increased rather than OCT3/4-negative and SOX17-positive cells (Fig. 4C). Finally, FBS was replaced with KSR during the four days of DE induction (day 0–1: 0%; day 1–2: 0.2%; day 2–4: 2.0%) under 3D conditions. After four days of differentiation, no significant morphological differences were observed (Fig. 4D). However, KSR significantly improved DE differentiation efficiency compared to FBS, from 43.55 ± 3.14% to 64.32 ± 8.12% (Fig. 4E).
ciPSC-derived DE differentiated hepatic and pancreatic lineage
DE is the precursor of endodermal tissues such as the liver and pancreas. Therefore, we investigated whether ciPSC-derived DE has the potential to differentiate into these lineages. After hepatic lineage induction, HNF4A and AFP were upregulated compared to those in ciPSCs (Fig. 5A). HNF4A was also upregulated in the DE stage, and the expression level did not change between DE and hepatic lineage-induced cells. However, the expression of the mature hepatic marker ALB did not increase in differentiated cells compared with that in ciPSCs (Fig. 5A). After pancreatic differentiation, the early pancreatic marker PDX1 was upregulated compared to ciPSCs and DE, albeit with lower expression levels compared to adult pancreatic tissue (Fig. 5B). The mature pancreatic marker, INS, was not detected in the differentiated cells (Fig. 5B). During hepatic and pancreatic differentiation, some cells differentiated into beating cardiomyocytes (Supplementary Video 1, after hepatic differentiation, and Supplementary Video 2, after pancreatic differentiation).
Discussion
In this study, we sought the efficient methods to differentiate ciPSCs into DE cells as endodermal precursors. After ciPSCs were differentiated into DE for three days, the cells seeded at a low density expressed the DE markers FOXA2, SOX17, CXCR4, and c-kit. In contrast, ciPSCs seeded at a high density did not differentiate and maintained their undifferentiated state. This coincides with a previous report that hPSCs seeded at a low seeding density differentiated into DE [25]. They also reported that hPSCs seeded at a high density maintained their undifferentiated state even after DE induction, which corresponded with our results. However, some groups reported that DE was induced efficiently when hPSCs were seeded at a high density [24,26], which might be due to the properties of each hPSC line. Therefore, in dogs, it is necessary to consider the initial seeding cell density when DE is induced in other ciPSC lines.
During differentiation on Matrigel, DE induction efficiency was the highest at 5.0 Â 10 3 /cm 2 seeding density and four days induction. However, even after differentiation under these conditions, some undifferentiated cells were observed with immunocytochemistry. It has also been reported that undifferentiated cells remain after DE induction in hPSCs [27]. The inclusion of activin A in the DE induction medium activates activin/nodal signaling and phosphorylates SMAD2/3 [28]. Highly activated activin/nodal signaling results in DE differentiation because SMAD2/3 directly binds to endodermal lineage-specifiers such as SOX17 and FOXA2 [29]. In contrast, phosphorylated SMAD2/3 in hPSCs under low concentrations of activin A enters the nucleus and directly binds to the promoter regions of pluripotency-associated genes, such as NANOG and OCT3/4, resulting in the maintenance of their undifferentiated states [30]. Therefore, activin A has two distinct functions, maintenance of the undifferentiated state in hPSCs and induction of the endoderm from the pluripotent state. Meanwhile, insulin and insulin-like growth factor (IGF), they are included in FBS, activate PI3K signaling and inhibit activin/nodal signaling by crosstalk [31], resulting in the remaining of undifferentiated cells after DE induction from hPSCs. In canine PSCs, although activin A reportedly phosphorylates SMAD2/3 [32], the function and crosstalk of activin/nodal and PI3K signaling remain unknown. The understanding of these signaling functions and interactions will facilitate the more efficient differentiation of ciPSCs into DE.
DE induction from 3D cell aggregates resulted in a reduction of undifferentiated cells and an increase of OCT3/4 and SOX17 double-negative cells. We speculated that these cells represent those that had just escaped from the pluripotent state and were heading toward the DE state. During hPSC aggregate culture, cells in the center region of aggregates show reduced proliferation, accompanied by enhanced apoptosis or necrosis. This can be explained by the limited oxygen and nutrient availability in the center region as the size of the aggregate increases, in particular beyond 400 μm [33]. In our study, the size of ciPSC aggregates at day 0 was approximately 50 μm, and the aggregates attached to the bottom of the dish, proliferated, and finally differentiated to DE in two dimensions. Therefore, under our experimental conditions, ciPSC aggregates might have been only minimally impacted by stress conditions that could induce apoptosis or necrosis. Finally, during differentiation under 3D conditions, the replacement of FBS with KSR improved the DE induction efficiency. The detailed mechanisms underlying our results remain unclear. Because FBS and Matrigel contain undefined factors, their effects must be discussed with caution. During DE induction from hPSCs, cell differentiation was also improved under 3D conditions because cell survival, cell growth, and cell-cell contact were promoted [16]. Furthermore, a sandwich culture using Matrigel promoted the epithelial-to-mesenchymal transition in hPSCs, resulting in improved DE induction efficiency [15]. Various growth factors included in Matrigel may have improved the DE induction efficiency from ciPSCs. On the other hand, while FBS contains both insulin and IGF, KSR contains only insulin [17]. Although DE induction was not performed under 2D conditions with KSR, these studies raise the hypothesis that 3D cell culture promotes cell growth, cell-cell contact, and the epithelial-to-mesenchymal transition in ciPSCs, and that KSR activates PI3K signaling to a lesser extent than FBS, resulting in efficient DE induction.
Although the hepatic lineage-induced cells expressed HNF4A and AFP, the pancreatic lineage-induced cells expressed the pancreatic progenitor marker PDX1, albeit at a low level. Furthermore, some cells differentiated into beating cardiomyocytes following hepatic and pancreatic differentiation. During embryonic development, the cardiac mesoderm reportedly induces the liver program and suppresses the pancreatic program through fibroblast growth factor secretion [34]. This may explain why the pancreatic lineage marker was not remarkably upregulated. Because PSCs pass through the mesendoderm stage, which is a bipotent state, into DE and mesodermal lineages [2], the development of the cardiac lineage from ciPSCs indicated that some cells differentiated toward the mesoderm at the DE stage. Additionally, during DE induction, molecules modulating bone morphogenetic protein or Wnt/β-catenin signaling were reported to direct hPSCs into three subtypes of DE based on their differentiation potentials [35]. In our study, canine DE was induced without such additional signaling modulators; although it is unknown whether subtypes of canine DE exist, we might need to consider them when inducing endodermal lineages.
Additionally, the expression level of mature hepatocyte, ALB was not increased in hepatic lineage-induced cells in contrast to the upregulated expression of HNF4A and AFP. HNF4A is the regulator gene for the direct reprogramming of canine mesenchymal stem cells to hepatocytes [36] and AFP is expressed in canine hepatic progenitor cells [37]. Although other canine hepatic progenitor or hepatocyte markers such as CK7 or Hep Par1 [38] should be assessed to confirm the differentiation status of hepatic lineageinduced cells, our results indicate that hepatic lineage-induced cells did not reach a mature phenotype. In humans and mice, appropriate stepwise culture conditions are necessary to obtain adequate mature endodermal cells such as hepatocytes or pancreatic cells [39,40]. These studies might provide suggestion to induce mature endodermal cells from ciPSCs via DE.
Because there are no reports on the identification of canine DE, suitable DE markers for canines are unknown. Therefore, in this study, we employed the DE markers used in humans [41]. Our data indicate that canine DE, defined using human DE markers, can differentiate into pancreatic and hepatic lineages as assessed by mRNA expression levels alone. Although protein expression should be assessed to confirm cell differentiation, our results suggest that FOXA2, SOX17, CXCR4, and c-kit are appropriate canine DE markers. Although we showed that ciPSCs derived from PBMCs could efficiently differentiate into DE cells, a limitation of this study is that DE differentiation was performed using only one ciPSC line. Human iPSC lines reportedly show large variation in their capacity to differentiate into specific lineages owing to epigenetic memory from the somatic cells of origin [42], aberrations in DNA methylation during cell reprogramming [43], and genetic differences among donors [44]. In addition, because Matrigel is extracted from a murine tumor and its composition is not defined [45], cells cultured in Matrigel are not suitable for transplantation. In order to generate endodermal tissues or cells from ciPSCs and apply them in veterinary regenerative medicine, further studies are needed to replace Matrigel with a chemically defined extracellular matrix and to evaluate whether the DE induction protocol can be applied to other ciPSC lines.
Conclusions
We are the first to report that ciPSCs seeded at an appropriate density can efficiently differentiate into DE under 3D conditions using a high dose of activin A and a low dose of KSR. ciPSC-derived DE could differentiate into downstream lineages, albeit in immature differentiation states. To apply iPSC-derived endodermal cells for regenerative medicine, DE must be induced from iPSCs with high purity and efficiency. Further studies are therefore needed to establish a reliable method for generating transplantable cells from various ciPSC lines. However, considering the successful generation of canine endodermal cells from ciPSCs, our study offers fundamental information and differentiation strategies for the efficient induction of DE from ciPSCs.
Declaration of competing interest
The authors declare no conflicts of interest directly relevant to the content of this article.
Unitary canonical forms over Clifford algebras, and an observed unification of some real-matrix decompositions
We show that the spectral theorem -- which we understand to be a statement that every self-adjoint matrix admits a certain type of canonical form under unitary similarity -- admits analogues over other $*$-algebras distinct from the complex numbers. If these $*$-algebras contain nilpotents, then it is shown that there is a consistent way in which many classic matrix decompositions -- such as the Singular Value Decomposition, the Takagi decomposition, the skew-Takagi decomposition, and the Jordan decomposition, among others -- are immediate consequences of these. If producing the relevant canonical form of a self-adjoint matrix were a subroutine in some programming language, then the corresponding classic matrix decomposition would be a 1-line invocation with no additional steps. We also suggest that by employing operator overloading in a programming language, a numerical algorithm for computing a unitary diagonalisation of a complex self-adjoint matrix would generalise immediately to solving problems like SVD or Takagi. While algebras without nilpotents (like the quaternions) allow for similar unifying behaviour, the classic matrix decompositions which they unify are never obtained as easily. In the process of doing this, we develop some spectral theory over Clifford algebras of the form $\cl_{p,q,0}(\mathbb R)$ and $\cl_{p,q,1}(\mathbb R)$ where the former is admittedly quite easy. We propose a broad conjecture about spectral theorems.
Introduction
In this paper, we prove "spectral theorems" for all Clifford * -algebras with a limited amount of degeneracy (up to 1 nilpotent generator) and suggest that a "unification" between some classic matrix decompositions results from this. The unification is interesting because: If producing the canonical form were a subroutine, then these classical matrix decompositions would be obtained from 1 application of the subroutine, and nothing more. Other reductions between matrix decompositions are usually less "efficient" (in the sense of not being just 1 application with no additional steps). Some connections with the theory of quiver representations are also obtained.
The classic matrix decompositions we consider are: • The unitary diagonalisation of a self-adjoint matrix.
• The SVD of a real matrix.
• The Jordan decomposition of a real matrix.
In some sense, the first of these cases is equivalent to the rest, if one varies the * -algebra. We will now discuss the notion of a * -algebra, and its motivation.
We are intending to generalise notions like self-adjoint matrix, unitary matrix and singular value decomposition (among others) to various "number systems" for various reasons. These notions were originally developed over the complex numbers C. If we focus attention to the notion of, let's say, a unitary matrix, we see that to generalise this to novel "number systems" it is not sufficient to simply redefine the operations {+, −, ×, ÷}, but also the complex conjugation operation, which we will denote * . A bit of experience with similar "number systems" (like the quaternions, or the 2 × 2 real matrices) suggests that a promising generalisation of complex conjugation over a ring would be an arbitrary ring anti-automorphism of order up to 2. Such an operation is called an involution, and we must include it in our list of operations to redefine: {+, −, ×, ÷, * }.
The above discussion suggests that the claim in some linear algebra courses that linear algebra takes place over fields is incorrect, because notions like unitary matrices are defined in terms of involutions which are not field operations. It is thus interesting to suggest that linear algebra might be done instead over * -fields, which are fields equipped with involutions. Unfortunately, this is not sufficient for our paper, where our "number systems" may contain zero divisors.
When we refer to an algebra over a field, we understand this to be something unital, associative and finite-dimensional. The notion of an algebra is not sufficient because there are many involutions which an algebra can be equipped with. The notion of a * -algebra [5] is clearly a better notion of "number system" than just an algebra when generalising spectral theory. Our * -algebras will be over * -fields in the expected way.
Most of the * -algebras we'll consider will be Clifford * -algebras over a * -field F which will be either the real number R or the complex numbers C equipped with their standard involutions. We will in fact consider two different involutions for each algebra.
To give a very quick example of why passing from C (equipped with its standard involution) to a larger * -algebra can unify matrix decompositions, consider the Takagi decomposition (though the SVD would provide a very similar example): This states that given a C-matrix M which satisfies M = M T , there is a unitary matrix U and a R-diagonal matrix D such that M = U DU T . Notice that while this looks like a diagonalisation of a linear map, it actually isn't one because U T = U −1 . The columns of U are not eigenvectors. But now imagine introducing a new imaginary number δ which satisfies δi = −iδ and δ 2 = 0. We then have that M δ = U DU T δ = U (Dδ)U * = U (Dδ)U −1 . We see that while the columns of U are not eigenvectors of M , they are instead eigenvectors of M δ. Additionally, if we extend the involution * so that δ * = δ, we get that M δ is self-adjoint, which M wasn't. If we brazenly assume that M δ can be unitarily diagonalised (by an optimistic extension of the spectral theorem), then it's immediate that the unitary diagonalisation will take the form M δ = U (Dδ)U * where U is immediately a C-matrix and D is immediately an R-matrix. This is more efficient than introducing the quaternion j which satisfies ji = ij but not j 2 = 0, because we cannot immediately conclude that the unitary diagonalisation of M j will yield the Takagi decomposition.
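The bookkeeping in this example is mechanical enough to hand to a computer, which is exactly the operator-overloading point made in the abstract. The following sketch is our own illustration (class and variable names are hypothetical): it overloads matrix multiplication and the adjoint for elements A0 + A1·δ of the enlarged algebra, then checks that Mδ is self-adjoint exactly when M is complex symmetric and that a Takagi factorisation M = UDU^T is literally a unitary diagonalisation of Mδ.

```python
import numpy as np

class CDelta:
    """Matrices over C[delta] with delta**2 = 0 and delta*z = conj(z)*delta.
    An element is stored as a pair (A0, A1) meaning A0 + A1*delta."""
    def __init__(self, A0, A1=None):
        self.A0 = np.asarray(A0, dtype=complex)
        self.A1 = np.zeros_like(self.A0) if A1 is None else np.asarray(A1, dtype=complex)

    def __matmul__(self, other):
        # (A0 + A1 d)(B0 + B1 d) = A0 B0 + (A0 B1 + A1 conj(B0)) d,
        # because d B0 = conj(B0) d and d**2 = 0.
        return CDelta(self.A0 @ other.A0,
                      self.A0 @ other.A1 + self.A1 @ other.A0.conj())

    def adj(self):
        # (A0 + A1 d)* = A0^H + A1^T d, using delta* = delta.
        return CDelta(self.A0.conj().T, self.A1.T)

    def __eq__(self, other):
        return np.allclose(self.A0, other.A0) and np.allclose(self.A1, other.A1)

rng = np.random.default_rng(1)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # random unitary
D = np.diag(rng.uniform(1.0, 2.0, n))                                        # real diagonal
M = Q @ D @ Q.T                                                              # complex symmetric

U = CDelta(Q)                       # an ordinary complex matrix (zero delta part)
Md = CDelta(np.zeros((n, n)), M)    # the "infinitesimal" matrix M*delta
Dd = CDelta(np.zeros((n, n)), D)

print(Md == Md.adj())               # True: M*delta is self-adjoint because M = M^T
print(U @ Dd @ U.adj() == Md)       # True: unitary diagonalisation of M*delta encodes the Takagi form
```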
Preliminary definitions
We assume the reader knows what a field is. A *-field is a pair (F, *_F) where F is a field and *_F : F → F is a function called the involution. The involution satisfies (x + y)^{*_F} = x^{*_F} + y^{*_F}, (xy)^{*_F} = x^{*_F} y^{*_F}, and (x^{*_F})^{*_F} = x. A *-algebra (A, * : A → A) over a *-field (F, *_F) is an algebra over F equipped with a map * : A → A which we call the involution, satisfying (x*)* = x, (xy)* = y*x* and (x + λy)* = x* + λ^{*_F} y*. In this paper, we assume that our *-algebras are both associative and unital. Associativity means (xy)z = x(yz). Unital means that there exists an element 1 ∈ A such that x1 = 1x = x, and 1* = 1.
or the "split-complex numbers". We will call it the double numbers [3]. This algebra is sometimes defined as being "like the complex numbers" but with the role of i being replaced with a number j for which j 2 = 1. This results in the number 1 having 4 different square roots. This algebra is isomorphic to R ⊕ R. This implies that the algebra is far more convenient to work with when its elements are written as (a, b), and all arithmetic operations {+, −, ×, ÷} are understood to happen componentwise. We now consider two possible involutions.
The sterile * 1 involution
The first involution will be denoted * 1 . This will be defined by (a, b) * 1 = (a, b).
The corresponding *-algebra over R will be denoted (R ⊕ R, *₁). This is a trivial definition, and will result in a rather sterile "spectral theorem". How should we write an (R ⊕ R, *₁)-matrix? We will write it in the form (M, K), where M and K are real matrices of equal dimensions. What would the adjoint operation (sometimes called conjugate-transpose) be? It would simply be (M, K)* = (M^T, K^T). Based on this, we see that (M, K) is unitary (respectively, self-adjoint) whenever M and K are individually unitary (respectively, self-adjoint). Trivially we obtain a spectral theorem: given a self-adjoint (R ⊕ R, *₁)-matrix (H, K), there exists an (R ⊕ R, *₁)-unitary matrix (U, V) and a real diagonal matrix (D₁, D₂) such that (H, K) = (U, V)(D₁, D₂)(U, V)*; this is just the real spectral theorem applied to each component separately. But this is trivial and uninteresting.
The intriguing * −1 involution
More interesting would be to consider the other possible involution. This involution will be denoted *₋₁, but otherwise just * if ambiguity won't arise. It is defined by (a, b)*₋₁ = (b, a). The corresponding *-algebra over R will be denoted (R ⊕ R, *₋₁). We argue that an (R ⊕ R, *₋₁)-matrix should now be written as [M, K], and define this to mean (1, 0)M + (0, 1)K^T. We use square brackets instead of round brackets because of the transpose, which ultimately serves to simplify things. We observe the following identities:
[A, B] + [C, D] = [A + C, B + D],
[A, B][C, D] = [AC, DB],
[A, B]* = [B, A].
By defining the square bracket notation the way we did, we made multiplication slightly more complicated while greatly simplifying the adjoint operation (which is the third one in our list). The multiplication operation is made more complicated because the second component multiplies in the opposite order to the first component.
What is now a self-adjoint matrix over (R ⊕ R, *₋₁)? It needs to satisfy [M, K]* = [K, M] = [M, K], which forces K = M; so the self-adjoint matrices are exactly those of the form [M, M]. Likewise, a unitary matrix satisfies [U, V][U, V]* = [UV, UV] = [I, I], so it is of the form [P, P⁻¹] for an invertible real matrix P. Conjugating a self-adjoint matrix by such a unitary gives [P, P⁻¹][M, M][P⁻¹, P] = [PMP⁻¹, PMP⁻¹]; unitary similarity over (R ⊕ R, *₋₁) is therefore ordinary similarity over R, and a canonical form under it is [J, J], where J is a unique Jordan matrix. Thus, the spectral theorem for (R ⊕ R, *₋₁) is (somehow) the same as the R-Jordan decomposition. This achieves a unification.
In summary: The square brackets hint at the transpose. The proofs are immediate.
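As a quick sanity check of this unification, the sketch below (an illustrative aside with hypothetical names, not taken from the paper) implements the square-bracket arithmetic and confirms numerically that [P, P⁻¹] is unitary and that unitary similarity acts on a self-adjoint element [M, M] as ordinary similarity M ↦ PMP⁻¹, whose canonical form is the Jordan form.

```python
import numpy as np

class DSwap:
    """Matrices over (R ⊕ R, *_{-1}) in square-bracket notation [M, K].
    Multiplication is [A, B][C, D] = [AC, DB]; the adjoint is [A, B]* = [B, A]."""
    def __init__(self, M, K):
        self.M = np.asarray(M, dtype=float)
        self.K = np.asarray(K, dtype=float)

    def __matmul__(self, other):
        # the second slot multiplies in the opposite order (the transpose hidden in the bracket)
        return DSwap(self.M @ other.M, other.K @ self.K)

    def adj(self):
        return DSwap(self.K, self.M)

    def __eq__(self, other):
        return np.allclose(self.M, other.M) and np.allclose(self.K, other.K)

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
P = rng.normal(size=(3, 3))                    # generic, hence invertible
Pinv = np.linalg.inv(P)

H = DSwap(A, A)                                # self-adjoint: [A, A]* = [A, A]
U = DSwap(P, Pinv)                             # candidate unitary
I = DSwap(np.eye(3), np.eye(3))

print(U @ U.adj() == I)                        # True: [P, P^{-1}] is unitary
print(U @ H @ U.adj() == DSwap(P @ A @ Pinv, P @ A @ Pinv))  # True: unitary similarity = similarity
```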
3 What we propose a spectrum theorem is for general *-algebras, and a conjecture
In this section, we define what we think a spectral theorem ought to be, and pose a conjecture. A proof of this would be like a (sometimes purely qualitative and non-constructive) generalisation of many matrix decompositions. We begin by defining terms. We recall some definitions related to monoids: A monoid is the same notion as a group, but without the requirement of the existence of inverses. An abelian monoid is a monoid where the product is commutative. The product in an abelian monoid is usually written using the additive symbol +. A free abelian monoid generated by a set S is the monoid whose underlying set is the set of functions of the form f : S → ℕ with finite support, added pointwise. We define a subfree abelian monoid to be a submonoid of a free abelian monoid.
Let (A, * ) be a * -algebra over R. Remark 3.1. We will later show that the Singular Value Decomposition for the *algebra (A, * ) is implied by the spectral theorem for the * -algebra (B, †) (called the "SVD algebra" for (A, * )) where B is the result of adjoining an element δ to A ⊕ A (where ⊕ denotes direct sum of algebras), such that a general element of B is of the form (x, y) + (y ′ , x ′ )δ, with: δ 2 = 0, δ(x ′ , y ′ ) = (y ′ , x ′ )δ, δ † = δ, (x, y) † = (x * , y * ). The spectral theorem for the enlarged * -algebra is more general than the SVD for the original * -algebra, but nevertheless provides insight into its SVD.
4 Introducing three * -algebras corresponding to the SVD, Takagi and skew-Takagi decompositions respectively
The "SVD * -algebra"
Consider the Clifford algebra Cl_{1,0,1}(R), which we will equip with a certain involution. To make the reader's life easier, we will describe this algebra explicitly. The elements of this algebra are all of the form (a, b) + (a′, b′)δ, where a, b, a′, b′ are all real numbers. The two pairs (a, b) and (a′, b′) are double numbers, i.e. numbers belonging to the algebra R ⊕ R. The number δ, on the other hand, is quite exotic. First of all, δ satisfies δ² = 0. Additionally, a number of the form (a′, b′)δ is essentially in its simplest form, but a number of the form δ(a′, b′) simplifies to (b′, a′)δ. δ therefore acts as a swapping operator; the swapping operation here is essentially identical to the involution *₋₁ defined earlier, (a, b) ↦ (b, a). The algebra Cl_{1,0,1}(R) is still not a *-algebra because we have not equipped it with an involution. We will define an involution *₁ which fixes the double numbers and fixes δ, so that ((a, b) + (a′, b′)δ)*₁ = (a, b) + (b′, a′)δ. We will denote this more simply as * unless this results in ambiguity. We will denote the corresponding *-algebra as (Cl_{1,0,1}(R), *₁).
We can write it as (M, K) + (M ′ , K ′ )δ with the obvious meaning.
What is the adjoint operation over (Cl_{1,0,1}(R), *₁)? For a matrix (M, K) + (M′, K′)δ it is (M^T, K^T) + (K′^T, M′^T)δ; consequently, a matrix of the form (M, K)δ is self-adjoint precisely when K = M^T, and a matrix (U, V) with no δ part is unitary precisely when U and V are individually orthogonal. We now consider the special case of the "spectral theorem" for only infinitesimal self-adjoint matrices. This would be a canonical form for an infinitesimal self-adjoint matrix under unitary similarity. Consider an infinitesimal self-adjoint matrix (M, M^T)δ: applying a unitary similarity turns it into (U, V)(M, M^T)δ(U, V)* = (UMV^T, (UMV^T)^T)δ, so the equivalence relation is exactly M ↦ UMV^T with U, V orthogonal. The canonical form must therefore be the same as the R-singular value decomposition. A unification is thus achieved between a special case of the spectral theorem over (Cl_{1,0,1}(R), *₁) (over only the infinitesimal self-adjoint matrices) and the singular value decomposition over R. Note that the reasoning is valid with respect to any *-field in place of R (including C).
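A concrete way to see an SVD fall out of a symmetric eigenproblem is the classical Jordan–Wielandt construction, sketched below. This is a hedged side note rather than the construction used in the paper: eigen-decomposing the real symmetric block matrix built from M and M^T yields eigenvalues ±σ and the singular vectors, in the generic case of distinct, nonzero singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n))

# Jordan-Wielandt matrix: real symmetric, with eigenvalues +/- the singular values of M.
H = np.block([[np.zeros((n, n)), M], [M.T, np.zeros((n, n))]])
w, V = np.linalg.eigh(H)
idx = np.argsort(-w)[:n]                  # the n non-negative eigenvalues = singular values
sigma = w[idx]
U = np.sqrt(2.0) * V[:n, idx]             # left singular vectors (generic case)
W = np.sqrt(2.0) * V[n:, idx]             # right singular vectors (generic case)

print(np.allclose(U @ np.diag(sigma) @ W.T, M))   # True for generic M
```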
Appendix remark: Note that we could have defined another involution here, and the reduction works in either direction.
The "Takagi * -algebra"
Consider the Clifford algebra Cl_{0,1,1}(R), which we will equip with a certain involution. To make the reader's life easier, we will describe this algebra explicitly. The elements of this algebra are all of the form (a + bi) + (a′ + b′i)δ, where a, b, a′, b′ are all real numbers. The two components a + bi and a′ + b′i are complex numbers. The number δ, on the other hand, is quite exotic. First of all, δ satisfies δ² = 0. Additionally, a number of the form (a′ + b′i)δ is essentially in its simplest form, but a number of the form δ(a′ + b′i) simplifies to (a′ − b′i)δ; δ therefore acts as a complex conjugation operator. The algebra Cl_{0,1,1}(R) is still not a *-algebra because we have not equipped it with an involution. We will define an involution *₁ which restricts to the standard complex conjugation on the complex numbers and fixes δ, so that ((a + bi) + (a′ + b′i)δ)*₁ = (a − bi) + (a′ + b′i)δ. We will denote this more simply as * unless this results in ambiguity. We will denote the corresponding *-algebra as (Cl_{0,1,1}(R), *₁).
What is then a self-adjoint matrix over (Cl 0,1,1 (R), * 1 )? It is of the form H + Sδ where H = H * and S = S T . We can consider an infinitesimal self-adjoint matrix to be of the form Sδ where S is complex-symmetric because δ behaves like an infinitesimal.
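To illustrate the claim numerically, the sketch below (an illustrative construction under our own assumptions, not a procedure taken from this paper) computes a Takagi factorisation of a complex symmetric matrix M = A + iB by eigen-decomposing the real symmetric matrix [[A, B], [B, −A]]; the con-eigenvectors x + iy are read off from the real eigenvectors [x; y]. It works in the generic case of distinct, nonzero singular values; degeneracies need extra care.

```python
import numpy as np

def takagi(M):
    """Takagi factorisation M = U diag(d) U^T of a complex symmetric M,
    via a real symmetric eigenproblem (generic case only)."""
    n = M.shape[0]
    A, B = M.real, M.imag
    S = np.block([[A, B], [B, -A]])       # real symmetric; eigenvalues come in +/- pairs
    w, V = np.linalg.eigh(S)
    idx = np.argsort(-w)[:n]              # keep the n non-negative eigenvalues
    d = w[idx]
    U = V[:n, idx] + 1j * V[n:, idx]      # con-eigenvectors x + i*y
    return U, d

rng = np.random.default_rng(0)
C = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
M = C + C.T                               # random complex symmetric matrix
U, d = takagi(M)
print(np.allclose(U @ np.diag(d) @ U.T, M))          # True (generically)
print(np.allclose(U.conj().T @ U, np.eye(4)))        # True (generically)
```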
The "skew-Takagi * -algebra"
Consider the same algebra Cl_{0,1,1}(R) as in the previous subsection, but with a different involution. Define *₋₁ to restrict to the standard complex conjugation on the complex numbers but to send δ to −δ. An infinitesimal self-adjoint matrix is then of the form Sδ with S complex skew-symmetric (S = −S^T). Applying a unitary similarity to it simplifies it to USU^Tδ, where U is some complex unitary matrix. The spectral theorem for infinitesimal self-adjoint matrices over (Cl_{0,1,1}(R), *₋₁) is thus the same as the skew-Takagi decomposition.
Remark about dualities between matrix decompositions
The above is surprising because the Takagi decomposition is usually seen as a special case of the SVD. What we've uncovered though is instead a duality between them. Takagi is not a special case of SVD, but is its dual. This duality suggests that somehow, whatever "works" (let's say in numerical computing) for the SVD should also work over Takagi. Since the SVD is more thoroughly studied, this suggests that one can transfer the numerical theory of the SVD wholesale onto the Takagi and skew-Takagi decompositions.
The skew-Takagi decomposition is obviously dual to the Takagi decomposition in a different way to how it is dual to the SVD.
Notice that when diagonalising a self-adjoint matrix H over C, the following steps are taken: 1. Find an eigenvector v of H.
2. Find a basis B for the orthogonal complement of v, written v ⊥ .
3. Restrict H to v ⊥ by using the basis B.
4. Return to the first step.
The trick is to realise that the SVD and Takagi follow the same plan.
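The four-step plan can be turned into code almost verbatim. The sketch below is our own assumption-laden illustration, not the paper's algorithm: it diagonalises a complex Hermitian matrix by finding one eigenvector with a crude power iteration, building an orthonormal basis of its orthogonal complement, restricting, and recursing; a serious implementation would replace step 1 with a robust eigensolver, but the structure of the loop is the point.

```python
import numpy as np

def diagonalise_hermitian(H, iters=1000, rng=np.random.default_rng(0)):
    """Follow the four-step plan: find one eigenvector, restrict to its
    orthogonal complement, and recurse. Assumes generic input."""
    n = H.shape[0]
    if n == 0:
        return np.zeros((0, 0), dtype=complex), np.zeros(0)
    # Step 1: an eigenvector of H, via power iteration on a shifted copy
    # (the shift makes the largest eigenvalue of H the dominant one in magnitude).
    A = H + (np.linalg.norm(H) + 1.0) * np.eye(n)
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = float(np.real(v.conj() @ H @ v))
    # Step 2: an orthonormal basis B of the orthogonal complement of v.
    Q, _ = np.linalg.qr(np.column_stack([v, np.eye(n, dtype=complex)]))
    B = Q[:, 1:]
    # Step 3: restrict H to v-perp in the basis B; Step 4: recurse.
    U_sub, d_sub = diagonalise_hermitian(B.conj().T @ H @ B, iters, rng)
    U = np.column_stack([v, B @ U_sub])
    return U, np.concatenate([[lam], d_sub])

H = np.array([[2.0, 1 - 1j], [1 + 1j, 0.0]])
U, d = diagonalise_hermitian(H)
print(np.allclose(U @ np.diag(d) @ U.conj().T, H))   # True
```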
Step 1 is usually the most complicated, but there are approaches which sometimes work, depending on the *-algebra. A good approach can be called the unpack-and-unwind method (for lack of a better name). Given a *-algebra (A, *) and a sub-*-algebra B, we define a function "unpack" that sends an (A, *)-matrix to a (B, *)-matrix. For example, consider the matrix (M, M^T)δ over (Cl_{1,0,1}(R), *₁), and let B be the dual numbers. Finding an eigenvector of the unpacked matrix and unwinding it produces a vector v′; by the injectivity of unwind, we can cancel to conclude that v′ is an eigenvector of (M, M^T)δ.
Note that all of this is not the same thing as reducing SVD or Takagi to the diagonalisation of some suitable self-adjoint C-matrix. These tricks are wellknown, suboptimal, and they require additional post-processing steps which the method we're proposing doesn't.
5 Infinitesimal spectral theorems, and the resulting equivalence relations which give rise to numerous matrix decompositions
Understanding δ in general
The algebras above -that is, Cl 0,1,1 (R) and Cl 1,0,1 (R), and without considering involutions -could be obtained from C( ∼ = Cl 0,1,0 (R)) or R ⊕ R( ∼ = Cl 1,0,0 (R)) respectively in multiple ways. On the one hand, they could be obtained by introducing a nilsquare generator δ into the Clifford algebra to obtain a larger Clifford algebra.
A broader direction of generalisation is also apparent: We introduced an element δ into some algebra A over some field F such that δz = φ(z)δ where φ : A → A is some function. This is a lot like the Cayley-Dickson construction. What properties should φ satisfy?
To have associativity, we will need to have φ(wz) = φ(w)φ(z). Why? Because by associativity, we have that φ(wz)δ = δ(wz) = (δw)z = (φ(w)δ)z = φ(w)φ(z)δ, and hence φ(wz) = φ(w)φ(z). This already rules out the standard quaternion involution, or the matrix transpose, as possible instantiations of φ.
φ will need to be linear over the underlying field. Why? Because δ(z + w) = δz + δw and δ(cz) = c(δz) for every scalar c in F, which forces φ(z + w) = φ(z) + φ(w) and φ(cz) = cφ(z).
Notice though that it would not suffice for φ to be an antiautomorphism. The usual quaternion involution is actually an antiautomorphism and not an automorphism -as is the matrix transpose -and so could not be used as a φ.
We can now state: let A[δ] denote the algebra obtained by adjoining to A an element δ such that δ 2 = 0 and δz = φ(z)δ for some function φ : A → A and all z ∈ A. The result is an associative algebra over F if and only if φ is an algebra automorphism of A.
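As a small sanity check of this statement, the following sketch (our own illustration; numpy assumed, names ours) represents an element a + bδ of A[δ] as the pair (a, b), with A taken to be the 2×2 real matrices, multiplies pairs with the rule derived above, and tests associativity numerically for two choices of φ: the identity (an automorphism) and the transpose (only an anti-automorphism).

```python
import numpy as np

def mul(x, y, phi):
    """(a1 + b1*delta)(a2 + b2*delta) using delta^2 = 0 and delta*z = phi(z)*delta."""
    a1, b1 = x
    a2, b2 = y
    return (a1 @ a2, a1 @ b2 + b1 @ phi(a2))

def is_associative(phi, trials=100):
    rng = np.random.default_rng(2)
    for _ in range(trials):
        x, y, z = [tuple(rng.standard_normal((2, 2)) for _ in range(2)) for _ in range(3)]
        lhs = mul(mul(x, y, phi), z, phi)
        rhs = mul(x, mul(y, z, phi), phi)
        if not all(np.allclose(l, r) for l, r in zip(lhs, rhs)):
            return False
    return True

print(is_associative(lambda a: a))    # identity: an automorphism       -> True
print(is_associative(lambda a: a.T))  # transpose: anti-automorphism    -> False
```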
The unifying list
Each matrix decomposition we consider presents a canonical form for a matrix under some equivalence relation. In our statement of conjecture 3.1, we speculate that this canonical form has a particularly nice structure for the general type of equivalence relation we consider here.
Some matrix decompositions are precisely equivalent to spectral theorems over * -algebras. We've seen this with the spectral theorem for (R ⊕ R, * −1 ), which is precisely equivalent to the Jordan decomposition. But oftentimes, the equivalence only holds for so-called infinitesimal matrices, with the non-infinitesimal spectral theorem being strictly more general than is needed to describe a classic matrix decomposition.
A matrix M over the * -algebra R [[δ]] is called infinitesimal if it is equal to Kδ for some (R, * )-matrix K. An infinitesimal self-adjoint matrix H is one which is infinitesimal and self-adjoint.
The unitary-similarity relation specialised to infinitesimal self-adjoint matrices is equivalent (depending on R, * , φ, s) to numerous equivalence relations on R-matrices, with a likely spectral theorem for each of them. One entry in the list: consider four symmetric bilinear forms B 1 , B ′ 1 , B 2 , B ′ 2 : V ⊗ V → R, the first two being non-degenerate, over an R-vector space V , with φ(a, b) = (b, a); we don't consider this one further. There are other special cases of spectral theorems over * -algebras that are equivalent to other matrix decompositions.
6 The unpack-and-unwind method for computing and verifying existence of some classic matrix decompositions

The following theorems aren't new, but are proved using the same technique. These are special cases of spectral theorems which we will prove fully later. Note that by C, we will mean the * -algebra of complex numbers equipped with their standard involution: (a + bi) * = a − bi. We won't write this (C, * −1 ) for the sake of readability, but be aware that the complex numbers may (outside of this paper) be equipped with a different involution.
Below, we piecemeal define operations we call unpack and unwind. The unwind operation acts on column vectors, and is at least partially a mere change of scalars. The unpack operation is then defined such that unpack(M ) unwind(v) = unwind(M v) for all v. We say that unwind is only partially a change of scalars, because it changes the scalar * -algebra to one of its subalgebras, but in such a way that for instance the "length squared" of a vector might be preserved.
For example, the following would be a bad way to define unwind from C-vectors to R-vectors: unwind(u + vi) = (u + v, u) T . The problem with this definition is that if z = u + vi, then z * z ≠ unwind(z) * unwind(z) under this proposal.
In some instances below, we unfortunately cannot define unwind such that v * v = unwind(v) * unwind(v), but a seemingly natural definition of unwind is still possible, and we still give one. Such difficulties arise in any ring which contains nontrivial idempotents (i.e. solutions to x 2 = x which are not either 0 or 1): For example, in the ring R ⊕ R.
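For contrast, here is one choice that does work for B = R inside A = C, written as a short numerical sketch (our own conventions and numpy assumed, not notation fixed by the text): unwind stacks real and imaginary parts, and unpack is the corresponding 2n × 2n real matrix.

```python
import numpy as np

def unwind(v):                               # C^n -> R^(2n): stack real and imaginary parts
    return np.concatenate([v.real, v.imag])

def unpack(M):                               # n x n over C  ->  2n x 2n over R
    return np.block([[M.real, -M.imag],
                     [M.imag,  M.real]])

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

print(np.allclose(unpack(M) @ unwind(v), unwind(M @ v)))        # compatibility with products
print(np.isclose(unwind(v) @ unwind(v), (v.conj() @ v).real))   # "length squared" is preserved
```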
In general, the unwind and unpack operations ought to satisfy unpack(M ) unwind(v) = unwind(M v) for all v and, where possible, unwind(v) * unwind(v) = v * v.

Proposition 6.1 (Takagi decomposition). Every complex symmetric matrix M can be written as M = U DU T with U complex unitary and D real diagonal.

Proof. Observe that while M is not Hermitian, we may introduce a new scalar δ such that δ 2 = 0, δi = −iδ and δ * = δ. We see that M δ is now a (Cl 0,1,1 (R), * 1 )-matrix, and is indeed self-adjoint (over (Cl 0,1,1 (R), * 1 )) because (M δ) * = M T δ = M δ. We might then hope to unitarily diagonalise M δ. We seek to show that a (Cl 0,1,1 (R), * 1 )-unitary diagonalisation of M δ, should it exist, gives rise to a Takagi decomposition of M . To see this, we begin with assuming that a (Cl 0,1,1 (R), * 1 )-unitary diagonalisation exists: M δ = U DU * for (Cl 0,1,1 (R), * 1 )-matrices U and D where U is (Cl 0,1,1 (R), * 1 )-unitary and D is (Cl 0,1,1 (R), * 1 )-diagonal. We write U = U 0 + U 1 δ and D = D 0 + D 1 δ and compare components: the δ 0 part forces D 0 = 0, and the δ 1 part gives M = U 0 D 1 U 0 T with U 0 complex unitary and D 1 diagonal; absorbing phases into U 0 makes the diagonal real, which is a Takagi decomposition of M . It remains to establish that such a diagonalisation exists; for this we use unpack-and-unwind. Ignoring the ε, we have a self-adjoint R-matrix. By the R-spectral theorem, we obtain an R-eigenvector v of unpack(M δ). Unfortunately, v is not an eigenvector of M δ, but only of unpack(M δ). By the expression for unwind above, we have that there is a C-vector v ′ such that unwind(v ′ ) = v. We see that v ′ is an eigenvector of M δ.
Since M δ is self-adjoint, it maps the orthogonal complement of v ′ to itself. We can define a matrix representation of M δ over this subspace. Since v ′ is a C-vector, a (Cl 0,1,1 (R), * 1 )-orthonormal basis B for (v ′ ) ⊥ is obtained simply from the C-orthogonal complement of v ′ . The matrix representation of M δ over (v ′ ) ⊥ is obtained using the basis B. The result is a self-adjoint (Cl 0,1,1 (R), * 1 )-matrix M ′ δ with one less dimension than M δ. We repeat by finding another eigenvector, restricting to the orthogonal complement of that, etc. Formally, this proof is by induction.

Given a real matrix M , the argument below builds an existence-of-SVD proof around it, where we ensure that the unitary diagonalisation we obtain has the block structure we need. For the dual-number spectral theorem(s) and initial proofs of the corresponding SVD(s), see [2]. Note that we say "(s)" because there is a separate dual-number spectral theorem (and SVD) for each of the two possible involutions over the dual numbers.
Proof. Observe that (M, M T )δ is self-adjoint over (Cl 1,0,1 (R), * 1 ). We might then reasonably hope to obtain a unitary diagonalisation of (M, M T )δ; we seek to show that a (Cl 1,0,1 (R), * 1 )-unitary diagonalisation of it gives rise to an SVD of M . Applying unpack and the dual-number spectral theorem yields an eigenvector w of unpack((M, M T )δ). By the expression for unwind above, we have that there is an R-vector (u, v) such that unwind((u, v)) = w. We see that (u, v) is an eigenvector of (M, M T )δ.
Since (M, M T )δ is self-adjoint, it maps the orthogonal complement of (u, v) to itself. We can define a matrix representation of (M, M T )δ over this subspace, and the construction proceeds by induction as before.

Proof. Observe that while M is not Hermitian, we may introduce a new scalar δ such that δ 2 = 0, δi = −iδ and δ * = −δ. We see that M δ is now a (Cl 0,1,1 (R), * −1 )-matrix, and is indeed self-adjoint (over (Cl 0,1,1 (R), * −1 )) because (M δ) * = −M T δ = M δ when M is skew-symmetric. We can now hope to employ some analogue of the spectral theorem. It's easily seen that if for a matrix K we have that Kδ is unitarily similar to M δ, then there is a U such that U M U T = K. We intend to find a canonical K.
Since M δ is self-adjoint, it maps the orthogonal complement of span{u ′ , v ′ } to itself. We can define a matrix representation of M δ over this subspace. Since u ′ and v ′ are C-vectors, a (Cl 0,1,1 (R), * −1 )-orthonormal basis B for span{u ′ , v ′ } ⊥ is obtained simply from the C-orthogonal complement of span{u ′ , v ′ }. The matrix representation of M δ over span{u ′ , v ′ } ⊥ is obtained using the basis B. The result is a self-adjoint (Cl 0,1,1 (R), * −1 )-matrix M ′ δ with two fewer dimensions than M δ. We repeat the above with M ′ δ. Formally, this proof is by induction.
Proof. We are not able to employ the same trick as for a skew-Hermitian matrix over the complex numbers (with their usual involution). We employ unpack-and-unwind instead. We have that unpack(M ) is a skew-symmetric R-matrix. By the skew-symmetric R-spectral theorem, we obtain a pair of R-eigenvectors u and v such that there exists a µ ∈ R for which unpack(M )u = −µv and unpack(M )v = µu. Unfortunately, u and v are not vectors over the same algebra as M . By the expression for unwind above, we have that there are C-vectors u ′ and v ′ such that unwind(u ′ ) = u and unwind(v ′ ) = v. We see that M u ′ = −µv ′ and M v ′ = µu ′ . Since M is skew-Hermitian, it maps the orthogonal complement of span{u ′ , v ′ } to itself. We can define a matrix representation of M over this subspace. We know that all modules over the quaternions admit a basis, and this basis can always be orthonormalised. We restrict M to the orthogonal complement of span{u ′ , v ′ } by use of a basis, and obtain a matrix M ′ . Formally, this proof is by induction.
We define unpack(A +
There are many more examples of this trick, for instance involving some matrix decompositions over the dual numbers, reducing them to the dual-number spectral theorem [2].

7 Proving the spectral theorem for the "SVD * -algebra" (Cl 1,0,1 (R), * 1 )

In this section, we prove a spectral theorem for (Cl 1,0,1 (R), * 1 ).
Some preliminaries on notation:
In the following arguments, we sometimes conflate a dual number a + bε with a member of (Cl 1,0,1 (R), * 1 ) of the form a + bδ. We define st(a + bε) = a (the "standard part") and nst(a + bε) = b (the "non-standard part"). Be aware that when we write (a, b), we mean a member of (Cl 1,0,1 (R), * 1 ), and not a row vector; we don't presently expect this to cause confusion. When we write (M, K) (or some other capital letters), we mean a matrix over (Cl 1,0,1 (R), * 1 ) of the form M (1, 0) + K(0, 1).

Some facts we're assuming: We make extensive use of the fact that every dual-number matrix satisfying S = S T also satisfies a form of the spectral theorem: S = U DU T holds for a unique diagonal matrix D over the dual numbers and some orthogonal matrix U over the dual numbers (i.e. satisfying U T = U −1 ).

Lemma 7.1. If a self-adjoint (Cl 1,0,1 (R), * 1 )-matrix H admits a pair of vectors w L and w R such that: 1. w L = w L (1, 0) and w R = w R (0, 1); 2. w * L w L = (1, 0) and w * R w R = (0, 1); 3. there exist real numbers λ L and λ R for which Hw L = w L λ L and Hw R = w R λ R ; then w = w L + w R is a unit eigenvector of H with eigenvalue (λ L , λ R ).

Proof. From 1, we may expand w L to w L = (u, 0) + δ(u ′ , 0) and w R to w R = (0, v) + δ(0, v ′ ). It's easy to see that w is an eigenvector of H of eigenvalue (λ L , λ R ). It remains to consider w * w.
We have that w * w = A + Bδ for some A and B. We know that A = 1 because u * u = v * v = 1 (from item 2). We also get that B is real because Bδ = w * w−1 = (w * w − 1) * = (Bδ) * .
We seek to show that v * v is in D: From the previous paragraph, we get that v * v = 1/2 + Xδ for some X. Clearly, (v * v) * = v * v, so we conclude X is real.
We prove the main claim: Now let w = v(v * v) −1/2 . Clearly, w * w = 1, as we sought, and w remains an eigenvector of H.

Observe that from unpack(H) unwind(u i ) = unwind(u i )λ i , we get Hu i = u i λ i . Furthermore, we may scale u i by the scalars (1, 0) and (0, 1) and the analogous identity remains true; more explicitly, we have that Hu i (1, 0) = u i (1, 0)λ i and Hu i (0, 1) = u i (0, 1)λ i .
We seek to show that for each i, at least one of u i (1, 0) or u i (0, 1) is not a multiple of δ: If they are both multiples of δ, then u i is also a multiple of δ because u i = u i (1, 0) + u i (0, 1). But then unwind(u i ) is a multiple of ε. This is clearly impossible because unwind(u i ) has unit length.
We seek to show that it is not the case that u i (1, 0) is a multiple of δ for every i: Assume otherwise. We have that the unwind of a multiple of δ is a multiple of ε, and this leads to a contradiction. We seek to show that it is not the case that u i (0, 1) is a multiple of δ for every i: Same argument as above.
We seek to show that there exists a unit eigenvector of H: Pick a u i such that u i (1, 0) is not a multiple of δ. Either there exists a u j such that u j (0, 1) is not a multiple of δ and λ i = λ j , or there doesn't: • If there doesn't, then we conclude that all eigenvalues of unpack(H) are the same real number λ. But then unpack(H) is a real multiple of the identity matrix. Therefore so is H. Any vector is now an eigenvector of H, and so we are done.
• If there does, then pick this i and j. Let w = u i (1, 0) + u j (0, 1). We see that Hw = w(λ i , λ j ). It remains to show that w * w = 1. But this follows from lemma 7.1, so we are done.
The following lemma is necessary to be able to take orthogonal complements. In general, this can be quite complicated if we only assume the conditions P 2 = P and P * = P . We therefore need to add the condition that P should be displaced from a real orthogonal projection Q by only a multiple of δ.
Lemma 7.4. If P 2 = P , P * = P and P = Q + δ(K, K T ) for some real projection matrix Q and some real matrix K, then P is unitarily diagonalisable with eigenvalues either 0 or 1.
Proof. The real matrix Q can be unitarily diagonalised using a real matrix U . We therefore have that U P U * = diag(1, . . . , 1, 0, . . . , 0) + δ(L, L T ) for some real matrix L. We write U P U * as a block matrix for clarity. If we square this block matrix and recall that (U P U * ) 2 = U P U * , we get that L 11 = L 22 = 0. We can finally diagonalise U P U * using the block matrix V = [[I, δ(L 12 , L T 12 )], [−δ(L 12 , L T 12 ), I]].

Theorem (spectral theorem for (Cl 1,0,1 (R), * 1 )). Every self-adjoint (Cl 1,0,1 (R), * 1 )-matrix H is unitarily diagonalisable.

Proof. We prove this by induction on k where H is k × k.
The claim is clearly true for k = 0.
Either all the eigenvalues are real or they're not.
Consider the case when some eigenvalue λ i is not real, but dual. There is a u ′ i such that unwind(u ′ i ) = u i . Lemma 7.2 implies that for w obtained by normalising u ′ i , we have that Hw = wλ i and w * w = 1. We have that λ in this case is a dual number. By lemma 7.4, we may take the orthogonal complement of w, restrict H to w ⊥ , and apply the induction hypothesis. We are done.
Consider the case when all the eigenvalues are real: By lemma 7.3, we may find a unit eigenvector w; λ in this case is a pair (x, y) ∈ R 2 . By lemma 7.4, we may take the orthogonal complement of w. We restrict H to w ⊥ and apply the induction hypothesis. We are done.

Definition 7.1. We define the spectrum of a self-adjoint (Cl 1,0,1 (R), * 1 )-matrix H (which we will later show is unique) by a triple of finite multisets (C, L, R) where: • C consists of dual numbers, while L and R consist of real numbers; • H is unitarily similar to c ⊕ (l, r) where c, l and r are diagonal matrices whose entries belong to C, L and R respectively.

To see that the spectrum is unique, suppose that H admits two unitary diagonalisations: • One where the unitary eigenbasis is {u 1 , u 2 , . . . , u n } with the eigenvalues coming from the spectrum (C, L, R). • One where the unitary eigenbasis is {u ′ 1 , u ′ 2 , . . . , u ′ n } with the eigenvalues coming from the spectrum (C ′ , L ′ , R ′ ). Say that the vectors u 1 , . . . , u k carry the dual eigenvalues and u k+1 , . . . , u n carry the real pairs. Observe that {u 1 , u 1 (1, −1), u 2 , u 2 (1, −1), . . . , u k , u k (1, −1)} ∪ {u k+1 (1, 0), u k+1 (0, 1), . . . , u n (1, 0), u n (0, 1)} form a spanning set. Furthermore, applying unwind retains the spanning property. We get that each dual λ i occurs among the eigenvalues of unpack(H) with multiplicity two. Since the eigenvalues of unpack(H) are unique, this establishes that C = C ′ and L + R = L ′ + R ′ . By projecting st(H) on its two components, and applying the uniqueness of real eigenspectra, we get that L = L ′ and R = R ′ .
8 Proving the spectral theorem for the "Takagi * -algebra" (Cl 0,1,1 (R), * 1 )

Theorem. Every self-adjoint (Cl 0,1,1 (R), * 1 )-matrix is unitarily similar to a diagonal matrix with entries of the form λ + λ ′ δ, where λ and λ ′ are real.

Proof. Let M be a self-adjoint (Cl 0,1,1 (R), * 1 )-matrix. We find a complex unitary matrix S such that S st(M )S * is diagonal. We let M ′ = SM S * , which we write as a block matrix where each B ii is complex symmetric. We let P be a suitable unitary matrix and let M ′′ = P M ′ P * . We end up with M ′′ being equal to a direct sum of matrices: M ′′ = (λ 1 I + B 11 δ) ⊕ (λ 2 I + B 22 δ) ⊕ · · · ⊕ (λ n I + B nn δ). We finally use the Takagi decomposition (whose existence we proved for a general complex-symmetric matrix using the unpack-and-unwind method in proposition 6.1) to find matrices Q i such that Q i B ii Q T i is equal to a real diagonal matrix. We thus get that (Q 1 ⊕ Q 2 ⊕ · · · ⊕ Q n )M ′′ (Q 1 ⊕ Q 2 ⊕ · · · ⊕ Q n ) * is a diagonal matrix.

9 Proving the spectral theorem for the "skew-Takagi * -algebra" (Cl 0,1,1 (R), * −1 )

Presently, this result has been proved in a paper on arXiv where I have been promised coauthorship.

Theorem 9.1. Every self-adjoint (Cl 0,1,1 (R), * −1 )-matrix H is unitarily similar to a direct sum of 2 × 2 blocks of the form λI + δ[[0, −λ ′ ], [λ ′ , 0]] and 1 × 1 blocks of the form (λ), with λ and λ ′ real.

Proof. Let M be a self-adjoint (Cl 0,1,1 (R), * −1 )-matrix. We find a complex unitary matrix S such that S st(M )S * is diagonal. We let M ′ = SM S * , which we write as a block matrix, and let M ′′ = P M ′ P * for a suitable unitary P . We end up with M ′′ being equal to a direct sum of matrices: M ′′ = (λ 1 I + B 11 δ) ⊕ (λ 2 I + B 22 δ) ⊕ · · · ⊕ (λ n I + B nn δ). We finally use the skew-Takagi decomposition (whose existence is proved in 6.3 using unpack-and-unwind) to find matrices Q i such that Q i B ii Q T i is equal to a direct sum of matrices of the form [[0, −λ ′ i ], [λ ′ i , 0]] and (0) (with the last type of block only occurring once). We thus get that (Q 1 ⊕ Q 2 ⊕ · · · ⊕ Q n )M ′′ (Q 1 ⊕ Q 2 ⊕ · · · ⊕ Q n ) * is of the required form.
10 Spectral theorems for Cl p,q,0 (R) (for 1 involution) and Cl p,q,1 (R) (for 2 involutions)

The results here follow easily from those of the previous sections, and the classification theorem for Clifford algebras over R.
The following has been proven elsewhere [4], and we recall it.
Lemma 10.1. Every (Cl p,q,0 (R), * ) is isomorphic to one of the following * -algebras:

Proof. Using lemma 10.1, we prove our theorem for each of the seven cases within that lemma in turn: 1. First, H has a real eigenvector v with real eigenvalue. This follows either from the Fundamental Theorem of Algebra or from Lagrange multipliers (we won't show the details here because they're well covered elsewhere). Finally, restrict H to the orthogonal complement of v. H over this orthogonal complement is still self-adjoint, so the construction can be repeated.

Proof. Let M be a self-adjoint (Cl 1,1,1 (R), * −1 )-matrix. We find a quaternion unitary matrix S such that S st(M )S * is diagonal. We let M ′ = SM S * , which we write as a block matrix
Development of a nucleic acid detection method based on the CRISPR-Cas13 for point-of-care testing of bovine viral diarrhea virus-1b
Abstract Bovine viral diarrhea (BVD) is a single-stranded, positive-sense ribonucleic acid (RNA) virus belonging to the genus Pestivirus of the Flaviviridae family. BVD frequently causes economic losses to farmers. Among bovine viral diarrhea virus (BVDV) strains, BVDV-1b is predominant and widespread in Hanwoo calves. Reverse-transcription polymerase chain reaction (RT-PCR) is an essential method for diagnosing BVDV-1b and has become the gold standard for diagnosis in the Republic of Korea. However, this diagnostic method is time-consuming and requires expensive equipment. Therefore, Clustered regularly interspaced short palindromic repeats-Cas (CRISPR-Cas) systems have been used for point-of-care (POC) testing of viruses. Developing a sensitive and specific method for POC testing of BVDV-1b would be advantageous for controlling the spread of infection. Thus, this study aimed to develop a novel nucleic acid detection method using the CRISPR-Cas13 system for POC testing of BVDV-1b. The sequence of the BVD virus was extracted from National Center for Biotechnology Information (NC_001461.1), and the 5’ untranslated region, commonly used for detection, was selected. CRISPR RNA (crRNA) was designed using the Cas13 design program and optimized for the expression and purification of the LwCas13a protein. Madin Darby bovine kidney (MDBK) cells were infected with BVDV-1b, incubated, and the viral RNA was extracted. To enable POC viral detection, the compatibility of the CRISPR-Cas13 system was verified with a paper-based strip through collateral cleavage activity. Finally, a colorimetric assay was used to evaluate the detection of BVDV-1b by combining the previously obtained crRNA and Cas13a protein on a paper strip. In conclusion, the CRISPR-Cas13 system is highly sensitive, specific, and capable of nucleic acid detection, making it an optimal system for the early point-of-care testing of BVDV-1b.
INTRODUCTION
Bovine viral diarrhea (BVD) is an economically significant disease in cattle and is found in most countries worldwide.Infection and death associated with BVD lead to significantly reduced reproductive performance and increased premature culling.These clinical signs are especially pronounced when one or more BVD carriers are present in a herd.Animals that develop acute diarrhea and fever may die or have long, costly recovery periods with decreased production and growth performance.Over the past 10 years, BVD has occurred frequently, causing economic losses to farmers in the domestic livestock industry [1].The major pathogens causing calf diarrhea in these reports were viruses (bovine coronavirus [BCV], bovine rotavirus group A [BRV], and bovine viral diarrhea virus [BVDV]), bacteria (Escherichia coli K99 and Salmonella spp.), and protozoa (Cryptosporidium parvum and Eimeria spp.).Some of these agents are detected not only in diarrheic calves but also in normal calves [2,3].BVDV is predominant in most parts of the world, with a high prevalence, persistence, and clinical consequences [4].BVDV is an enveloped, positive-sense, linear, single-stranded RNA virus (12.5 kb) of the genus Pestivirus in the family Flaviviridae.BVDV can be divided into two types (BVDV1 and BVDV2) based on sequence similarity of the 5' untranslated region (UTR) in the viral genome [5].BVDV transmission occurs both horizontally and vertically, with persistently and transiently infected animals excreting the virus.The virus is transmitted via direct contact, bodily secretions, and contaminated fomites, and can persist in the environment for more than two weeks.Persistently infected animals are the most important source of the virus, continuously excreting a viral load 1000 times higher than that shed by acutely infected animals [6].Laboratory testing for identifying BVDV has typically been performed with methods such as enzyme-linked immunosorbent assay (ELISA) and polymerase chain reaction (PCR).However, these are costly and require precise instruments, making it imperative to establish a fast, accurate, and efficient method for detecting BVDV [7][8][9].Quick and accurate point-of-care (POC) testing for BVDV would greatly enhance diagnostic capacity, leading to improved quarantine and disease prevalence control.There are several POC RNA detection technologies that do not require special instruments, such as reverse transcription recombinase polymerase amplification (RT-RPA) [10] and reverse transcription-loop-mediated isothermal amplification (RT-LAMP) [11].RT-RPA and RT-LAMP are highly sensitive methods for detecting viral RNA; however, they are prone to non-specific amplification under isothermal conditions.This can lead to false positive results when used for viral RNA detection [12].This problem worsens when non-sequence-specific probes, such as pH-sensitive dyes, are used.The use of sequence-specific probes, such as hybridizationbased fluorescent oligonucleotide probes, can improve detection sensitivity [13].Diagnostic methods have been developed based on clustered regularly interspaced short palindromic repeats (CRISPR), which utilize the collateral cleavage activity of bystander nucleic acid probes by RNAguided CRISPR-associated 12/13 (Cas12/13) nucleases [14].CRISPR diagnostic methods are both highly sensitive (at the attomolar level) and specific (down to the single-nucleotide level) [15].The readout of Cas-mediated nucleic acid probe cleavage can be detected using fluorescence or the lateral-flow strip 
method. The latter is advantageous because the strips are portable, and the results can be easily read with the naked eye [16]. Cas13a, previously known as a single-effector RNA-guided ribonuclease (RNase), can detect the presence of an RNA target using CRISPR RNAs (crRNAs), providing a platform for specific RNA sensing [17]. Several studies have reported that Cas13a-based molecular detection systems can detect both Zika and Dengue viruses [15]. Recently, a nucleic acid detection method based on CRISPR-Cas13a was developed as a new system for the early diagnosis of BVDV [9]. This study aimed to establish a nucleic acid detection method based on the lateral flow strip method for early POC detection of BVDV-1b using CRISPR-Cas13a.
Bovine viral diarrhea virus-1b RNA extraction and cDNA synthesis
Bovine viral diarrhea virus-1b RNA extraction
MDBK cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), penicillin, and streptomycin at 37℃ with 5% CO 2 . MDBK cells were obtained from the Korean Cell Line Bank, and BVDV-1b was obtained from the Korea Veterinary Culture Collection (KVCC). MDBK cells at 80%-90% confluency were washed twice with phosphate-buffered saline (PBS) and then infected with BVDV-1b diluted to a concentration of 10 2 -10 3 using DMEM. The cells were then incubated at 37℃ with 5% CO 2 for 60-90 minutes to allow the virus to adsorb onto them. Infected MDBK cells were cultured for more than 48 h in DMEM supplemented with 3% FBS. The cells were observed daily for cytopathic effects (CPE). The cells were lysed by three repeated freeze-thaw cycles, and the lysate was centrifuged for 10 minutes at 494×g in a 15 mL tube. The supernatant was aspirated and stored at -70℃ until further use. The viral RNA of BVDV-1b was extracted using the Viral Gene-spin Viral DNA/RNA Extraction Kit (iNtRON Biotechnology, Seongnam, Korea), following the manufacturer's instructions.
cDNA synthesis
The extracted viral RNA of the BVDV-1b was reverse transcribed into complementary DNA (cDNA) using the AccuPower® RT Master Mix (Bioneer, Daejeon, Korea), following the manufacturer's instructions.The template RNA, for ward primer (5'-GCCATGCCCTTAGTAGGACT-3'), reverse primer (5'-T7 promoter sequence-CGAACCACTGACGA CTACCC-3'), RNase inhibitor, and ddH 2 O were combined in a 1.5 mL tube.The mixture was incubated at 70℃ for 5 min and then placed on ice.AccuPower® RT Master Mix and DNA polymerase (Bioneer) was added to the PCR tube to achieve a final volume of 50 µL.The RT-PCR reaction was performed by incubating the mixture at 42℃ for 60 minutes, followed by 5 minutes at 95℃ and then 35 cycles of DNA amplification, which consisted of 30 seconds at 95℃, 30 seconds at 54℃-67℃, and 1 min at 72℃.A final extension step of 5 minutes at 72℃ was included.The cDNA was stored at −20℃ until use.
Design of crRNA, forward/reverse primer, direct repeat sequence
The 5'UTR region of the BVDV FASTA sequence (NC_001461.1)was obtained from National Center for Biotechnology Information (NCBI).Forward and reverse primers were designed using the NCBI Primer-BLAST.For detection reactions involving Cas13, a T7 promoter sequence must be added to the 5ʹ end of the forward or reverse primer to enable T7 transcription.The gRNA sequence was designed using the Cas13 online platform (https://cas13design.nygenome.org).The 5ʹ end of the gRNA sequence is then combined with a direct repeat (DR) sequence to form a complete crRNA [17].The primers and crRNAs were designed as shown in Table 1.
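The assembly of a crRNA from a chosen target site can be illustrated with a short sketch. This is our own illustration, not the authors' pipeline: the spacer is written as the reverse complement of the target site in the RNA alphabet, and a direct-repeat (DR) sequence is prepended at the 5′ end. Both the DR placeholder and the target sequence below are hypothetical and are not the sequences used in the study.

```python
def reverse_complement_rna(dna_target: str) -> str:
    """Spacer = reverse complement of the DNA target site, written in the RNA alphabet."""
    comp = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(dna_target.upper()))

def make_crrna(direct_repeat: str, dna_target: str) -> str:
    """Full crRNA = direct repeat (5' end) followed by the spacer."""
    return direct_repeat + reverse_complement_rna(dna_target)

# Placeholders only: neither sequence below is from the study.
example_dr = "N" * 36                              # stand-in for the LwCas13a direct repeat
example_target = "ACGTACGTACGTACGTACGTACGTACGT"    # hypothetical 28-nt target site
print(make_crrna(example_dr, example_target))
```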
Purification of LwCas13a
The cell pellet was resuspended in 20 mL of supplemented lysis buffer (composed of 50 mL of lysis buffer, 1 tablet of cOmplete TM Ultra EDTA, 0.05 g of lysozyme, and 0.5 µL of benzonase) by stirring the mixture on ice for 30 min.The resuspension was sonicated using a Vibra cell (Sonics, Newtown, CT, USA) for 10 min with a 1-second on and 2-second off cycle at a 30% pulse amplitude.The lysate was cleared by centrifugation at 10,174×g for 30 minutes at 4℃, after which the supernatant was transferred to a new 50 mL conical tube.The supernatant was added to 1.25 mL of Strep-Tactin Superflow Plus resin (Qiagen, Hilden, Germany), and the mixture was gently shaken at 4℃ for 2 hours to bind the Twinstrep-SUMO-huLwCas13a protein to the resin.The glass Econo-column® (BIO-Rad, Hercules, CA, USA) was prepared by washing the column with cold lysis buffer (4 mL of Tris-HCl [1M, pH8.0], 20 mL of NaCl [5M], and 0.2 mL of DTT [1M], adjusted to a final volume of 200 mL with ddH 2 O) on ice.The solution was poured and allowed to flow through the column.The column with the binding resin was washed by adding 18.75 mL of cold lysis buffer.Then, 3.75 mL of SUMO protease cleavage solution (3 µL of SUMO-protease, 7.5 µL of IGEPAL® CA-630, and 5 mL of lysis buffer) was added to the column and it was covered.The mixture was gently shaken at 4℃ for 16 h.The SUMO cleavage sample was collected in a new 50 mL conical tube, and 1.5 mL of glycerol was added.The LwCas13a protein was stored at −20℃.
cDNA synthesis and design of crRNA, forward/reverse primer, direct repeat sequence, and RNA reporter
The cDNA was synthesized by RT-PCR targeting the 5' UTR region of BVDV-1b, resulting in a 101 bp product (Fig. 1B). Furthermore, Fig. 2 shows the workflow for detecting the cDNA using the CRISPR-Cas13 system. The cDNA must be transcribed to bind with the crRNA. As shown in Fig. 3A, the crRNAs did not overlap with the primers. The crRNAs were designed similarly.
The sequences of the primers and crRNA were designed relative to the target cDNA and transcribed RNA. A T7 promoter was added to the 5′ end of the primer, and the crRNA sequence was the reverse complement of the target site in the transcribed RNA. The DR of the LwCas13a crRNA was located at the 5' end of the spacer sequence, forming a complex with the Cas13 enzyme (Fig. 3B).
The HybriDetect lateral flow strip (Milenia Biotec) used for the detection assays consisted of a biotin ligand on the control line and an anti-rabbit antibody on the test line.GNPs were coated on the sample spot of the strip, which produced a color.The GNP can bind to fluorescein amidite (FAM) or the Anti-rabbit antibodies on the test line.As a colorimetric-based lateral-flow detection assay (LFDA) probe, an RNA reporter was labeled with FAM at the 5′ end of the poly U sequence and Biotin at the 3′ end (Fig. 3C).Thus, the biotin of the RNA reporter bound to the biotin ligand on the control line of the strip, and the FAM of the RNA reporter bound to the GNP (Fig. 4A).
Expression and Purification of LwCas13a Protein
The pC013-Twinstrep-SUMO-huLwCas13a plasmid (#90097, Addgene) was digested into fragments using Xho I. Rosetta TM (DE3) was transformed with the plasmid, and single colonies were obtained for the analysis of LwCas13a protein expression.The purified LwCas13a protein showed a single band on SDS-PAGE with a molecular weight of 138.5 kDa (Fig. 4B), indicating successful purification, and was expressed primarily in the supernatant [9].
Colorimetric-based lateral-flow detection assay
To confirm the collateral cleavage activity of Cas13, we performed a colorimetric LFDA.The results of the colorimetric-based LFDA using the CRISPR-Cas13 system for detecting BVDV-1b are shown in Figs.4C and 4D.
First, we tested the RNA reporter at a concentration of 0 µM, and only a band appeared on the test (T) line (Fig. 4C).The GNP did not bind to the control (C) line without the RNA reporter.The Biotin-FAM RNA reporter specifically binds to GNP.To avoid false positive and high dose hook effect (a state of antigen excess relative to the antibody proves, resulting in falsely lowered values), we tested the RNA reporter at concentrations of 2 µM, 1 µM, 0.2 µM, 0.15 µM, 0.1 µM, and 0.05 µM using a LFDA with a paper strip.All lateral flows with paper strips contained test (T) and control (C) bands.We found that a concentration of 0.2 µM resulted in a true-negative readout.
Secondly, we verified that all four crRNAs could effectively detect BVDV-1b.As LwCas13a showed RNase collateral activity, we designed a Cas13a-responsive RNA reporter that remained susceptible to ssRNA-mediated cleavage (Fig. 3C).The RNA reporter was labeled with biotin and FAM, allowing it to be captured on a lateral flow strip with a biotin ligand and detected using anti-FAM antibodies conjugated to gold nanoparticles (biotin-ligand-FAM-GNP).In the truenegative control (no crRNA), all the intact biotin-FAM-GNPs bound to the biotin ligand were captured in the control (C) line.In a positive sample, the reporter is cut, releasing the FAMcontaining fragment to be captured by a second line of antibodies resulting in a "test band."Specific detection of the RNA target cleaves the biotin-FAM reporter, allowing the production of two colored bands in the test (T) line (where anti-rabbit IgG binds to excess anti-FAM-GNPs).We tested the detection of BVDV-1b using a colorimetric LFDA with the CRISPR-Cas13 system (Fig. 4D).True-positive samples produced two colored bands in the control (C) and test (T) bands, which were distinguishable from the true-negative controls (main band at C). Analysis of BVDV-
DISCUSSION
BVDV-1b is an economically significant viral disease in the national cattle industry.BVDV has spread quickly throughout farms, making prevention and control of the virus challenging [24,25].Therefore, early POC testing is essential to prevent and control the spread of BVDV-1b.The nucleic acid detection method has the advantage of high sensitivity and is more sensitive and specific than the BVDV-1b antibody detection method.As a result, RT-PCR of viral RNA has been widely used in various countries to detect BVDV-1b.RT-PCR is regarded as the essential method for the diagnosis of BVDV-1b and has become the gold standard for diagnosis in the Republic of Korea.Therefore, Diagnosing BVDV-1b using RT-PCR has a limited ability to prevent and control the spread of BVDV-1b throughout the farm, it requires expensive equipment, such as a thermal cycler, and skilled personnel to conduct the experiments.POC testing has become an excellent supplement to the gold standard for diagnosis using RT-PCR in standard laboratories.Recently, a nucleic acid detection method based on CRISPR-Cas13a was developed as a new system for the early diagnosis of BVDV [9].According to several studies, nucleic acid detection based on the CRISPR-Cas13 system is a novel strategy for developing detection of RNA viruses for POC testing [14][15][16]26].Patchsung et al. [26] validated the SHERLOCK method on 154 clinical COVID-19 samples and found it to be 100% specific, 96% sensitive with a fluorescence readout, and 88% sensitive with a lateral-flow readout.This method was able to detect SARS-CoV-2 in RNA extracts from nasopharyngeal and throat swab samples, including sputum samples, without any cross-reactivity to other common human coronaviruses, and was able to detect the virus in asymptomatic cases [26].Myhrvold et al. [16] have validated the ability of SHERLOCK to detect Dengue virus (DENV) and Zika virus (ZIKV) infections.All RT-PCR-positive ZIKV and DENV RNA samples were confirmed to be positive for ZIKV and DENV after 1hour of detection [16].
BVDV strain was classified based on a variety of genetic variance in the 5′ UTR.Among the BVDV strains, BVDV-1b occurs frequently in the Republic of Korea.This information was obtained from the Animal and Plant Quarantine Agency.We found that all crRNAs detected were BVDV-1b.Hence, the nucleic acid detection method using the CRISPR-Cas13 system could be useful for POC testing of BVDV-1b as a novel strategy in the Republic of Korea.
To meet the World Health Organization's "ASSURED" criteria for diagnostic tests -affordable, sensitive, specific, user-friendly, rapid, equipment-free, and deliverable, several challenges must be addressed.Ideal POC testing products must be characterized by miniaturization, automation, visualization of results, and rapid, high-precision, and high-throughput detection.Several problems are associated with POC testing using the CRISPR-Cas13 system.First, the SHERLOCK reagents, which are detection reagents from BVDV-1b, must be freeze-dried not only for cold chain transportation but also for long-term storage or easy reconstitution on paper strips for field applications.Another problem that combines LwCas13a collateral cleavage activity with lateral flow readings is disruption of the FAM-biotin reporter gene.There is increasing evidence that combining Csm6 with LwCas13a detection can improve the test stability and reduce the possibility of false-positive readings [16].
As shown in Figs.4C and 4D, a faint line was observed on the lateral flow paper strip.Kim et al. [27] observed incomplete removal of the test line from the colorimetric assay results of the negative control, which could lead to false-positive results.When using the traditional colloidal gold test strip for immunological testing, false-positive results are commonly observed, and these results may have serious consequences that affect clinical diagnosis and subsequent control measures [27].To reduce the false-positive rate of the ERASE assay, Li et al. [28] adjusted its interpretation mode and considered the disappearance of the T-band as the positive threshold.Casati et al. [29] empirically determined that positive samples exhibit band intensity ratios greater than 0.2 compared to negative samples.
CRISPR-based nucleic acid detection, multi-silicon microfluidic chips, and multi-isothermal PCR-based molecular diagnostic methods are being integrated into lab-on-a-chip platforms that utilize microfluidic and biosensor technologies and are emerging as ideal platforms for POC diagnostics.Separate nucleic acid amplification steps increase the assay time and complexity and introduce a cross-contamination risk during sample transfer to the CRISPR reaction.Therefore, integrating CRISPR assay procedures into user-friendly devices, such as microfluidic chips or lateral flow assays, allows for the provision of qualitative and quantitative readouts [30].
In this study, cDNA was synthesized using primers designed for BVDV-1b obtained from the Korea Veterinary Culture Collection (KVCC).The crRNAs were designed to have high specificity for the target pathogen.The Cas13 enzyme was expressed and purified.For detection, the BVDV-1b target cDNA, crRNA, Cas13 enzyme, and RNA reporter were subjected to collateral cleavage reaction at 37℃.Lateral flow cytometry analysis showed that crRNAs 1, 2, 3, and 4 were positive for BVDV-1b.If CRISPR-Cas13-based detection is conducted, BVDV-1b can be rapidly and accurately diagnosed, even in samples collected under harsh conditions with a high risk of contamination.We propose that our results could serve as a new POC test at the DNA level for detecting nucleic acid of the BVDV-1b, which frequently occurs in the Republic of Korea.In the next study, we will conduct experiments using clinical samples.
Fig. 2 .
Fig. 2. Detection of BVDV-1b RNA.Cas13 complex and collateral activity experimental workflow.A BVDV-1b RNA region of interest is amplified to DNA by RT-PCR, then converted to RNA by T7 transcription.Cognate binding of Cas13a-crRNA complex to amplified RNA targets triggers collateral activity of Cas13a, which cleaves RNA reporters.Cleaved RNA reporters can be captured on a colorimetric lateral-flow strip.Predicted colorimetric outcomes for negative and positive samples.BVD, bovine viral diarrhea virus; BVDV, BVD virus.
Fig. 3 .
Fig. 3. Specificity for BVDV-1b of the CRISPR-Cas13 system.(A) Alignment of primer and crRNA designed for BVDV-1b.The reference sequence used was no.KC963967.1 in GenBank [20].(B) Schematic of Cas13 enzyme activity with crRNA1.c, Schematic of the designed RNA reporter.FAM and Biotin are labeled at both ends of the ssRNA.BVDV, bovine viral diarrhea virus; CRISPR, clustered regularly interspaced short palindromic repeats.
Fig. 4 .
Fig. 4. Lateral flow detection assay for point-of-care test.(A) Lateral-flow strip result schematic diagram.In the case of a negative result, one band appears, in the case of a positive result, two bands appear.(B) Coomassie Blue-stained SDS-PAGE gel of LwCas13a protein.The progress of protein purification is shown, and we confirmed the presence of LwCas13a with a molecular weight of 138.5 kDa.M, Protein marker; 1, Cell lysate; 2, Supernatant of centrifuged cell lysate; 3, Pellet obtained after centrifugation of cell lysate; 4, Flow-through following Strep-Tactin binding; 5, Strep-Tactin resin before SUMO cleavage; 6, Eluted fraction post SUMO cleavage; 7 and 8, Washing column.(C) Lateral-flow strip results according to RNA reporter concentration.(D) Colorimetric-based lateral-flow detection assay results for BVDV-1b.SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis; SUMO, small ubiquitin-like modifier; BVDV, bovine viral diarrhea virus; T, test-line; C, control-line.
Histomorphometry of the Testes and Epididymis in the Domesticated Adult African Great Cane Rat ( Thryonomys swinderianus )
Histomorphometry of the testes and epididymis were carried out on the domesticated adult African great cane rat (Thryonomys swinderianus), also known as the grasscutter. The average weight and age of the cane rats used in the study were 1.93 ± 0.42 kg and 18.80 ± 1.39 months respectively. The mean relative volume of the germinal epithelium, interstitium and lumen of the seminiferous tubules of the cane rats were 68.54 ± 1.63%, 8.86 ± 0.85% and 21.40 ± 1.12% respectively. The mean diameter of the seminiferous tubules of the cane rats used for this study was 183.0 ± 11.06 μm. The ductal diameter of the caput, corpus and cauda epididymis were 207.4 ± 7.41 μm, 237.8 ± 10.15 μm and 274.2 ± 9.00 μm respectively, being statistically different (p< 0.05). The epididymal luminal diameters were 95.8 ± 11.52 μm, 126.8 ± 8.35 μm and 221.0 ± 4.05 μm, respectively for the caput, corpus and cauda epididymis. The caput, corpus and cauda epididymis had epithelial heights of 63.6 ± 2.23 μm, 59.20 ± 3.38 μm and 28.60 ± 9.23 μm respectively. There was a high negative correlation (-0.7958) between epithelial height and lumen diameter meaning that with a decrease in the height of the epithelium, the lumen increased significantly. This research work provides base-line data on the histomorphometry of the testes and epididymis of the African great cane rat (Thryonomys swinderianus).
INTRODUCTION
It is very evident in Nigeria today that the average citizen does not meet the protein requirements for humans.This is seen from data obtained for alternative source of animal protein to augment the shortage currently existing in conventional livestock (Chupin, 1992).Wildlife domestication has been recognized as a possible way of achieving this objective (Ajayi, 1971).The grasscutter (Thryonomys swinderianus), also known as the African great cane rat, is a wild hystricomorphic rodent widely distributed in the African sub-region and exploited in most areas as a source of animal protein (NRC, 1991).Among the wild rodents found in the African subregion, the grasscutter or cane rat is the most preferred (Asibey & Eyeson, 1975;Clottey, 1981).Being the most preferred bush meat in West Africa, including Nigeria, Togo, Benin, Ghana and Cote' d'Voire, it contributes to both local and export earnings of most West African countries and is therefore hunted aggressively (Baptist & Mensah, 1986;Ntiamoa, 1988;Asibey & Addo, 2000).
Reproductive organs are not unconditionally necessary for the life of the individual, but they play an essential role in reproduction and the continuation of the species. Great attention is given to their morphology in relation to practice as well as to theoretical science (Abreu & David-Ferreira, 1982; Kolodzieyski & Danko, 1995; Akinloye et al., 2002). Reproductive organs are among the most dynamic organs in the animal as well as in the human body. Great attention to the study of the reproductive organs has always been reported, particularly regarding their morphological structure and physiological functions in many species (Pucek et al., 1993; Massányi et al., 2003). However, there are still many gaps, mostly in wild animals and especially in the area of morphometric studies.
The target of this study was to describe microscopic structure and to give exact morphometric values of the testes and epididymis of the adult domesticated grasscutter (Thryonomys swinderianus) with a view to providing baseline data on the subject area, which could facilitate an improved breeding of the animal in an effort to increase the sources of animal protein for consumption especially in rural communities.
MATERIAL AND METHOD
Experimental Animals.Twenty domesticated adult male cane rats were used for the study.They were acquired from a commercial farm in Igbesa, Ogun State, Nigeria.Records on the age and feeding patterns of the animals were also obtained from the farm.The cane rats were kept at the Animal House, Faculty of Veterinary Medicine, University of Ibadan for 72 hours.They were kept on a daily ration of Guinea corn offal of about 0.5 kg per body weight supplemented with raw cassava (Manihot species).
All the cane rats were weighed using a Microvar® weighing balance before being stunned and then slaughtered by cervical decapitation.Following this, each rat was placed on a dissection board on dorsal recumbency while a shallow medioventral incision was made from the linea alba to a point cranial to the anus in order to expose the abdominal and pelvic cavities.The testes and epididymides were carefully dissected out, with the latter separated into the caput, corpus and cauda regions on the basis of morphology.The weights of the testes and epididymides were determined using the Digital Microvar® weighing balance.
Histological procedures.The samples from the testes and epididymis were fixed in Bouin's fluid and embedded in paraffin blocks.Sections of 10 µm thick were stained with Haematoxylin and Eosin (Akinloye et al.).The slides of testes and epididymis, were studied under the light microscope.
Histomorphometry.The slides were examined under the microscope and the following measurements were taken: the relative volume of the germinal epithelium, interstitium and lumen of the seminiferous tubules; the seminiferous tubular diameter, epididymal tubular diameter, epididymal luminal diameter and epididymal epithelial height.For each parameter, ten measurements were made per section using a calibrated eye-piece micrometer (Graticules Ltd.Toubridge Kent).
Statistical Analysis.All data obtained were expressed as means with the standard errors.Analysis of variance was performed using the One-way ANOVA while Duncan multiple range tests was used to compare means found to be statistically significant (p< 0.05) using the GraphPad Prism version 4.00 for Windows, GraphPad Software (GraphPad Prism, 2003).
RESULTS AND DISCUSSION
The average weight of the cane rats used for the study was 1.93 ± 0.42 kg with an average age of 18.80 ± 1.39 months.The testes of the cane rats were covered with stroma (tunica albuginea), consisting of collagenous tissue.Grossly, radially formed septa divided the testes into lobes (lobuli testes).The average size of the testes of the cane rats was 18.75 × 11.33 mm.There was a strong positive correlation (r = 0.8214) between the age of the rats and the weight of the testes and epididymis.However, there were a few exceptions where younger cane rats presented with bigger testes and epididymis than those from relatively older animals.The average percentage body weights for the testes and epididymis were 0.12% and 0.03% respectively.Microscopically, the testes of the rats showed seminiferous tubules having well formed germinal epithelium, which contained all cellular stages of spermatogenesis.The space between the tubules was filled with interstitial tissue, where various blood vessels and capillaries were observed.
The mean relative volume of the germinal epithelium, interstitium and lumen of the seminiferous tubules of the cane rats were 68.54 ± 1.63%, 8.86 ± 0.85% and 21.40 ± 1.12% respectively (Table I).These values were similar to previous reports.In the rabbit, germinal epithelium constituted 77.6%, interstitium 12.3% and lumen 10.0% of the testes while the value in the fallow-deer were 76.2%, 12.4% and 11.5% respectively for the germinal epithelium, interstitium and lumen (Mori & Christenson, 1980;Massányi et al., 1999) In the testes of the fox, the germinal epithelium forms 52.7%, interstitium 11.3% and lumen 36.0%while in the Sprague Dawley rat and ram, the values for the germinal epithelium were 82.4% and 70.5% respectively (Mori & Christenson).The mean diameter of the seminiferous tubules of the cane rats used for this study was 183.0 ± 11.06 µm (Table I).This value is higher than those of the rabbit and the fallowdear reported as 118.7 µm and 143.1 µm respectively (Mori & Christenson;Massányi et al., 1999).Nevertheless, it is less than the diameter of seminiferous tubules in the African giant rat (Cricetomys gambianus, Waterhouse) reported as 212.85 ± 8.32 µm (Oke, 1982).Also, the value of the diameter of seminiferous tubules obtained in the cane rats used in this study is lower than those of the male fox and Wister rat being 281.4 ± 30.5 µm and 227.91 ± 12.7 µm respectively.Microscopic analysis of the epididymis of the animals in this study showed that it was lined by pseudostratified columnar epithelium with stereocilia.This is in conformity with earlier reports on the histology of the epididymis of mammals (Oke).Like that of the seminiferous tubules, the epididymal ducts had spaces between the tubules and were filled with interstitial tissue.Mature spermatozoa were found more in the cauda epididymis than in the corpus epididymis but rarely in the caput epididymis.This is in conformity with earlier reports that sperm concentration was highest in the cauda epididymis of mammals (Dyce et al., 2002) The mean values of ductal diameter, lumen diameter and epithelial height of the different segments of the epididymis of the cane rats used in this study are given in Table 2.The ductal diameter of the caput, corpus and cauda epididymis were 207.4 ± 7.41 µm, 237.8 ± 10.15 µm and 274.2 ± 9.00 µm respectively, being statistically different (p< 0.05).These values are similar to earlier reports.The ductal diameter of the cauda epididymis of the African giant rat was reported to vary between 216.45 µm and 242.82 µm while that of the male fox was 352.3 ± 46.10 µm (Blom, 1968).There was a relatively high positive correlation (0.6835) between the ductal diameters of the caput and cauda epididymis meaning that an increase in one of these parameters would result in an increase in the other.The pattern of the dimensions of the luminal diameter across the three segments of the epididymis was not different from that of the ductal diameter being 95.8 ± 11.52 µm, 126.8 ± 8.35 µm and 221.0 ± 4.05 µm respectively for the caput, corpus and cauda epididymis.A significant difference (p< 0.05) was observed with these values.However, the pattern of the dimensions of the epithelial height across the three segments of the epididymis was different from those of the ductal and lumen diameter.The caput, corpus and cauda epididymis had epithelial heights of 63.6 ± 2.23 µm, 59.20 ± 3.38 µm and 28.60 ± 9.23 µm respectively.
There was a low positive correlation (0.0420) between the diameter of the tubule of the epididymis and its epithelial height; this means that with an increase in epithelial height the increase in tubular diameter is low.There was a high negative correlation (-0.7958) between epithelial height and lumen diameter meaning that with a decrease in the height of the epithelium, the lumen increased significantly.This relationship can be attributed to function rather than structure as the cauda epididymis has the widest lumen since it stores spermatozoa.This explains why more spermatozoa mass were found in the cauda then in the corpus epididymis in the cane rats used for this study.
However, it can be deduced from the findings of this work that the diameter of the seminiferous tubules, the relative volumes of the lumen and germinal epithelium of the testes, as well as the epididymal ductal diameter, lumen diameter and epithelial height, are very similar in most mammals. The outcome of this research, therefore, provides base-line data on the histomorphometry of the testes and epididymis of the African great cane rat (Thryonomys swinderianus).
Role of Bio-Based and Fossil-Based Reactive Diluents in Epoxy Coatings with Amine and Phenalkamine Crosslinker
The properties of epoxy can be adapted depending on the selection of bio-based diluents and crosslinkers to balance the appropriate viscosity for processing and the resulting mechanical properties for coating applications. This work presents a comprehensive study on the structure–property relationships for epoxy coatings with various diluents of mono-, di-, and bio-based trifunctional glycidyl ethers or bio-based epoxidized soybean oil added in appropriate concentration ranges, in combination with a traditional fossil-based amine or bio-based phenalkamine crosslinker. The viscosity of epoxy resins was already reduced for diluents with simple linear molecular configurations at low concentrations, while higher concentrations of more complex multifunctional diluents were needed for a similar viscosity reduction. The curing kinetics were evaluated through the fitting of data from differential scanning calorimetry to an Arrhenius equation, yielding the lowest activation energies for difunctional diluents in parallel with a balance between viscosity and reactivity. While the variations in curing kinetics with a change in diluent were minor, the phenalkamine crosslinkers resulted in a stronger decrease in activation energy. For cured epoxy resins, the glass transition temperature was determined as an intrinsic parameter that was further related to the mechanical coating performance. Considerable effects of the diluents on coating properties were investigated, mostly showing a reduction in abrasive wear for trifunctional diluents in parallel with the variations in hardness and ductility. The high hydrophobicity for coatings with diluents remained after wear and provided good protection. In conclusion, the coating performance could be related to the intrinsic mechanical properties independently of the fossil- or bio-based origin of diluents and crosslinkers, while additional lubricating properties are presented for vegetable oil diluents.
Introduction
Epoxy resins are frequently used in various sectors of industrial construction and the manufacturing of adhesives, coatings and paints, composites, primers and sealants, flooring, and tooling [1]. Due to its exceptional properties, bio-based epoxy materials have recently been introduced in advanced applications for aeronautics [2], aerospace engineering [3], electronic materials, and biomedical devices [4]. The long-term mechanical performance, chemical stability, anticorrosion properties [5], and thermal resistance [6] of bio-based epoxy are primary selection criteria, which can be tuned depending on the selection of the composition of the respective components, including epoxy resin or prepolymers [7], hardeners or crosslinkers [8], and diluents or fillers [9]. With an increasing need for sustainable sourcing of coating ingredients, the exploration of renewable polymers is urgent. While one-by-one replacement of fossil-based polymers often does not yield the best results, the coating system should be better redesigned to fully exploit the inherent features of its bio-based ingredients. The effects of bio-based phenalkamine versus traditional amine crosslinkers for epoxy coatings were evaluated in our previous work [10], while the incorporation of bio-based diluents may further increase the bio-based content of epoxy coatings, as discussed in this study.
The high reactivity of the epoxide ring in combination with crosslinking agents results in the formation of a three-dimensional polymer network with properties that depend on the molecular structure of the reactants and/or co-reactants. The structure provides thermoset properties, where the mechanical properties depend on the crosslinking density and the flexibility of the polymer segments between the anchoring points. The viscoelastic properties can be controlled by a combination of curing conditions (temperature, time) [11] and the composition of a partially bio-based epoxy resin [12] or bio-based epoxy prepolymers [7], offering a combined toolbox to create versatile end-user properties. In particular, the toughness of partially bio-based epoxy resins was improved after the optimization of the curing and post-curing conditions [13]. At the same time, the fluent processing of bio-based epoxy in coatings or composites needs lower viscosity to improve flow properties [14]. The addition of proper diluents aids in controlling the rheology and reducing the viscosity [15,16], improves the wettability of pigments and fillers, increases the pot-life and gelation time, and may limit the effects of curing shrinkage [17]. In particular, the rheological profile of a three-component system with epoxy, diluent, and amine crosslinker was detailed [18], concluding that the solid and pseudoplastic behavior introduced through the diamine is diminished in the presence of a diluent and the system tends more toward displaying Newtonian behavior. As such, the low molecular weight diluents take up the role of solvents and allow for smaller concentrations of volatile organic compounds [19]. Traditional petroleum-based diluents for epoxy should be used with care since allergic reactions toward phenyl glycidyl ether, 1,4-butanediol diglycidyl ether, and p-tert-butylphenyl glycidyl ether were identified [20]. Therefore, alternative bio-based diluents for epoxy resin were developed from furan [21], vegetable oils [22], cardanol [23], or glycerol [24]. The latter mainly allows for a higher functionality of reactive groups [25], resulting in more rigid mechanical properties.
The reactive diluents participate in the crosslinking process of epoxy resins and are covalently bonded into the polymer network; hence, they do not migrate, in contrast with traditional solvents or non-reactive diluents. The reactive diluents contain reactive epoxy groups organized in aliphatic or aromatic glycidyl ethers of alcohols and alkylphenols, while the non-reactive diluents typically include aromatic hydrocarbons, such as toluene, xylene, phthalates, styrene, or phenolic compounds. Although the non-reactive diluents are expected to not decrease the reactivity of the epoxy and can be added in relatively large concentrations, it was observed that the mechanical properties of cured epoxy systems were reduced after adding toluene [26]. Alternatively, the reactive diluents are more efficiently used in relatively small amounts and their reactivity is determined by the number of reactive sites (mono-, di-, or trifunctional). However, the influence of reactive diluents highly depends on the type of resin or diluent. Their effect on the mechanical characteristics is not uniform [27], with a general improvement in modulus and strength and attendant compressive properties. The monofunctional glycidyl ether causes a significant drop in thermal stability and glass transition temperatures but with little effect on the glassy modulus [28]. The use of difunctional reactive diluents was reported as the most advantageous since they enhanced the curing process to achieve a high degree of crosslinking [29]. While comparing the performance of diglycidyl ether with other toughening agents, it was concluded that tensile strength and strain at break values are higher for the formulations with diluent compared with resins with a toughening agent [30]. In general, the properties of epoxy/diluent systems can be examined in relation to the crosslinking density and chain flexibility, which both increase with the amount of diluent, while a drop in elasticity was associated with secondary relaxations [31]. Alternatively, the role of natural oils (e.g., epoxidized soybean oil and rapeseed oil) as reactive diluents indicated that viscosity reduction was comparable with commercial grade active diluents, but mechanical strength and thermal stability reduced due to the plasticizing effects of the oil [32]. The reactive bio-based diluents for epoxy are traditionally synthesized from soybean oil because of its high reactivity, including siloxane, allyl ether, and fluorine functionalization [33]. The role of epoxidized soybean oil with amine hardener on the curing kinetics of epoxy systems was studied and indicated similar behavior compared with conventional resins [34]. Also, epoxidized cardanol oil is a favored diluent for lowering the curing temperature without affecting the final degree of curing [35]. The oil diluents were typically applied in higher concentrations ranging from 20 wt.-% [36] up to 60 wt.-% [37], where flexible epoxy materials with fast elastic recovery were obtained with low water absorption and high chemical resistance. The toughening effect of epoxidized vegetable oils in combination with a bio-based crosslinker was demonstrated to overcome the brittleness of bio-based epoxies [38]. However, the influences of diluents on mechanical properties may be contradictory when comparing different studies, either improving mechanical properties (strength, deformation) [39] or decreasing the modulus and ultimate strength while improving ductility [40]. The latter definitely depends on the combination of specific diluents with given crosslinkers, concentrations, and compatibility with the base resin.
In this work, epoxy coatings were formulated with a range of diluents, including those with various origins, functionalities, and suitable concentrations, to evaluate their effects on the processing and mechanical performance of coatings, while their compatibility with a fossil amine (FA) and a bio-based phenalkamine (PK) crosslinker was evaluated. Complementary to the existing knowledge summarized above, the effects of diluent and crosslinker combinations on the properties of epoxy coatings are not uniquely known and depend on a balance between the reduction in viscosity, softening, and lubrication. Therefore, a comprehensive study is presented in which the coating performance was related to the intrinsic mechanical properties for both fossil- and bio-based origins of diluents and crosslinkers. The present results indicate good opportunities for a transition from fossil-based to bio-based epoxy coating formulations with controllable performance.
Materials
The bisphenol A diglycidyl ether (DGEBA) epoxy resin was purchased from Resion Resin Technology (Moordrecht, The Netherlands) under the commercial name EP101. The reactive diluents with different functionalities were purchased from Merck (Darmstadt, Germany), including a grade of fossil-based monofunctional glycidyl ether (MGE), fossil-based difunctional glycidyl ether (DGE), and a bio-based trifunctional glycidyl ether (TGE) that was synthesized from glycerol. As an alternative reactive diluent, the epoxidized soybean oil (ESBO) was obtained from Merck (Darmstadt, Germany), with a molecular weight of 950 g/mol and an epoxide equivalent of 230 g/mol. The latter indicates an average of 4.2 oxirane groups per molecule, therefore representing a multifunctional diluent with higher functionality than the previous diluents. The chemical structures and epoxy equivalent weight (EEW) for the epoxy resin and diluents are detailed in Table 1.
Two types of crosslinkers with optimized compositions for epoxy coatings were used, with either fossil-based or bio-based origin. The fossil amine (FA) was a commercially available fast-crosslinking cycloaliphatic amine containing a mixture of 3-aminomethyl-3,5,5-trimethylcyclohexylamine (30 to 50 wt.-%) and m-phenylene bis(methylamine) (10 to 30 wt.-%), with the trade name EP113 (Resion Resin Technology, Moordrecht, The Netherlands). The bio-based phenalkamine (PK) was obtained after a reaction between cardanol and 1,2-ethylenediamine and is commercially available under the trade name H811 (Anacarda, Wigan, UK). The chemical structures and amine hydrogen equivalent weight (AHEW) for the crosslinkers are detailed in Table 2. The main reactions for the crosslinking of an epoxy resin with a primary amine (e.g., FA) or secondary amine (e.g., PK) are shown in Figure 1.
Coating Formulation and Application
The coating formulations were prepared by mixing the epoxy resin with different concentrations of MGE, DGE, TGE (1, 2, 3, 4, 5, 7.5 wt.-%), or ESBO diluent (10, 20, 30, 40 wt.-%), as well as a stoichiometric ratio of FA or PK crosslinker. The coating compositions were made by first stirring the epoxy resin with the diluent and subsequently adding the crosslinker after 10 min stirring time, followed by a 5 min stirring time in the presence of the crosslinker. A combination of diluents was applied for the resin mixtures of DGEBA + 30 wt.-% ESBO with TGE diluent (1, 2, 3, 4, 5, 7.5 wt.-%). The EEWmix of the epoxy resin mixtures (i.e., DGEBA + diluent) was calculated according to Formula (1) and the values are summarized in Table 3, giving an overview of the epoxy compositions included in the present testing series. The weight fractions of the DGEBA resin (wDGEBA) and diluent (wdiluent) were determined on an analytical balance with 0.001 g accuracy. The weight of the added crosslinker (wPK/FA) per 10 g epoxy mixture was calculated from a 1:1 stoichiometric ratio between functional epoxy groups and amine crosslinker resulting from the respective EEWmix and AHEW values, according to Formula (2).
The coatings were deposited via blade coating onto softwood beech substrates (10 cm × 10 cm × 5 cm), which were primarily planed and dried in a hot air oven overnight at 60 °C. The use of a constant blade speed of 5 mm/s resulted in a wet thickness of 70 µm and was verified with a coating thickness gauge indicating a 68 ± 2 µm dry thickness. The coatings were fully cured under controlled laboratory conditions of 25 °C and 60% relative humidity for one month before further testing. No thermal curing was applied, in agreement with practical on-site application in the wood-coating industry. Similarly, films of the same thickness and compositions (further used for mechanical testing) were obtained after deposition and peeling off the coating from a non-sticky aluminum support.
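Since the display versions of Formulas (1) and (2) did not survive text extraction, the short Python sketch below illustrates the calculation they describe, assuming the standard equivalent-weight mixing rule and the 1:1 epoxy/amine stoichiometry stated above; the numeric inputs are placeholders, not values from Tables 1-3.

```python
def eew_mix(w_dgeba, w_diluent, eew_dgeba, eew_diluent):
    """Formula (1), assumed form: EEW of a DGEBA/diluent mixture.

    Epoxy equivalents of the components are additive, so the mixture
    EEW is the total weight divided by the total epoxy equivalents.
    """
    equivalents = w_dgeba / eew_dgeba + w_diluent / eew_diluent
    return (w_dgeba + w_diluent) / equivalents


def crosslinker_weight(w_epoxy_mix, eew_mix_value, ahew):
    """Formula (2), assumed form: crosslinker weight for a 1:1
    stoichiometric ratio between epoxy groups and amine hydrogens."""
    return w_epoxy_mix * ahew / eew_mix_value


# Illustrative numbers only; EEW and AHEW must be taken from Tables 1 and 2.
mix = eew_mix(w_dgeba=9.25, w_diluent=0.75, eew_dgeba=187.0, eew_diluent=150.0)
print(f"EEWmix = {mix:.1f} g/eq")
print(f"crosslinker per 10 g mix = {crosslinker_weight(10.0, mix, 95.0):.2f} g")
```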
Characterization Methods
The viscosity measurements were performed according to ASTM D2196 using a DV-III Ultra viscosimeter with spindle SC4-27RD (Brookfield Engineering, Hadamar-Steinbach, Germany) under a constant rotational shear rate of 100 rpm over a time of 1 min at a controlled temperature of 25 °C. The DSC measurements were performed on a DSC 3+ (Mettler Toledo, Columbus, OH, USA) on a liquid sample to follow the curing reaction as a function of temperature, or on a solid cured sample to determine the glass transition temperature Tg. For the liquid coatings, a sample size of 4 mg was heated in hermetically sealed aluminum pans between 10 and 210 °C at 5 °C/min under a nitrogen atmosphere. For the solid samples, a sample size of 7 mg was heated during two heating cycles between 20 and 110 °C at 10 °C/min under a nitrogen flow. The thermal characteristics were determined from the second heating cycle.
The abrasive wear of coatings was evaluated via Taber testing according to ASTM D4060-10 using a circular rotary platform (Model 5130, Taber Industries, New York, NY, USA) with calibrated CS-10 abrasive wheels loaded under a 250 g or 500 g load and 72 rpm rotational speed. The tests ran over 1000 cycles and the weight loss was determined on an analytical balance with an accuracy of 0.001 g (Sartorius, Göttingen, Germany). The microhardness was measured according to ASTM D2240 [41] with a handheld Shore D micro-indenter with a standardized hardened steel tip of 30° and 0.1 mm tip radius. The gloss values were recorded according to ISO 2813 [42] with a micro-triglossmeter (BYK-Gardner Instruments, Geretsried, Germany) under a 60° incident light angle. The scratch resistance was evaluated according to ISO 4586-2 [43] with a sclerometer type 3092 (Elcometer, Aalen, Germany) by inserting a tungsten carbide tip of 0.75 mm radius under a load of 10 N or 20 N depending on the spring constant. The scratches were optically evaluated under a stereomicroscope MS12 (Leica, Wetzlar, Germany) at a magnification of 50×. The surface topography of the worn coatings was visualized with a VK-X3000 laser interferometer (Keyence, Mechelen, Belgium) at magnifications of 20× and 50×. The static water contact angles were measured according to ISO 19403-2 [44] after the deposition of 3 µL droplets of de-ionized water with an OCA 50 contact angle device (DataPhysics Instruments GmbH, Filderstadt, Germany) and fitting the droplet geometry using a tangent procedure that involved averaging the left and right contact angles. The water contact angles were determined 10 s after the deposition of the droplet and averaged over 10 measurements per sample with a standard deviation of ±2°.
The tensile testing of films was done on a universal testing machine (ProLine Z005, ZwickRoell, Haan, Germany) and impact testing was done on an Izod impact tester measuring the absorbed energy according to ASTM D256. The mechanical tests were repeated on 10 samples and reported as average values for stress at break σ (MPa), strain at break ε (%), and impact strength (kJ/m²).
Coating Preparation and Curing Process
The addition of diluents in an epoxy formulation is intended to lower the viscosity and enhance the processing during coating application. Therefore, results of the steady-state viscosity values for epoxy resin mixtures with different concentrations of diluents (MGE, DGE, and TGE at 0 to 7.5 wt.-% and ESBO at 10 to 40 wt.-%) before adding crosslinker are presented in Figure 2.
During the measurement, the viscosity stabilized after about 10 s and remained constant during the full recording time of 60 s, as no reaction between the epoxy resin and diluents happened in the absence of the crosslinker, while good pre-mixing of the resin and diluent had been achieved. The native epoxy resin inherently had the highest viscosity of 15,000 mPa·s, in agreement with the supplier's data sheets, which posed problems in further application. The ability to reduce the viscosity in the presence of reactive diluents is demonstrated depending on the functionality and concentration of the diluent, where values can be reduced below 1000 mPa·s for practical coating application. The presence of reactive diluents favorably reduces and stabilizes the viscosity of epoxy resin such that it becomes systematically lower at higher concentrations. The viscosity of epoxy resin with an MGE diluent was the lowest, in agreement with the high flexibility of the linear polymer chain. It is known that monofunctional diluents are most efficient at reducing the viscosity of paints and coatings, as demonstrated for alkyds [45], but they may simultaneously reduce the crosslinking density and decrease the mechanical properties, as demonstrated further on for the present epoxy coatings. In agreement with other studies, an efficient reduction in viscosity of epoxy-reactive diluent mixtures was only observed after the addition of the first 5 wt.-% of glycidyl ether [46]. Therefore, a good balance between the processing and performance of epoxy coatings needs to be further identified. Although the viscosity of epoxy resin with the DGE diluent is slightly higher, it remains in a similar range as with the MGE diluent owing to the linear polymer chains, in combination with the difunctional epoxide groups. The branched polymer structure of TGE evidently increased the viscosity due to its enhanced ability to produce molecular entanglements. For a more complex ESBO diluent, higher concentrations were needed to obtain a significant reduction in viscosity, in agreement with previous studies, where at least 50 wt.-% ESBO was needed to reduce the epoxy to the same viscosity [47]. Indeed, the viscosity of an epoxy resin decreases at higher concentrations of the reactive diluent, but it also strongly depends on the molecular weight of the diluent [48], which is obviously higher for ESBO compared with the glycidyl ethers. The reactivity of multifunctional diluents is more comparable with the base epoxy resin [49], and it is therefore expected that they less drastically affect the crosslinking density of the epoxy compared with MGE. For this reason, ESBO concentrations up to a maximum of 40 wt.-% were included in this investigation. In that case, however, a balance with the reduction in mechanical properties must be found. It can be concluded that the diluents with a simple linear molecular configuration already suitably reduced viscosity at low concentrations, while higher concentrations of more complex multifunctional diluents were efficient for viscosity reduction.
The crosslinking of epoxy resin with different reactive diluents was followed via the exothermal peak during DSC analysis, after adding stoichiometric amounts of FA or PK crosslinker. Although reaction kinetics are traditionally studied using isothermal DSC analysis, the crosslinking of epoxy coatings was explicitly monitored in this study using non-isothermal data, as the optimum temperature ranges for FA and PK crosslinking agents may be different and can be more efficiently detected. A detail of the exothermal peak during heating of liquid coating samples between 20 and 200 °C is illustrated for coating compositions containing epoxy with 7.5 wt.-% MGE, TGE, and DGE (Figure 3a), or variable concentrations of ESBO (Figure 3c).
The crosslinking of epoxy resin with different reactive diluents was followed by the exothermal peak during DSC analysis, after adding stoichiometric amounts of FA or PK crosslinker.Although reaction kinetics are traditionally studied using isothermal DSC analysis, the crosslinking of epoxy coatings was explicitly monitored in this study using non-isothermal data, as the optimum temperature ranges for FA and PK crosslinker agents may be different and can be more efficiently detected.A detail of the exothermal peak during heating of liquid coating samples between 20 and 200 °C is illustrated for coating compositions containing epoxy with 7.5 wt.-% MGE, TGE, and DGE (Figure 3a), or variable concentrations of ESBO (Figure 3c).Depending on the reactive diluent, the crosslinking reaction was postponed for the epoxy coatings with MGE: although possessing the lowest viscosity and high molecular mobility, the low functionality limited the availability of the epoxy rings for ring-opening reactions.In parallel, the crosslinking of epoxy coatings with TGE diluent shifted to lower Polymers 2023, 15, 3856 9 of 21 temperatures due to the higher reactivity of the trifunctional epoxy molecules, but the intensity of the curing reaction was lower in parallel with the higher viscosity and more difficult diffusion processes of reactive moieties.The DSC characteristics for diluents with FA crosslinker were evaluated at 10 • C/min and 20 • C/min curing rates, where the exothermal peak evidently became stronger and shifted toward higher temperatures at the higher heating rates, while the curing properties for different epoxy coating compositions were repeatable.For the ESBO diluent, the crosslinking shifted toward lower temperatures for concentrations up to 30 wt.-%, while the highest concentrations of 40 wt.-% did not yield favorable crosslinking.The latter may be explained through the steric hindrance caused by the long fatty acid molecules.The differences between FA and PK crosslinkers were clear for all epoxy compositions with a shift toward lower temperatures for PK: this is in line with the higher reactivity of PK at low temperatures, where the crosslinking may partially start at 30 • C. The higher reactivity of PK crosslinkers was noticed before [50], which can be attributed to the molecular structure with highly accessible amine groups and high reactivity of the hydroxyl group, enabling the curing in room temperature conditions.The accelerated crosslinking with PK proceeded faster and reached the maximum reactivity at a lower temperature, as is clearly noticed in the thermographs with shoulders at 30 to 50 • C, indicating the start of crosslinking.Meanwhile, the intensity of the exothermal peak for PK curing, corresponding to the total reaction heat (∆H R ), was lower in parallel with a slower crosslinking reaction that allowed for better control of heat dissipation.
The quantitative data were obtained after calculating the conversion degree α as a function of curing temperature (Figure 3b,d) and fitting the S-shaped curves to an Arrhenius equation, which allowed for determining the characteristic heat and kinetic parameters for the curing process. The conversion degree was calculated from the non-isothermal DSC thermographs as the integrated exothermal heat at temperature T (∆HT) relative to the total exothermal heat during the entire curing reaction (∆HR), i.e., α = ∆HT/∆HR, where both values of heat enthalpy were calculated via the integration of the heat flow curve over the appropriate temperature range. The reaction rate dα/dt can be expressed as a generalized Equation (3) [51], assuming nth-order curing kinetics with a specific reaction rate constant k(T) at a given temperature T:

dα/dt = k(T)·(1 − α)^n, (3)

These parameters can be calculated from experimental data according to known procedures in the literature based on the kinetic modeling of DSC data after fitting to a model that assumes an Arrhenius-type expression of k(T) following Equation (4), with gas constant R = 8.314 J/(mol K) [34]:

k(T) = A·exp(−Ea/(R·T)). (4)

The parameters from thermal analysis and kinetic modeling for the curing of a selection of coatings are reported in Table 4, including the total reaction heat (∆HR), peak temperature (Tp), activation energy (Ea), kinetic factor A, and reaction order n. The activation energy Ea represents an input value required as a barrier to initiate the crosslinking. A higher Ea would generally postpone the crosslinking reaction; however, the reaction rate after initiation also strongly depends on the molecular structure of the reactants. For regular epoxy/amine systems, a value for activation energy Ea = 50 to 70 kJ/mol is reported [52], which is in a similar range as that calculated in our study for the FA crosslinkers. As demonstrated in previous studies, a reactive diluent (e.g., diglycidyl aniline) decreases both the activation energy and the cure kinetic parameters [53]. Depending on the use of diluents and/or a crosslinker, the epoxy system with low Ea enables easy crosslinking toward a high degree of conversion. Depending on the selected diluents, the Ea decreased and A increased compared with a pure epoxy coating due to the lower viscosity and reactivity of the diluent. It can be reasoned that crosslinking was easier for the more flexible polymer chains and accessible functional groups with, consequently, a lower Ea, but this had to be balanced against the mobility of reactive groups and the viscosity of the medium. For the DGE diluent, the linear structure and accessibility of terminal epoxy groups favorably increased the reactivity of the system, resulting in a low Ea. For the MGE diluent, the high Ea may relate to the monofunctional epoxy groups and more frequent termination of the crosslinking reaction. It was indeed confirmed for other types of difunctional diluents that the activation energy for the curing process reduced most favorably [54]. For the TGE diluent, the high Ea may have resulted from the formation of a dense three-dimensional molecular network from the beginning of the curing reaction, which may have hindered the reaction kinetics. Other studies showed the fastest curing and coating drying times for epoxy resins with trifunctional diluents [55]. Alternatively, the reaction rate A corresponds to the chance for the collision of reactive groups, leading to a favorable reaction due to the higher functionality of the diluent. Moreover, it was previously demonstrated that the reaction rate also depends on the polarity of the medium, and the crosslinking traditionally follows autocatalytic properties through the presence of -OH groups [56]. However, only minor variations in reaction kinetics for diluents were found and larger differences were observed for the selection of different crosslinkers. The low Ea and A for PK crosslinkers were mainly due to the high reactivity and intrinsic molecular properties of the PK crosslinker versus the FA crosslinker. In particular, the Ea for PK-epoxy with diluents was within the lower range or incidentally slightly below the values reported for traditional epoxy/amine curing. The crosslinking kinetics in the presence of ESBO diluent are more complex and were also documented before [57], where high Ea and low kinetic factors were reported in parallel with the relatively high viscosity and more complex entanglements of the long fatty acid chains hindering the diffusion of reactive moieties. Irrespective of the used FA or PK crosslinkers, the kinetics evidently followed first-order kinetics with n = 0.9 to 1.0, as expected for amine crosslinking.
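As an illustration of this fitting procedure, the sketch below shows one common way to estimate Ea, A, and n from a single non-isothermal DSC trace: integrate the baseline-corrected exothermal heat flow to obtain α = ∆HT/∆HR, then linearize Equations (3) and (4) as ln(dα/dt) = ln A − Ea/(R·T) + n·ln(1 − α) and solve by least squares. This is a minimal Python/NumPy sketch under those assumptions, not the exact routine used here, which follows the literature procedures cited as [34,51].

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)


def fit_nth_order_kinetics(time_s, temp_K, heat_flow_W):
    """Fit dα/dt = A·exp(-Ea/(R·T))·(1-α)^n to non-isothermal DSC data.

    heat_flow_W must be the baseline-corrected exothermal signal, so that
    its time integral gives the running reaction enthalpy ΔH_T.
    Returns (A in 1/s, Ea in J/mol, n).
    """
    dH = np.cumsum(heat_flow_W * np.gradient(time_s))  # running ΔH_T (J)
    alpha = dH / dH[-1]                                # conversion α = ΔH_T/ΔH_R
    dadt = np.gradient(alpha, time_s)                  # reaction rate dα/dt

    # Restrict to the mid-conversion range where the logarithms are defined.
    m = (dadt > 0) & (alpha > 0.05) & (alpha < 0.95)
    X = np.column_stack([np.ones(m.sum()),
                         -1.0 / (R * temp_K[m]),
                         np.log(1.0 - alpha[m])])
    coeffs, *_ = np.linalg.lstsq(X, np.log(dadt[m]), rcond=None)
    lnA, Ea, n = coeffs
    return np.exp(lnA), Ea, n
```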
The Tg values for fully cured coatings were determined from the second DSC heating scan, where some exemplary graphs (Figure 4a) and a summary of the values are presented for FA-epoxy coatings (Figure 4b) and PK-epoxy coatings (Figure 4c). The main variations in Tg depending on the crosslinker and reactivity of the amine groups are expected to depend on the crosslinking density of the epoxy [58]. The Tg relates to the mobility of the molecular segments in the polymer chain and, therefore, depends on both the crosslinking density and chain length in opposite ways: as the crosslinking density increases and the chain length of molecular segments in between the crosslinking points decreases, the polymer becomes more rigid, and Tg increases. Therefore, both the functionality and chain length of diluents influence a shift in Tg. The Tg for epoxy coatings without diluents is highest, as it is generally known that diluents cause a reduction in Tg: owing to the molecular structure of diluents, a more flexible polymer segment is introduced relative to the stiff aromatic structure of the epoxy resin. It was previously demonstrated that Tg decreases at higher diluent concentrations due to the higher chain flexibility [59]. On the other hand, the diluents with higher functionality may increase the crosslinking density and simultaneously reduce the molecular mobility. Therefore, the balance between both physical mechanisms may introduce only very slight variations in Tg. However, there was a clear trend showing that Tg increased for the diluents with higher functionality and tended to stabilize or slightly decrease with higher diluent concentrations. For the vegetable oil diluent, the low Tg values were related to the relatively large molecular segments of fatty acid side chains with high molecular mobility. Alternatively, the PK crosslinker led to a lower Tg for the pure epoxy resin owing to the more flexible side chain of the phenalkamine molecule, while the influence in the presence of diluents was more complex and marginally lower in comparison with the FA crosslinker.
Tribological Coating Performance
The abrasive wear was determined as the weight loss after Taber testing under a low load (250 g) and high load (500 g) and is represented for coatings of FA-epoxy (Figure 5a) and PK-epoxy (Figure 5b). The wear loss was evidently higher under a high load; however, the increase in abrasive wear with applied load may not be linear due to the visco-elastic properties and deformation of the epoxy coating. Different trends between low- and high-loading conditions were observed due to frequent overload conditions in the latter case. Similar trends were observed for various diluents and concentrations in the FA-epoxy and PK-epoxy coatings, with a tendency for lower wear rates and less frequent overload conditions under high loads for the bio-based PK-epoxy relative to the fossil-based FA-epoxy. These trends are further explained in relation to the mechanical properties of the epoxy coatings. Depending on the type and concentration of used diluent, wear rates were lower compared with the pure epoxy coating. The lowest wear rates obviously occurred for vegetable oil diluents due to the lubricating properties of the oil, as mainly illustrated at the highest concentrations of 40 wt.-%, where residual free oil molecules should be present owing to the unsuccessful crosslinking reactions at high concentrations, as demonstrated before. The functionality of the glycidyl ether diluents predominantly influenced the wear properties, where the higher functionalities in a series of MGE < DGE < TGE gradually reduced the wear rates, as expected due to a higher crosslinking density for diluents with high functionality. The compatibility between vegetable oil and TGE diluent for stabilizing wear rates was demonstrated up to limited concentrations of the ether diluent.
The morphology of the wear tracks (after wear at the highest load) was evaluated through laser interferometric microscopy and three-dimensional surface topography (Figure 6), illustrating the effect of various types and concentrations of diluents. The wear tracks are shown only for PK-epoxy coatings, but similar trends and conclusions on the influence of diluents can be drawn for FA-epoxy coatings (here, PK-epoxy coatings are presented as they provided the lowest wear). The surface morphology for worn coatings with vegetable oil diluents showed highly deformed surface structures, as mainly observed at the highest concentrations of 40 wt.-% oil. This was likely due to the presence of free oil molecules that provided lubricating properties, which is in line with the relatively low wear rates and presence of an almost liquid surface film. The coatings with the DGE diluent were more severely worn, with significant wear scars in the bulk of the coating, as represented by a more brittle aspect, in comparison with the epoxy coatings with TGE diluents, which showed more superficial wear at the surface of the coating and only some deeper local grooves. The smooth top surface of coatings with TGE diluent is in line with the low wear rates. The more detailed optical microscopy images of the worn surfaces (Figure 7) support the observations above, with most irregular surfaces for vegetable oil diluents, strongly worn surfaces for DGE diluents, and smooth surfaces for TGE diluents.
Mechanical Coating Performance
The coating microhardness is a primary indicator of mechanical resistance and is related to the resistance against plastic deformation. The influence of different types and concentrations of diluents on the microhardness followed consistent trends after the crosslinking of FA-epoxy (Figure 8a) or PK-epoxy (Figure 8b). The crosslinking with PK obviously provided coatings with a higher hardness relative to the FA crosslinkers for all diluent compositions. The higher hardness for PK-epoxy coatings is in line with previous studies without the use of diluents [10], where it was related to the higher degree of crosslinking for PK-epoxy relative to the FA-epoxy coatings. The high hardness and tensile and flexural strengths of PK-epoxy compared with conventional FA-epoxy were also confirmed in other studies, depending on the selection of the phenalkamine [60], where the crosslinking density of the PK-epoxy coatings was higher compared with the FA-epoxy coatings. The microhardness of coatings clearly depended on the functionality of the diluent and expected crosslinking density, with the relatively high functionality of vegetable oil diluents leading to a high hardness. Alternatively, the increasing functionality for the MGE < DGE < TGE diluents resulted in progressively increasing microhardness for the same concentrations of diluent. The relationships between microhardness and Tg are demonstrated (Figure 9), where both parameters were inherently related through the restricted molecular mobility for coatings with a high Tg and high hardness. Thus, the coating performance was uniquely determined by the intrinsic mechanical parameters.
The scratch resistance under 10 N and 20 N was evaluated using optical microscopy for coatings with different diluents and the PK crosslinker (Figure 10). The scratch resistance of the FA-epoxy coatings (not presented) was worse than for PK-epoxy coatings, showing breakthrough under the 10 N load due to the lower microhardness. Depending on the diluent, the coatings with vegetable oils did not show scratch damage under 10 N, while they had a strongly deformed scratching track at 40 wt.-% vegetable oil, representing ductile deformation. The scratch resistance improved for diluents in the series MGE < DGE < TGE due to the increasing functionality. The coatings with TGE diluents showed the highest scratch resistance without surface damage for concentrations of 2 to 7.5 wt.-%. It is known that resistance against scratch damage depends on the tensile strength and compressive yield stress of the epoxy [61], which directly relates to the crosslinking density. The scratch resistance is known to be enhanced by a higher crosslinking density, which can be achieved by tuning the processing conditions or using supplementary additives [62]. Given the higher Tg for diluents with a higher functionality and PK versus FA crosslinker, the scratch resistance was expected to increase [63]. Moreover, delamination of the coating after scratching was not observed, showing good interface adhesion for all cases.
The mechanical test results for the maximum tensile stress (σ) and strain at break (ε) were determined from standard tensile testing for FA-epoxy (Figure 11a) and PK-epoxy (Figure 11b), yielding higher elongation and lower strength for PK-epoxy compared with FA-epoxy. Therefore, it can be noticed that the PK-epoxy behaved as a ductile material and the FA-epoxy was more brittle. The main trend showing higher elongation for PK-epoxy remained in the presence of diluents, while the elongation decreased a little for diluents with higher functionality, i.e., MGE > DGE > TGE. The higher ductility for epoxy resins with di- and multifunctional diluents was also demonstrated in previous studies [40]. The increase in ductility and toughness of epoxy via the selection of a suitable diluent and crosslinker is an alternative to the formulation of epoxy-based nanocomposites with improved ductility [64]. The elongation of epoxy with vegetable oil diluents becomes extremely high and represents the high flexibility that can be introduced through the combination of long side chains in both the PK crosslinker and fatty acid molecules. The tensile strength for the epoxy with higher functionality increased as an illustration of the higher degree of crosslinking with DGE and TGE, while maintaining good flexibility for PK-epoxy and becoming brittle for FA-epoxy.
In summary, the relationships between the intrinsic mechanical properties and tribological performance of the epoxy coatings are presented in Figure 12, including the abrasive wear loss, microhardness, and impact resistance, in relation to tensile stress (σ) and strain at break (ε). The wear loss was related to the microhardness of the epoxy coatings (Figure 12a), with better wear resistance for coatings with a high microhardness. This trend was uniquely confirmed for the MGE, DGE, and TGE diluents for both FA-epoxy and PK-epoxy. The trend for vegetable oil diluents followed a higher level of wear resistance for coatings with comparable hardness, illustrating the additional lubricating effect of the vegetable oil diluent. The Lancaster plot representing specific wear rates against ductility (Figure 12b) applied well for the present epoxy coatings, again indicating the additional lubricating properties of the vegetable oils. The good fit of the present experimental data and correspondence to known material models confirmed that the coating properties were well controlled through variations in the composition and related crosslinking conditions. The microhardness and impact resistance were also related to the intrinsic mechanical properties of the epoxy coatings with various diluent concentrations and crosslinker types (Figure 12c), where values for the hard and brittle coatings (FA-epoxy) and more ductile coatings (PK-epoxy) overlapped for the same hardness value. The slightly higher impact strength for PK-epoxy relative to the FA-epoxy coatings was noticed, in agreement with its higher flexibility and ductility. Similarly, an increase in impact strength and ductility of epoxy resins with higher diluent concentrations was demonstrated before [55]. The impact resistance for coatings with difunctional diluents (DGE) was slightly higher than with other diluents (MGE, TGE) in relation to the other mechanical properties, as demonstrated in this study. Depending on the selected diluent in combination with a crosslinker, it was observed that the wear properties of coatings were determined by the intrinsic mechanical properties.
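For a Lancaster-type comparison, the Taber mass loss is commonly converted to a specific wear rate k = ΔV/(F·s). The sketch below is an illustrative calculation only, not the paper's own data reduction; the coating density and the wheel-track circumference are placeholder assumptions.

```python
def specific_wear_rate(mass_loss_g, density_g_cm3, load_g, cycles,
                       track_circumference_m=0.30):
    """Specific wear rate k = ΔV/(F·s) in mm^3/(N·m) from Taber mass loss.

    track_circumference_m is the sliding distance per wheel revolution;
    the default is a placeholder, not a value from the paper.
    """
    volume_mm3 = mass_loss_g / density_g_cm3 * 1000.0  # cm^3 -> mm^3
    force_N = load_g * 9.81e-3                         # gram-force -> N
    distance_m = cycles * track_circumference_m
    return volume_mm3 / (force_N * distance_m)


# Example: 0.050 g loss at 500 g load over 1000 cycles, epoxy density ~1.15 g/cm^3
print(f"k = {specific_wear_rate(0.050, 1.15, 500, 1000):.4f} mm^3/(N m)")
```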
Coating Surface Properties
The hydrophobicity of epoxy coatings was characterized through static water contact angles (Figure 13), which were determined as steady-state values after 10 s of stabilization time. The water contact angles were determined on coatings before and after wear, identifying the effect of wear on long-term hydrophobic protection. The reference uncoated wood substrate showed unstable water contact angles, with immediate absorption of the water, while protection and stabilization against water ingress were obtained for coated wood. The role of the FA versus PK crosslinker for the pure epoxy coatings (no diluents) was determined before [10], where the hydrophobicity of PK was not directly identified on the native coatings and was only expressed in the wear track, as expected from the long hydrophobic polymer side chains. The latter is explained in terms of the relatively higher degree of crosslinking of PK-epoxy and the unfavorable orientation of the hydrophobic groups in the bulk of the coating rather than being exposed at the surface. The hydrophobicity in the presence of vegetable oil diluents was very high due to the presence of non-crosslinked oil diluents, as confirmed before by the low Tg values. The exposure of free oil molecules at the surface after wear evidently provided the highest hydrophobicity. For the other diluents, the hydrophobicity increased with higher concentration and higher functionality of the diluent, with consistent trends for both crosslinker types and consistently somewhat lower contact angles on PK-epoxy relative to FA-epoxy. Some higher contact angles on worn coatings of PK-epoxy compared with FA-epoxy were observed due to both chemical and topographical changes to the coating surface after wear. In particular, the polarity of C-O bonds or long aliphatic tails in the polymer chain of diluents was expressed in the high coating hydrophobicity.
The surface gloss for epoxy coatings with the various diluents was compared after crosslinking with FA and PK (Figure 14). Relative to a pure epoxy coating (no diluents), the gloss improved after mixing with diluents in increasing concentrations. Mainly in the presence of the vegetable oil diluent, an amount of free non-crosslinked fatty acids may migrate to the surface and aid in the formation of a glossy surface. For the other diluents, the lower viscosity and better flow properties of the liquid coating may result in a smoother surface and consequently higher gloss. The gloss for coatings with a PK crosslinker was somewhat lower compared with the FA crosslinker due to the dark brown color. The variations in gloss can be explained in relation to the measurements of surface roughness Sa on the native coatings (Figure 15), where a good relationship was observed between both parameters for both the FA-epoxy and PK-epoxy coatings: the smooth coating surfaces in the presence of diluents were clearly related to a higher gloss. The differences in intrinsic color between the FA-epoxy and PK-epoxy coatings may indeed explain the distinctions in optical properties between both crosslinkers. In conclusion, the lower surface roughness of coatings with diluents contributed to the higher gloss owing to the better flow properties and higher surface homogeneity (e.g., fewer flow defects at lower viscosity).
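The reported correspondence between gloss and surface roughness can also be expressed quantitatively with a simple linear fit, as sketched below; the (Sa, gloss) pairs are placeholders for illustration only and are not the values plotted in Figure 15.

```python
import numpy as np

# Placeholder (Sa, gloss) pairs; actual values would come from Figure 15.
sa_um = np.array([0.05, 0.08, 0.12, 0.20, 0.35])        # surface roughness Sa in micrometres
gloss_60deg = np.array([92.0, 88.0, 81.0, 70.0, 55.0])  # gloss units at 60 degrees

# Least-squares linear fit: gloss = a * Sa + b
a, b = np.polyfit(sa_um, gloss_60deg, 1)
r = np.corrcoef(sa_um, gloss_60deg)[0, 1]
print(f"slope = {a:.1f} GU/um, intercept = {b:.1f} GU, r = {r:.2f}")
```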
Conclusions
The use of reactive diluents with different functionalities and bio-based or fossil-based origins in combination with a fossil-based and a bio-based crosslinker provides a toolbox to optimize the processing conditions and performance of epoxy coatings. The viscosity reduction was related to a combination of the flexibility and functionality of the reactive diluent, where lower concentrations of the glycidyl ether diluents were more efficient compared with the higher concentrations of the vegetable oil diluents. The reduction in viscosity for the mono- and difunctional diluents was within comparable ranges for fluent coating processing, whereas the more branched structure of the bio-based trifunctional diluent evidently increased the viscosity due to the enhanced ability for molecular entanglements. The curing kinetics for epoxy with the difunctional diluent showed the lowest activation energy and highest reaction rates due to a favorable combination of accessibility and reactivity of the epoxy groups, while the kinetics were slowed down for the bio-based trifunctional diluent in parallel with the higher viscosity of the reaction mixture. All types of diluents induced first-order reaction kinetics, irrespective of their source. Although the variations in reaction kinetics for the different diluents were minor, the use of a bio-based phenalkamine crosslinker had a stronger effect on accelerating the crosslinking, mainly at lower temperatures.
The mechanical coating properties were strongly related to the glass transition temperature of the respective epoxy formulations, which was lowered in the presence of the diluents and phenalkamine crosslinkers. The latter indeed introduced more ductile behavior, reflected in the mechanical properties. The reduction in abrasive wear for trifunctional diluents and phenalkamine crosslinkers was the highest, in parallel with the high microhardness and scratch resistance. Any further reduction in abrasive wear in the presence of vegetable oil diluents was due to additional lubrication by the diluents rather than being related to the intrinsic mechanical parameters. In particular, the long-term protection of the coating was retained by high hydrophobicity, which was evidently the highest in the presence of vegetable oil diluents but also remained high for the trifunctional bio-based diluent.
Overall, good relationships could be drawn between the intrinsic mechanical properties and the coating properties for glycidyl ether diluents of different functionalities and the two types of amine crosslinkers, independent of their bio-based or fossil-based origin. As such, the differences in the intrinsic chemical structure of the latter allow the coating performance to be tuned and new formulations with higher bio-based content and appropriate properties to be selected.
Figure 1. Main reaction mechanisms for crosslinking of an epoxy resin using ring-opening reaction in presence of primary or secondary amines.
Figure 2. Variations in viscosity for epoxy resin with different types of diluents as a function of diluent concentration; lines are a visual aid to measuring points for illustrating trend.
Figure 3. Non-isothermal DSC thermographs with exothermal reaction and calculated conversion degree from monitoring the crosslinking reaction of epoxy resin with different diluents and crosslinkers: (a,b) influence of diglycidyl ether diluents with different crosslinkers and (c,d) influence of vegetable oil with different crosslinkers.
Figure 4. Glass transition temperature Tg of crosslinked epoxy resins with different diluents and crosslinkers: (a) details from DSC thermographs of some compositions, (b) Tg values for FA-epoxy and different diluents, and (c) Tg values for PK-epoxy and different diluents.
Figure 5. Wear rates under low load (250 g) and high load (500 g) for epoxy coatings with different diluents and crosslinkers for (a) FA-epoxy and (b) PK-epoxy.
Figure 6. Microscopic evaluation of the wear tracks indicating influence of diluents for some PK-epoxy coatings, including laser intensity image (greyscale image) and height map (color picture).
Figure 7. Detailed optical microscopy of wear tracks indicating influence of diluents for some PK-epoxy coatings.
Figure 8. Microhardness for epoxy coatings with different diluents and crosslinkers for (a) FA-epoxy coatings and (b) PK-epoxy coatings.
Figure 9. Relationship between microhardness measurements and glass transition temperature for epoxy coatings with different diluents and crosslinkers.
Figure 10. Optical microscopy illustrating scratch resistance of PK-epoxy coatings with different diluents after scratching under 10 and 20 N normal loads.
Figure 11. Mechanical test results with overview of stress at break (blue bars) and strain at break (grey bars) of epoxy compositions with different diluents and crosslinkers for (a) FA-epoxy and (b) PK-epoxy.
Figure 13. Coating surface properties for epoxy coatings with different diluents and crosslinkers, including static water contact angles before and after wear, for (a) FA-epoxy and (b) PK-epoxy (statistical variation ±3°).
Figure 14. Coating surface properties for epoxy coatings with different diluents and crosslinkers, including gloss values measured under 60° incident light, for (a) FA-epoxy and (b) PK-epoxy.
Figure 15. Relationships between surface properties of gloss versus surface roughness Sa for FA-epoxy and PK-epoxy coatings.
Table 1. Chemical structures and characteristics of resin and diluents for epoxy coatings.
Table 2. Chemical structures and characteristics of fossil-based amine (FA) and bio-based phenalkamine (PK) crosslinkers for epoxy coatings.
Table 3. Overview of testing matrix for epoxy coating formulations with different types and concentrations of diluents.
Table 4. Thermal and kinetic factors for curing of epoxy coatings with a selection of diluents and crosslinkers.
Impact of Role Conflict on Intention to Leave Job With the Moderating Role of Job Embeddedness in Banking Sector Employees
This study investigates why some employees intend to leave their jobs when facing conflict between family responsibilities and job routines. The present study also reveals the moderating role of on-the-job embeddedness between role conflict and intention to leave the job. Drawing on conservation of resources theory, the paper investigates the buffering effect of the three on-the-job embeddedness components (fit, links, and sacrifice). Data were collected from banking officers because most of these employees have to face role conflict between family and job responsibilities, as banking is considered among the most stressful jobs. The collected data were analyzed by applying structural equation modeling. Results indicate that role conflict significantly influences intention to leave the job. Furthermore, the study shows that on-the-job embeddedness moderates the relationship between role conflict and intention to leave. The results suggest that organizations can reduce turnover intention during times of work-life conflict by developing employee on-the-job embeddedness. This study provides some insights to managers on why many employees leave their jobs and how to overcome this problem. Management should also offer extra and readily available resources in periods of greater tension to minimize early thoughts of quitting.
INTRODUCTION
Employee turnover is not a new trend. Millions of people around the world leave their jobs annually. Despite all the benefits attached to being employed, employees decide to quit. Researchers have found several variables that affect the choice to leave during periods of work-life imbalance (Aboobaker et al., 2017). The existing literature draws our attention to multiple factors that lead to an employee's intention to leave and to employee turnover. Those factors include internal factors related to the job itself and external factors associated with a person's personal and social life (Mansour and Tremblay, 2018). Numerous investigators have found that certain organizational factors, including organizational commitment, organizational support, and supervisors' support, along with some social and personal stress factors, might trigger the intention to leave a job among employees (Zhang L. et al., 2019). Role conflict is one of those factors that might lead employees toward quitting their jobs. Role conflict refers to the unpleasant experience a person faces due to the confronting and clashing demands of different social roles and statuses (Anand and Vohra, 2020). The two types of role conflict are intra-role conflict and inter-role conflict. Intra-role conflict refers to the conflict a person faces from increased mental pressure and stress caused by conflicting expectations and demands within one domain of life. For example, tension in the job due to multiple responsibilities can be considered a result of intra-role conflict. The other type of role conflict is inter-role conflict, which a person faces due to conflicting demands from different life domains (Mansour and Tremblay, 2016). For example, an employee's inability to be a good parent or partner due to a demanding professional life can be considered an inter-role conflict that might lead them to quit their job (Dai et al., 2019).
When an employee leaves a job, they face two possible consequences. The person who leaves a job either joins another organization or is left unemployed. In the case of unemployment, a person faces several consequences. The financial situation of the quitter is an immediate and distressing outcome of sudden unemployment (Bayarcelik and Findikli, 2016). It is well documented that not every person gets a new job right after leaving an employer. Even a break of a few months in the regular inflow of income can put the person into further distress (Samadi et al., 2020). Financial instability then leads to different issues related to a person's health, mainly psychological conditions. Unemployed people commonly report depression and anxiety for a long time (Minamizono et al., 2019). Social outcomes emerge in the shape of worry about dependents' financial security and in the form of restricted social gatherings (Jyoti and Rani, 2019).
Every organization aims to have high productivity and consistently increasing profitability. High profits are indicators of the success of companies. Employees are significant contributors to a company's success (Gyensare et al., 2016). The consequences of employee turnover for the employer are frequently documented. An employee's sudden departure can hinder the operations of an office for some time (Sun and Wang, 2017). It also affects team dynamics within an organization. Organizations invest a lot in their employees' training (Cho et al., 2017). All the company's investment in an employee is lost when a trained employee no longer continues to work in the organization (Lee, 2018). Highly motivated individuals serve as the driving forces of profit and competitiveness (Zimmerman et al., 2019). Along with increasing productivity and profits, organizations try to have lower turnover, as managers are familiar with the importance of employee retention (Stamolampros et al., 2019). Numerous researchers have stressed the importance of employee retention and recommended incorporating retention strategies as fundamental principles in organizational policies (Degbey et al., 2021). The adoption of retention strategies is also recommended to small and medium-sized business enterprises for the achievement of consistent productivity and rising profits (Alhmoud and Rjoub, 2019). This research investigates whether on-the-job embeddedness affects the connection between work-life conflict and an employee's intention to leave. Job embeddedness theory describes an employee's retention as resulting from the employee's unique set of ties to the organization. In addition, it has been shown that an employee's embeddedness may serve as a buffer against bad events and ill circumstances. This study contributes to the work-life balance and job embeddedness literature by investigating whether on-the-job embeddedness moderates the effect of work-family conflict on employees' leaving intention.
On the other hand, numerous attempts have been made to understand the association between role conflict and intention to leave. Still, no investigations have been made so far that are specifically directed toward the banking sector. When it comes to the context of Pakistan, the literature seems even more limited on the subject. Therefore, the purpose of this investigation is to empirically explain the association between role conflict and intention to leave and the role of job embeddedness as a moderator of their relationship in the banking sector of Pakistan. This study will contribute to the existing literature by producing information about the causal relationship among role conflict, intention to leave, and job embeddedness. This study also aims to provide an empirical understanding of all these concepts in the context of Pakistan's banking sector for the development of effective retention policies by the banks.
The following paragraphs provide a brief review of the relevant literature, followed by the hypothesized relationships between the focal constructs. Then the research methods, analyses, and findings are reported and discussed. Finally, the theoretical and managerial implications as well as the limitations and avenues for future research are discussed.
REVIEW OF LITERATURE
A review of the literature on the relevant concepts and sectors is presented below. Sasso et al. (2019) revealed that intention to leave, or intention to quit, refers to potential plans to leave the job with an employer. It is a global phenomenon. Multiple factors contribute to it, including personal factors and employers' behavior, performance appraisal and feedback, absenteeism, burnout, lack of recognition, personal and professional advancement, miscommunication, and job satisfaction (Holland et al., 2019). A review of some of the critical factors that have been reported to be associated with leaving is presented in the following paragraphs.
Intention to Leave
Employers' attitude has been reported frequently in the literature as an indicator that triggers the intention to leave in an employee (Meriläinen et al., 2019). The majority of people who leave their job have mentioned the employer's unprofessional attitude as the main factor that led them to quit. Numerous researchers have reported that employees are less likely to quit their jobs if they think their employers are professional (Chin et al., 2019). The second important factor that has been frequently discussed in the literature is appreciation and performance appraisal. Several researchers have reported that workers feel motivated when their managers and colleagues appreciate their work (Lo et al., 2018). Positive feedback on one's work is associated with improved performance. Employees have reported in numerous studies that appreciation inspires them to work better and harder with increased interest. However, investigators have noted that many organizations lack effective performance appraisal mechanisms (Yamaguchi et al., 2016).
de Oliveira et al. (2017) discussed how an ineffective appraisal system gives rise to a lack of recognition. Employees have reported that they work very hard but do not get credit for their contribution to the enterprise. As a result, they are left with a sense of worthlessness. This directly demotivates the workers and causes lessening interest in their work (Mihail, 2017). This affects the productivity of a firm negatively. Workers who left their jobs have reported in surveys that they were not concerned with the firm's overall productivity due to the lack of recognition of their efforts at the workplace. People do not give their best at work when their work is not credited (Bohle et al., 2017). Mahomed and Rothmann (2020) expressed that leaving is damaging for both employees and employers. An employer's investment in a managerial-level employee's training is wasted if they quit the job (Asakura et al., 2020). A new employee takes a lot of time to understand the organizational environment and working mechanisms. An organization has to invest in a new employee's training when an experienced manager leaves the organization (Domínguez Aguirre, 2019). If the intention to leave leads to job quitting, it can create personal, social, and psychological problems for the quitter (Loi et al., 2006).
Not every firm understands the importance of employee retention, as it has been reported that not all firms adopt retention strategies (MacIntosh and Doherty, 2010). However, many firms adopt different approaches and techniques to retain their workers. Retention strategies vary from firm to firm (Frye et al., 2020). Incentives, increments, competitive remuneration packages, promotions, appraisal systems, training, and recognition of work along an attractive working environment are among the tools commonly used by employers for employee retention (Silva et al., 2019).
Role Conflict
The concept of "role conflict" defines a kind of internal conflict in which job and family role obligations are at odds with one another to some degree, making it difficult to fulfill requirements in one area while having to meet them in the other (Zainal Badri and Wan Mohd Yunus, 2021). Both work and family roles may be defined by the obligations placed on the individual by their work colleagues and family members and the values the person has about their work and family role behavior. When work and family needs clash, it is easy to believe that the workplace is interfering with family happiness and pleasure (Díaz-Fúnez et al., 2021).
Role conflict may also arise when family issues interfere with job happiness and job performance. Dividing time between job and family (multiple roles) may lead to inter-role conflict, since these responsibilities may deplete one another's resources (Del Pino et al., 2021). To fulfill the needs of one role, the expectations of the other roles are ignored. If family responsibilities come after the job, the family strain may detract from the ability to fulfill the job function (Gul et al., 2021).
Inter-role conflict and intra-role conflict are two commonly used concepts in the literature on role conflict. An inter-role conflict is a form of role conflict that refers to stressful conditions resulting from conflicting demands from different life spheres (Iannucci and MacPhail, 2018). Workers have frequently reported inter-role conflict due to continuous pressure from the employer and the increasing demands of family members (Karim, 2017; Sarfraz et al., 2021). Intra-role conflict, another form of role conflict, is associated with a single role's expectations and needs. It can occur due to either expectations in the workplace or family members' demands (Grzywacz, 2020). Employees frequently report intra-role conflict. Both inter-role conflict and intra-role conflict cause employees to experience role strain, another often-used concept in the literature regarding role conflict (Wendling et al., 2018). Role strain can be explained as tension that a person experiences when he/she faces competing demands within one particular role and finds it challenging to perform according to the expected roles (Jamil et al., 2021a).
Inter-role conflict can occur for multiple reasons, but role ambiguity is reported as a common reason people experience role conflict at the workplace (Wehner, 2016). Role ambiguity is when an employee lacks awareness of their job-related duties (Oviatt et al., 2017). Numerous studies demonstrate that employees who do not perform as per their employers' expectations are usually unclear about their responsibilities in the workplace (Purohit and Vasava, 2017). Such role ambiguity is reported mainly as a result of miscommunication. Role ambiguity can be overcome through clear communication of job responsibilities and duties (Furtado et al., 2016).
Role conflict may also result from workers' failure to manage their work and family (non-work) obligations on an equal footing (Yousaf et al., 2020). This kind of conflict may suggest that employees' job duties interfere with their happiness and success in their personal lives or that employees' personal lives interfere with their satisfaction and success at work (Naseem et al., 2020). Therefore, it is probable that role conflict will have adverse effects, such as stress and dissatisfaction, and interfere with the ability to fulfill work or family obligations (Baranik and Eby, 2016;Mohsin et al., 2021). Furthermore, this tension may result in voluntary turnover, according to studies conducted by Eby et al. (2014), which supports this assertion. Hence, we proposed the following hypothesis: H1: Role Conflict has a significant impact on intention to leave a job.
The Moderating Role of Job Embeddedness
The term "on-the-job embeddedness" refers to a worker's connection to social ties formed at work, making them reluctant to quit the company (Jiang et al., 2012). Lee et al. (2004) give a detailed explanation of the effect of on- and off-the-job embeddedness on job performance and organizational citizenship attitudes, and of how embeddedness lowers the impact of absenteeism on overall turnover (Amjad et al., 2018). Sekiguchi et al. (2008) demonstrated that the relationships between leader-member exchange (LMX) and task performance, and between LMX and organizational citizenship behaviors (OCBs) and organization-based self-esteem, are reinforced by job embeddedness (the composition of on-the-job and off-the-job embeddedness). Karatepe (2012) showed that embeddedness reinforces the negative effect of perceived organizational and co-worker support (Karatepe, 2012; Naseem et al., 2021) and also strengthens the effects of perceptions of organizational justice (Karatepe and Shahriari, 2014) on employee turnover intentions (Dunnan et al., 2020a). Özçelik and Cenkci (2014) demonstrated that workers with a high level of job embeddedness reported a weaker connection between job search and turnover than workers with lower levels of embeddedness. Burton et al. (2010) demonstrated a connection between paternalistic leadership and in-role work performance. Peachey et al. (2014) reported research on embeddedness showing that the impact of unfolding-theory-type shocks on worker turnover can be weakened by embeddedness, and that embeddedness strengthens the effect on OCBs. Peachey et al. (2014) also demonstrated that employee embeddedness strengthened the connection between workers' past experiences of workplace bullying and subsequent workplace hostility. Al-Ghazali (2020) explained that worker embeddedness could strengthen the effect of worker perceptions of procedural justice and communication on threat appraisal during organizational restructuring.
The Moderating Effect of On-the-Job Fit Embeddedness
This study claims that on-the-job fit embeddedness is a valuable resource for helping employees deal with work-life balance issues (Kiazad et al., 2014a). Work-life conflict is less likely to impact workers with greater fit embeddedness. Consequently, the link between work-family conflict and intention to leave is weaker for these employees than for those with lower levels of fit. Two mechanisms have been suggested. First, employees with a better person-job fit are more likely to have the essential work skills, making it easier for them to fulfill the ordinary demands of their position compared with employees with a worse fit (Dunnan et al., 2020b). Employees with low levels of person-job fit will therefore deplete resources more quickly in fulfilling day-to-day work needs than those with high levels of person-job fit, and the extra depletion of resources that results from role conflict between the job and home domains adds to the stress and exhaustion felt in both fields (Al-Ghazali, 2020). As a result, workers who are not a good match for the company will be hurt more, whereas workers who fit in well on the job are less likely to be impacted and less likely to be motivated to leave.
Second, workers who fit better may also adapt to the demands of work-family conflict. Fasbender et al. (2019) suggest that conflict between a person's ideas and expectations and the reality they encounter prompts an employee to change their stance and attitudes. People who are more in sync are better able to find and create new workplace arrangements (Chan et al., 2019; Naseem et al., 2020b). Employees with higher degrees of fit embeddedness will therefore feel less of a negative impact. Hence, we proposed the following hypothesis: H2: On-the-job fit positively moderates the relationship between role conflict and intention to leave the job.
The Moderating Effect of On-the-Job Link Embeddedness
Employees who have a greater level of on-the-job link embeddedness are more likely to know more individuals at the organization and to be more engaged in the organization's matters than their counterparts (Lyu and Zhu, 2019). Because of this connectivity, workers have greater access to possibilities inside the organization and to services like career sponsorship from higher-ranking colleagues (Coetzer et al., 2018). Thus, link embeddedness may be seen as a valuable resource that allows employees to more easily access ameliorating support services offered by the organization in which they work. This has the potential to operate in two ways. First, increasing levels of social connection throughout an organization may offer personal support for those who are experiencing problems as a result of work-family conflict (Sender et al., 2018). Second, employees who have built up social capital can better identify and negotiate administrative support services, such as organizational work-life balance programs, feel more comfortable accessing these services, and obtain supervisor support. In a recent study, researchers found that some employees are hesitant to use organizational support services and that employee use of family-friendly corporate resources can be influenced by the quality of employee-manager relations (Jamil et al., 2021b). Workers with a greater degree of link embeddedness have an easier time using existing ameliorative measures, resulting in a reduction in the stress experienced from work and family conflict compared with employees with a lower level of link embeddedness. Hence, we proposed the following hypothesis: H3: On-the-job link positively moderates the relationship between role conflict and intention to leave the job.
The Moderating Effect of On-the-Job Sacrifice Embeddedness
While fit and link embeddedness have been suggested to mitigate the impact of work-life conflict on leaving intention, sacrifice embeddedness has been proposed to enhance that relationship. According to COR theory, workers with a higher level of sacrifice embeddedness are more likely to accumulate intrinsic resources than employees with a lower level of sacrifice embeddedness. When intrinsic resources are accumulated in larger quantities, there is a greater likelihood of eventual loss. At the same time, unlike instrumental resources, increases in intrinsic resource levels do not always result in an improvement in an entity's ability either to withstand threats or to obtain additional resources (Hobfoll, 2011).
According to COR theory, a threat to current resources will elicit a more significant response than an opportunity to acquire new resources or invest existing resources in the short term (Hobfoll, 2011). Employees with more to lose, such as high levels of intrinsic resources, will be more sensitive to risks, such as the depletion potential of work and life conflict since they have more to lose. Kiazad et al. (2014b) discovered that when confronted with a threat such as a psychological contract breach, employees with higher sacrifice embeddedness reacted more strongly in defense of their existing resources when compared to employees with lower sacrifice embeddedness (Kiazad et al., 2014a). Those workers who had a high level of sacrifice embeddedness used a resource defense strategy to reduce their exposure to the resource-depleting danger, which was shown to be effective in that research.
According to the findings of this research, workers who have a higher level of sacrifice embeddedness are more likely to undertake resource-saving measures, which may include considering quitting the organization (Stewart and Wiener, 2021). As a result of being unable to ward off threats to resources or obtain replacement resources, workers who have made significant sacrifices are more inclined than employees who have less to lose to contemplate quitting the organization in order to protect their existing resources (Amoah et al., 2021). Therefore, workers with greater degrees of sacrifice embeddedness are more likely to experience conflict between work and home, and they are also more likely to consider quitting their current position. Hence, we proposed the following hypothesis: H4: On-the-job sacrifice positively moderates the relationship between role conflict and intention to leave the job (see Figure 1 for all relationships).
METHODOLOGY
Data Collection and Sample Size
The study sample included branch managers, operations managers, credit officers, and grade one, two, and three officers from major private banks situated in three big cities of Pakistan: Faisalabad, Lahore, and Islamabad. We only included respondents who had been working in the banking sector for more than 5 years. The average experience of respondents in the banking sector was 8.5 years, and their average time at their current location was 2.5 years. A pilot study with 30 participants was carried out. Following their recommendations, revisions were made to the final questionnaire to make it more understandable for respondents. To ensure the content validity of the measures, three academic experts in human resource management analyzed and improved the items of the constructs. The experts checked for spelling and grammatical errors and ensured that the items were correct. The experts proposed minor text revisions to the role conflict and job sacrifice items and advised that the original number of items be maintained. The sample size was determined using the criterion proposed by Kline (2015), who suggested at least ten responses per item. Therefore, a minimum of 180 samples was needed, given the 18 items in this study. To increase reliability and validity, 250 questionnaires were distributed to research participants. At the time of scrutiny, 30 questionnaires were found to be incomplete and were excluded, leaving a final sample of 220 respondents.
Questionnaire and Measurements
The study used items established in prior research to ensure the reliability and validity of the measures. All items were evaluated on five-point Likert-type scales, where 1 = strongly disagree, 3 = neutral, and 5 = strongly agree.
Role Conflict was measured with five items adapted from the study of Netemeyer et al. (1996); the sample item is, "The demands of my work interfere with my home and family life." Job Embeddedness, with its three dimensions: on-the-job fit, on-the-job link, and on-the-job sacrifice, was assessed with items adapted from the study (Felps et al., 2009). Job Embeddedness was measured with nine items; on-the-job fit consisted of three items, and the sample item is, "I feel like I am a good match for my organization." The on-the-job link consisted of three items, and the sample item is, "I work closely with my coworkers." Finally, on-the-job sacrifice consisted of three items, and the sample item is, "I would sacrifice a lot if I left this job." To assess the intention to leave, the four items were adopted from the work of Abrams et al. (1998) with the sample item, "In the next few years, I intend to leave this company."
Demographic Characteristics
This study analyzed the data through Smart partial least squares (PLS); primary data were collected from 220 respondents.
DATA ANALYSIS
The study used PLS modeling with Smart-PLS version 3.2.8 (Ringle et al., 2015) as the numerical tool to analyze the structural and measurement models, as it can accommodate a smaller number of observations without normality assumptions, and survey research data are generally not normally distributed (Chin et al., 2003). Also, since data were collected using a single source, we followed Kock (2015) in testing for common method bias using the full collinearity method. The test showed that all the VIFs were lower than five; thus, we can conclude that common method bias is not a severe problem in our study (Sánchez-Hernández et al., 2020).
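For readers reproducing this check outside Smart-PLS, a full-collinearity style test can be approximated by computing variance inflation factors over the construct scores, as in the sketch below. The construct scores here are simulated placeholders rather than the latent variable scores from this study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Placeholder construct scores; in practice these would be the latent
# variable scores exported from the PLS software.
rng = np.random.default_rng(0)
scores = pd.DataFrame(
    rng.normal(size=(220, 5)),
    columns=["RC", "OTJF", "OTJL", "OTJS", "ITL"],
)

X = sm.add_constant(scores)
vifs = {
    col: variance_inflation_factor(X.values, i)
    for i, col in enumerate(X.columns)
    if col != "const"
}
print(vifs)  # values below 5 suggest common method bias is not severe
```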
Reliability and Validity of the Constructs
The measurement model explains the relationships among the constructs and the indicator variables. As part of the measurement model assessment, all the indicators had factor loadings greater than 0.60 and were retained in the model (Gefen and Straub, 2005). The reliability analysis is the first element of the measurement model assessment and is based on composite reliability. According to Ramayah et al. (2018), the required threshold value for composite reliability is 0.70. The values obtained for the indicators are greater than 0.7, confirming the measurement model's composite reliability (see Table 2). The composite reliability values of all the constructs are also greater than 0.7, which further strengthens the reliability of all the variables (see Figure 2). Convergent validity evaluates whether or not constructs measure what they are supposed to measure. In this study, convergent validity was assessed by calculating the average variance extracted (AVE), which shows whether the construct variance can be explained by the selected items (Fornell and Larcker, 1981a). According to Bagozzi and Yi (1988), the cut-off value for the average variance extracted is 0.5, and the AVE values of all constructs are greater than the recommended threshold, as shown in Table 2. This reflects the convergent validity of the measurement model. Table 3 displays the discriminant validity assessment, whereby the HTMT ratios were all below the 0.90 cut-off value. The confidence intervals do not include a zero or one, as suggested by Henseler et al. (2016). Thus, we can conclude that the measures used in this study are reliable, valid, and distinct.
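Both composite reliability and AVE are simple functions of the standardized outer loadings. The sketch below shows the standard formulas; the loading values are illustrative placeholders, not the loadings reported in Table 2.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum(lambda))^2 / [(sum(lambda))^2 + sum(1 - lambda^2)]."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Placeholder loadings for one construct (e.g., role conflict, 5 items).
rc_loadings = [0.78, 0.81, 0.74, 0.69, 0.83]
print(f"CR  = {composite_reliability(rc_loadings):.3f}")       # should exceed 0.70
print(f"AVE = {average_variance_extracted(rc_loadings):.3f}")   # should exceed 0.50
```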
Discriminant Validity
In short, the Fornell and Larcker technique demonstrates discriminant validity when the square root of the AVE exceeds the correlations between the measure and every other measure.
To assess the model's discriminant validity, the AVE estimate of every construct was produced using the Smart-PLS algorithm, as shown in Table 3.
The values that lie off the diagonal are smaller than the square root of the average variance extracted (highlighted on the diagonal), supporting the scales' satisfactory discriminant validity. Consequently, the outcome affirms that the Fornell and Larcker (1981b) criterion is met.
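The Fornell and Larcker check can be reproduced by comparing the square root of each construct's AVE with its correlations with all other constructs, as in the sketch below; the correlation matrix and AVE values are placeholders for illustration.

```python
import numpy as np
import pandas as pd

constructs = ["RC", "OTJF", "OTJL", "OTJS", "ITL"]

# Placeholder inter-construct correlations and AVE values (not from Table 3).
corr = pd.DataFrame(
    [[1.00, 0.21, 0.18, 0.25, 0.46],
     [0.21, 1.00, 0.35, 0.30, 0.28],
     [0.18, 0.35, 1.00, 0.33, 0.24],
     [0.25, 0.30, 0.33, 1.00, 0.31],
     [0.46, 0.28, 0.24, 0.31, 1.00]],
    index=constructs, columns=constructs,
)
ave = pd.Series([0.62, 0.58, 0.60, 0.57, 0.65], index=constructs)

sqrt_ave = np.sqrt(ave)
for c in constructs:
    max_off_diag = corr.loc[c].drop(c).abs().max()
    ok = sqrt_ave[c] > max_off_diag
    print(f"{c}: sqrt(AVE) = {sqrt_ave[c]:.2f}, max |r| = {max_off_diag:.2f}, "
          f"criterion {'met' if ok else 'violated'}")
```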
Structural Model
The structural model reflects the paths hypothesized in the research model and is assessed based on multicollinearity, the coefficient of determination R², predictive relevance Q², and the significance of the paths (see Figure 3). The goodness of the model is determined by the strength of the structural paths, as indicated by the R² value for the dependent variable (Hair et al., 2014). All the VIFs were below five; thus, this confirms that the structural model results are not negatively affected by collinearity. Furthermore, following the rules of thumb, the R² value for intention to leave (0.551) exceeds the minimum value of 0.1 suggested by Falk and Miller (1992), confirming a satisfactory level of predictability. Furthermore, the Q² value of the endogenous construct is considerably above zero, thus providing support for the model's predictive relevance regarding the endogenous latent variables.
Next, to assess the four hypotheses developed, we ran bootstrapping with 5,000 subsamples. First, we assessed the direct relationship before looking at the moderation effects. The results revealed a significant relationship between role conflict and intention to leave (β = 0.32, p < 0.01, BCI LL = 0.169 and BCI UL = 0.463), which supports H1 of our study. The moderation hypotheses of job embeddedness in the path between role conflict and intention to leave (H2, H3, and H4) were tested using the two-stage continuous moderation analysis (Hair et al., 2017). The moderating effects of RC × OTJF → ITL (β = 0.047, p < 0.01, BCI LL = −0.193 and BCI UL = 0.298), RC × OTJL → ITL (β = 0.089, p < 0.01, BCI LL = −0.417 and BCI UL = 0.061), and RC × OTJS → ITL (β = 0.023, p < 0.01, BCI LL = −0.115 and BCI UL = 0.22) are statistically significant at the 0.01 level. This gives support for H2, H3, and H4 of this study (see Table 4).
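Conceptually, each moderation test amounts to estimating the coefficient of a product term between role conflict and the moderator, with bootstrapped confidence intervals. The sketch below illustrates this on simulated data for a single moderator; it is not a reproduction of the Smart-PLS two-stage estimation, and all values are placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 220

# Simulated standardized construct scores (placeholders, not study data).
rc = rng.normal(size=n)      # role conflict
otjf = rng.normal(size=n)    # on-the-job fit (moderator)
itl = 0.32 * rc + 0.10 * otjf + 0.05 * rc * otjf + rng.normal(scale=0.8, size=n)

def interaction_beta(x, mod, y):
    # Regress y on x, the moderator, and their product term; return the
    # coefficient of the product (interaction) term.
    X = sm.add_constant(np.column_stack([x, mod, x * mod]))
    return sm.OLS(y, X).fit().params[-1]

# Percentile bootstrap with 5,000 resamples for the interaction coefficient.
betas = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    betas.append(interaction_beta(rc[idx], otjf[idx], itl[idx]))
lo, hi = np.percentile(betas, [2.5, 97.5])
print(f"interaction beta = {interaction_beta(rc, otjf, itl):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```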
Also, for H2 the moderation graph indicates that at a low level of on-the-job fit there is a low impact of RC on ITL; however, increasing OTJF enhances the significant positive effect of RC on ITL (see Figure 4A). Moreover, the H3 moderation graph shows that at a low level of the on-the-job link there is a low impact of RC on ITL; again, though, increasing OTJL augments the significant positive effect of RC on ITL (see Figure 4B). Finally, the graph for the last relationship shows that at a low level of on-the-job sacrifice there is a low impact of RC on ITL; nevertheless, increasing OTJS strengthens the significant positive effect of RC on ITL (see Figure 4C).
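These moderation graphs correspond to simple slopes of role conflict on intention to leave evaluated at low (−1 SD) and high (+1 SD) levels of the moderator. Given a fitted model of the form ITL = b0 + b1·RC + b2·MOD + b3·(RC × MOD), the slope at a moderator value m is b1 + b3·m, as the sketch below shows with hypothetical coefficients.

```python
# Simple slopes from hypothetical fitted coefficients (placeholders, not the
# Smart-PLS estimates): ITL = b0 + b1*RC + b2*OTJF + b3*(RC*OTJF)
b1, b3 = 0.32, 0.05
sd_mod = 1.0  # standardized moderator, so +/-1 SD is +/-1

for label, m in [("-1 SD", -sd_mod), ("+1 SD", +sd_mod)]:
    print(f"simple slope of RC on ITL at {label} OTJF: {b1 + b3 * m:.2f}")
```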
DISCUSSION AND CONCLUSION
This present research aims to determine whether job embeddedness moderates the relationship between role conflict and intention to leave in the banking sector. For highly embedded employees in the banking sector, role conflict is a common occurrence. This study analyses job embeddedness in terms of employee retention. In particular, the results suggest that job embeddedness moderates the relationship between work-family conflict and job turnover intention. The results indicate that role conflict and job embeddedness are opposing factors that lead people to quit or to remain within their banks. The finding that job embeddedness connects workers to their organizations (Lee et al., 2004; Jiang et al., 2012) is supported by embeddedness theory (Mitchell and Lee, 2001). This means that job embeddedness has a beneficial influence when companies seek to minimize "dysfunctional" turnover. In terms of the suggested hypotheses, all three moderating effects are straightforward to comprehend. Employees who have a high level of link embeddedness have a bank of instrumental resources at their disposal that may help them cope more effectively with the difficulties of work and family conflict. The stronger the employee's connection to the organization and the higher the link embeddedness, the better the employee can access and utilize a variety of existing organizational resources (Jung and Kim, 2021). Employees who are more integrated into their work will be more able to obtain interpersonal assistance from their colleagues and will be less reluctant and more effective in using existing organizational support systems due to their integration. Thus, employees reporting a weaker connection between work-family conflict and leaving intention are more likely to be those with greater degrees of link embeddedness in their job. Burton et al. (2010) showed that the embeddedness of workers reinforced the link between a history of bullying in a workplace and resulting violence in the workplace. Theory and studies on job embeddedness typically indicate that embeddedness in the workplace has beneficial effects where the needs of workers and institutions are matched. In other words, embedded employees are expected to draw on their workplace links, fit, and sacrifices as resources in the job domain. Because of the limitations generated by role conflict, our results indicate that an employee with a high degree of embeddedness is more likely than a less embedded employee to feel mental fatigue, remorse, and aggression. Thus, highly job-embedded employees are more likely to be adversely impacted by role conflict. By concentrating on previously untested competing factors, this research adds to the sparse work on the "dark side" of job embeddedness (Sekiguchi et al., 2008; Burton et al., 2010; Allen et al., 2016).
This research provides deep insights into the moderating role of job embeddedness between role conflict and employees' intention to leave and exposes significant gaps in job embeddedness theory. This study also has implications for limiting role conflict and for implementing job embeddedness measures (Oviatt et al., 2017). In addition, the results indicate that those who are more deeply embedded in the job may increase their resource investment under role conflict, as the relationship with turnover was negatively impacted by a higher degree of job embeddedness. However, over the same period, these employees became more vulnerable to mental fatigue, remorse, and hostility because of resource drain. With these negative findings in mind, this research goes beyond recent studies on the moderating impact of embeddedness on the relationship between role conflict and turnover (Ringle et al., 2015).
THEORETICAL AND PRACTICAL IMPLICATIONS
The study provides several implications. First of all, this paper shows that job embeddedness has a moderating impact on the relationship between role conflict and turnover intention. This contributes to the role conflict literature by extending the literature on job embeddedness. Essentially, the three elements of job embeddedness do not function as a unified entity in this paper. Instead, the three on-the-job components had differing impacts: fit embeddedness had no effect, link embeddedness had an ameliorating effect, and sacrifice embeddedness had an amplifying effect. This shows that there is no fixed linkage among the on-the-job embeddedness components (Kiazad et al., 2014b; Bohle et al., 2017). JET researchers should therefore use the three on-the-job components instead of combined measures such as overall job embeddedness or composite on-the-job embeddedness. Second, this article uses COR theory to explain how job embeddedness moderates the impact of role conflict on employees' turnover intention. This research adds to earlier findings on role conflict, strain, and burnout that explain the impact of work and family conflict on the depletion and acquisition of employee resources (Ghosh et al., 2017).
In addition, this paper points to the need to establish processes for understanding how job embeddedness affects perceptions and attitudes (Kiazad et al., 2014b). Finally, in connection with this study, the growing COR research describes the influence of employee embeddedness as a form of resource abundance (Kiazad et al., 2014b; Wehner, 2016).
This study has three practical implications. Firstly, the detrimental effects of role conflict can be minimized by improving workers' use of the existing ameliorative arrangements in the company. This research indicates that workers with stronger link embeddedness are more capable of using interpersonal and organizational resources. Management can enhance the efficiency of existing ameliorating arrangements by increasing interaction between employees, subordinates, and the organization (Holtom, 2016). Furthermore, the design of corporate training programs, the development of teamwork and decision-making, organizational training and professional development programs, social networks within the company, and employee referral schemes can be used to improve employee embeddedness (Treuren, 2019).
Secondly, organizations should introduce measures to make employees' connections to coworkers, bosses, and the company more effective. To create a relationship between employees and the company, mentorship programs may be set up. In addition, teams can have work and decision-making responsibilities expanded, in-house training and career development programs implemented, and referral systems used to hire employees. The various programs may be tailored to the preferences and requirements of different groups of employees.
Thirdly, even though workers with greater sacrifice embeddedness had fewer intentions to leave their jobs, these individuals are more sensitive to work and family conflict than employees with lower levels of sacrifice embeddedness. Therefore, at times of more conflict, management may offer extra and readily available ameliorative services to reduce the likelihood of early departure thoughts.
LIMITATIONS AND FURTHER RESEARCH
There are also limitations to this study. First, it employs cross-sectional measures, as is common in social science studies, to deduce the complexities of workers' behavior. Longitudinal research is required to explain the effect of job embeddedness on role conflict and turnover intention. Second, the research is based on workers' perceptions and is thus susceptible to social desirability and the usual empirical biases. While common method bias in this study is minimal, further analysis could minimize parameter estimation error by using actual turnover data or data from various sources, for example, a supervisor or colleague, collected at different periods. Thirdly, the sample size is small (n = 220), which raises an issue of generalizability and may lead to Type 2 errors, and this also limits the study. Finally, the limited size of the female cohort prevented substantial gender disparities from being investigated.
Further studies should strive to obtain larger and more balanced samples so that gender effects can be compared. Furthermore, a single set of data from one sector of Pakistan was used to derive the paper's results; the findings are therefore not necessarily generalizable. Unmeasured job alternatives inside and outside the company may also bias the estimation. The generalizability of these results would be improved by further analysis in other geographic, social, and economic contexts. This paper outlines a framework for the influence of job embeddedness and role conflict and has identified new problems to address.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
A Review of Machine Learning Techniques for Applied Eye Fundus and Tongue Digital Image Processing with Diabetes Management System
Diabetes is a global epidemic and it is increasing at an alarming rate. The International Diabetes Federation (IDF) projected that the total number of people with diabetes globally may increase by 48%, from 425 million (year 2017) to 629 million (year 2045). Moreover, diabetes has caused millions of deaths, and the number is increasing drastically. Therefore, this paper addresses the background of diabetes and its complications. In addition, this paper investigates innovative applications and past research in the area of diabetes management systems with applied eye fundus and tongue digital images. Different types of existing applied eye fundus and tongue digital image processing with diabetes management systems in the market and state-of-the-art machine learning techniques from previous literature have been reviewed. The implication of this paper is to provide an overview of diabetes research and of the new machine learning techniques that can be proposed to address this global epidemic.
INTRODUCTION
Diabetes Mellitus (DM) or diabetes is a long-term chronic disease, in which the human body loses the ability to produce or respond to the hormone insulin. Due to this deficiency, abnormal metabolism of carbohydrates, fat, and proteins occurs in the body. According to the World Health Organization (WHO) [1], an estimated 422 million adults were living with diabetes in 2014. The global prevalence of diabetes has nearly doubled since 1980, rising from 4.7% to 8.5% in the adult population. In addition, data scientists have predicted that the prevalence of diabetes will increase drastically by the year 2045. A report from the International Diabetes Federation (IDF) [2] projected that the total number of people with diabetes globally will increase by 48%, from 425 million people (year 2017) to 629 million people (year 2045).
To reduce the prevalence of diabetes, innovative applications have been commercialised and are available in the market today.
A diabetes management system (DMS) is an innovative tool that assists diabetic patients in self-checking their blood glucose level, calorie intake, and insulin doses. Traditionally, an invasive DMS uses a blood glucose meter, a lancing device, and an online data management platform to keep track of the blood glucose level. Conversely, non-invasive DMSs have been invented to ease the burden on patients. Patients do not need to prick their fingers to keep track of their blood glucose level. They only need to attach a small device onto the skin, and the device will automatically track the blood glucose level in real time without interfering with their daily activities.
Until today, although there are many invasive and non-invasive DMSs (e.g. Accu-Chek [3] and Dexcom [4]) available in the market, the scientific literature also reports that new research has been conducted to innovate on the existing DMS [14][15][16][17][18]. Instead of collecting blood samples (invasively or non-invasively), eye and tongue images have become another area of focus for predicting the prevalence of diabetes or preventing diabetes complications. Therefore, machine learning plays an important role in this research area. Gulshan et al. [5] applied a machine learning technique to develop an algorithm that detects Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME) using retinal fundus images. The study achieved high sensitivity and specificity of 96.1% and 93.9%, respectively. Machine learning (ML) is a sub-field of Artificial Intelligence (AI) that provides machines with the ability to learn and improve over time from experience without being given explicit instructions. Thus, this paper investigates innovative applications and past literature in the area of DMSs with applied eye fundus and tongue digital images. Different types of existing applied eye fundus and tongue digital image processing with DMS and state-of-the-art machine learning techniques from previous literature have been reviewed.
Diabetes Complications
Diabetes Mellitus (DM) is one of the most common endocrine disorders, affecting 200 million people worldwide, and it is estimated that DM cases will increase dramatically in the upcoming years [6]. The IDF [2] projected that the prevalence of DM will increase significantly in most countries. For example, the Middle East and North Africa region is projected to see an increase of 110%, from 39 million people in the year 2017 to 82 million people with diabetes in the year 2045.
Without proper monitoring, different types of diabetes complications can occur in different parts of the body. Patients have an increased risk of developing various life-threatening health problems. The most common complication is Diabetic Retinopathy (DR), a general term for diabetes-related retinal disorders. DR can be divided into two main types: non-proliferative and proliferative DR. Non-proliferative DR (NPDR) is the earliest stage of retinopathy. NPDR can progress through deeper stages (mild, moderate, and severe) as more blood vessels become blocked in the retina. The more severe form is Proliferative DR (PDR), which occurs when the retina starts to grow new blood vessels. These new blood vessels often bleed, which may block vision partially or completely. Both NPDR and PDR produce different types of lesions in the retina. According to the IDF [2], DR affects over one-third of people with diabetes globally and is the leading cause of vision loss in working-age adults.
Uncontrolled blood glucose affects patients' oral health as well. However, awareness of the oral manifestations and complications of diabetes is lacking worldwide [7]. Diabetic patients may suffer orally from periodontal diseases (e.g. gingivitis) and salivary dysfunction (e.g. reduction of salivary production, changes in saliva composition, and taste dysfunction). Oral fungal and bacterial infections have also been reported in diabetic patients. Moreover, oral mucosa lesions, which include stomatitis, geographic tongue, benign migratory glossitis, fissured tongue, traumatic ulcer, lichen planus, lichenoid reaction, and angular cheilitis, may also present in the oral region [8][9][10][11].
Machine Learning Techniques
Different machine learning techniques have been adopted in diabetes research. This paper addresses these machine learning techniques and the use of eye fundus and tongue images as datasets for conducting innovative research.
Machine Learning using Eye Fundus Images
Medical imaging plays a central role in diagnosing and treating diseases, including DR. Retinal image classification has attracted attention among researchers in the field of computer vision, as it carries potential benefits that will enable personalised health care and provide physicians with high-quality diagnosis/therapy. Mahiba and Jayachandran [12] achieved high accuracy in classifying glaucoma using retinal fundus images. A total of 550 retinal images were used. The images were gathered using a ZEISS retina camera (FF450) at the Government medical college in India. A Convolutional Neural Network (CNN) and a Support Vector Machine (SVM) were adopted in the classification. Their proposed model was reported to achieve an accuracy of 98.71%.
In another novel approach, Samant and Argawal [13] attempted to use iris images, instead of retinal fundus images, to evaluate the feasibility of diabetes diagnosis. A total of 180 features were extracted to quantify the broken-tissue information of the iris. These extracted features were then grouped into three categories: first-order statistics, textural features, and wavelet features. A 10-fold cross-validation technique was applied in the study. Moreover, 6 different classifiers, including Binary Tree Model (BT), Support Vector Machine (SVM), Adaptive Boosting Model (AB), Generalized Linear Model (GL), Neural Network (NN), and Random Forest (RF), were used. The different feature selection and classification methods were used to investigate the best-performing combination for the available dataset. Samant and Argawal [13] reported that t-test feature selection yielded the highest classification accuracy for almost all classifiers. Among the different feature selections and classifiers, t-test feature selection with the RF classifier performed best, with 89.63% accuracy.
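As a rough illustration of this type of pipeline (not the authors' code), the sketch below ranks features by a two-sample t-test and evaluates a random-forest classifier with 10-fold cross-validation; the feature matrix, class labels, and the number of retained features are hypothetical placeholders.

```python
# Illustrative t-test feature ranking + random forest with 10-fold CV (synthetic data).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 180))      # 180 iris-texture features per subject (placeholder)
y = rng.integers(0, 2, size=120)     # 0 = healthy, 1 = diabetic (placeholder labels)

# Rank features by the absolute t-statistic between the two classes.
t_stats, _ = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
selected = np.argsort(-np.abs(t_stats))[:30]   # keep the 30 most discriminative features

# Note: for an unbiased estimate, feature selection should be nested inside the CV loop.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X[:, selected], y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```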
Furthermore, research has been conducted in the field of mobile computing and ophthalmology. Recently, Tan et al. [14] conducted a study focused on Age-related Macular Degeneration (AMD), a form of eye disease that affects the elderly and diabetic patients. Tan et al. [14] developed a 14-layer CNN model to automatically detect and diagnose AMD accurately. Data were collected from Kasturba Medical Hospital in India: a total of 402 normal eye fundus images, 583 retinal images with early or intermediate AMD, and 125 retinal images with wet AMD. Using the blindfold and 10-fold cross-validation strategies, the CNN model achieved 91.17% and 95.45% accuracy, respectively. Tan et al. [14] claimed that their solution is cost-effective and portable. An advantage of the CNN model is that it does not require separate feature extraction, selection, and classification steps. Moreover, the CNN model can be installed in a cloud system. Tan et al. [14] also mentioned that their solution could financially replace medical-grade SR screening equipment such as Optical Coherence Tomography.
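A compact CNN classifier along these lines might look as follows; this is a minimal Keras sketch under assumed image dimensions and class labels, not a reproduction of the 14-layer architecture reported by Tan et al.

```python
# Minimal CNN sketch for three-class AMD fundus classification (illustrative only).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),      # assumed image size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # normal / early-intermediate / wet AMD
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)  # with real data
```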
In addition, Toy et al. [15] suggested that portable smartphone-based telemedicine systems can improve access to screening, surveillance, and treatment of DR. Toy et al. [15] conducted a study using a smartphone as a screening tool to detect referral-warranted diabetic eye disease. A total of 50 adult patients with 100 eyes participated in this research, which also compared smartphone-based results with clinical assessment of diabetic eye disease by standard dilated examination. First, all patients underwent clinical assessment by ophthalmic examination. Next, patients underwent smartphone-assisted acquisition of spectacle-corrected near visual acuity and anterior/posterior segment photography. The phone was fitted with an adapter containing a macro lens and an external light source. All patients' eyes were dilated with 1 drop each of 2.5% phenylephrine and 1% tropicamide after visual acuity measurement. The results show that smartphone visual acuity was successfully measured in all eyes. Furthermore, smartphone-acquired fundus photography demonstrated 91% sensitivity and 99% specificity to detect moderate non-proliferative and worse diabetic retinopathy. Overall, this research demonstrates the potential use of a smartphone with low-cost adapters and lenses to screen for referral-warranted diabetic eye disease.
Tongue Image Analysis in Diabetes Diagnosis
Aside from retinal fundus images, tongue images have been used as well, and there have been several advancements in tongue image analysis over the past decade [16,17]. For diabetes diagnosis, Zhang et al. [16] used tongue images (with tongue body and tongue coating as features). Machine learning algorithms such as Support Vector Machine (SVM), Principal Component Analysis (PCA), and Genetic Algorithm (GA) were utilised. SVM was used to examine the tongue features, and PCA was used to reduce the dimension of the tongue features. The result showed a prediction rate of 77.83%. After parameter normalization of the tongue images, the accuracy increased to 78.77%. Moreover, GA was adopted for feature selection, which enhanced the cross-validation accuracy from 72% to 83.06%.
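The PCA-plus-SVM stage of such a workflow can be sketched as below (the GA-based feature selection step is omitted for brevity); the tongue feature matrix and labels are synthetic placeholders, not the authors' data.

```python
# Illustrative PCA + SVM pipeline for tongue-feature classification (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))       # tongue body/coating features per image (placeholder)
y = rng.integers(0, 2, size=200)     # 0 = healthy, 1 = diabetic (placeholder labels)

pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=20),    # dimensionality reduction
                     SVC(kernel="rbf", C=1.0))
print("Cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```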
Similarly, Zhang and Zhang [17] used 672 images to differentiate between healthy and diseased tongues to diagnose DM. Images were collected from a Traditional Chinese Medicine hospital, but the images were classified based on Western medical practice. A Decision Tree (DT) was used to classify five different types of tongue shapes, whereas SVM was used to classify diseases for each tongue shape utilising 13 geometric features. Furthermore, Sequential Forward Selection (SFS) was used to optimise the number of features. The average accuracy of disease classification was 76.24%.
Zhang, Kumar and Zhang [18] used tongue images to detect DM and NPDR. A total of 34 tongue features (colour, texture, and geometry) were used. K-nearest neighbour (k-NN) and Support Vector Machine (SVM) were adopted to classify the tongue features. The result showed that both machine learning algorithms achieved the same average accuracy for all 34 features. The highest average accuracy was 66.26%, using the tongue geometry features. By utilising Sequential Forward Selection (SFS) to optimise the feature selection, the average accuracies of SVM and k-NN were 80.52% and 67.87%, respectively.
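Sequential forward selection wrapped around SVM and k-NN, as described above, can be sketched with scikit-learn's SequentialFeatureSelector; the 34-column feature matrix and the number of features to keep are assumptions chosen purely for illustration.

```python
# Illustrative SFS over 34 tongue features, wrapped around SVM and k-NN (synthetic data).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 34))       # 34 colour/texture/geometry features (placeholder)
y = rng.integers(0, 2, size=150)

for name, clf in [("SVM", SVC()), ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    sfs = SequentialFeatureSelector(clf, n_features_to_select=10, direction="forward", cv=5)
    model = make_pipeline(StandardScaler(), sfs, clf)
    print(name, cross_val_score(model, X, y, cv=5).mean())
```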
Discussion
In general, the main feature of a DMS is to monitor patients' blood glucose level using either an invasive or a non-invasive approach. However, these systems do not monitor other important parts of the body, such as the retina. Therefore, this paper proposes an additional feature that extends the existing DMS's functionality.
The proposed feature captures an image of the retina and detects the presence of DR by computationally extracting and classifying the DR lesions present in the retina. The proposed feature enables users to self-check their retina, which in turn creates awareness of their retinal condition so that they can seek professional guidance if any complications exist. To achieve this research idea, Deep Learning (DL) is proposed as the ML model to detect DR. DL has achieved high confidence in identifying, localizing, and quantifying pathological features in retinal disease [19]. EyePACS and Messidor will be used as datasets to train the proposed ML model. EyePACS is a free DR image dataset that can be obtained at the Kaggle website [20]. It contains 35,126 training images and 53,576 testing images. Messidor is a publicly available dataset containing 1200 eye fundus colour numerical images [21]. Since a vast amount of data can be obtained easily, a DL model is proposed. The advantage of employing DL is its efficiency at handling large amounts of data.
On the other hand, tongue image analysis can be an auxiliary diagnosis system within the primary DMS. Tania, Lwin and Hossain [22] stated that there is inconsistency in input images, for example in image quality, image segmentation, and feature extraction; therefore, the accuracy, robustness, and reliability of such systems are unconvincing. To address this problem, it is crucial that the obtained dataset includes, but is not limited to, tongue colour and texture, shape and geometry, and oral lesions (including stomatitis, geographic tongue, and fissured tongue) to enhance feature extraction. One possible solution is to adopt Transfer Learning (TL). TL is an ML technique that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. Existing literature on tongue analysis [16][17][18] has reported promising results that can be built upon, and the accuracy can be enhanced using TL.
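A hedged sketch of the transfer-learning idea is shown below: an ImageNet-pretrained backbone is frozen and only a small classification head is trained on tongue images. The backbone choice, image size, and dataset objects are assumptions for illustration, not the authors' setup.

```python
# Illustrative transfer-learning setup for tongue-image classification.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet",
                                          include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False                    # reuse pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # diabetic vs. non-diabetic
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(tongue_train_ds, validation_data=tongue_val_ds, epochs=10)  # hypothetical datasets
```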
Conclusion
In this study, an effort was made to identify and review the ML approaches applied to eye fundus and tongue digital image processing research, particularly in DR. To date, a number of significant studies have been carried out on the classification of DR using different ML techniques. Research ideas have been proposed in this paper to extend the existing DMS by incorporating additional features into the device's functionality. Image processing can capture and analyse retina and tongue images to detect DR and diagnose DM, respectively. This proposed feature can create awareness of patients' retinal condition and prompt them to seek professional guidance if such disease exists, thus minimising the risk of progressing to later stages of DR. Tongue image analysis can serve as an auxiliary diagnosis system to detect DM. Machine learning models, particularly Deep Learning, which produces convincing results, are adopted for classifying DR. Furthermore, this paper addresses the challenge of adopting tongue image analysis due to the lack of quality datasets in this research area. Thus, Transfer Learning has been proposed to address the insufficient data and to enhance the results.
|
2021-01-01T02:16:24.057Z
|
2020-12-30T00:00:00.000
|
{
"year": 2020,
"sha1": "c7ab85ec497da58de6a632c2cf64015539ed251b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c7ab85ec497da58de6a632c2cf64015539ed251b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
}
|
3994645
|
pes2o/s2orc
|
v3-fos-license
|
Prevalence of Vitamin D Deficiency Varies Widely by Season in Canadian Children and Adolescents with Sickle Cell Disease
Sickle cell disease (SCD) is an inherited disorder caused by a variant (rs334) in the β-globin gene encoding hemoglobin. Individuals with SCD are thought to be at risk of vitamin D deficiency. Our aim was to assess serum 25-hydroxyvitamin D (25OHD) concentrations, estimate deficiency prevalence, and investigate factors associated with 25OHD concentrations in children and adolescents with SCD attending BC Children’s Hospital in Vancouver, Canada. We conducted a retrospective chart review of SCD patients (2–19 y) from 2012 to 2017. Data were available for n = 45 patients with n = 142 25OHD measurements assessed using a EUROIMMUN analyzer (EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany). Additional data were recorded, including age, sex, and season of blood collection. Linear regression was used to measure associations between 25OHD concentration and predictor variables. Overall, mean ± SD 25OHD concentration was 79 ± 36 nmol/L; prevalence of low 25OHD concentrations (<30, <40, and <75 nmol/L) was 5%, 17% and 50%, respectively. Mean 25OHD concentrations measured during Jul–Sep were higher (28 (95% confidence interval CI: 16–40) nmol/L higher, P < 0.001) compared to Jan–Mar. Vitamin D deficiency rates varied widely by season: Based on 25OHD <30 nmol/L, prevalence was 0% in Oct–Dec and 6% in Jan–Mar; based on <40 nmol/L, prevalence was 0% in Oct–Dec and 26% in Jan–Mar.
Introduction
Sickle cell disease (SCD) is an inherited disorder [1,2] caused by a variant (rs334) in the β-globin gene encoding hemoglobin. It is one of the most common and severe monogenic disorders worldwide [2,3]. Mutation of the rs334 nucleotide from a thymine to an adenine base pair produces a hydrophobic motif, which, when deoxygenated, leads to polymerization and crystallization of the hemoglobin molecule, causing a sickle shape [2,3]. The severity of SCD is determined via the extent of polymerization, as sickled cells are rigid and inflexible. Sickling leads to vaso-occlusive crises, increased erythropoiesis and hemolysis, anemia, and further associated health complications [2,3]. Due to the increase in red cell turnover and basal metabolic rate (BMR), individuals with SCD are at increased risk of multiple nutrient deficiencies [4][5][6][7].
One nutrient of concern for individuals with SCD is vitamin D. Vitamin D plays an important role in cell growth and differentiation [8], cardiovascular health, immunity, and bone health [9,10]. It has been previously reported that patients with SCD have lower concentrations of 25-hydroxyvitamin D (25OHD) and an increased prevalence of vitamin D deficiency [7,11,12], which may be exacerbated by increased erythropoiesis and BMR [7], inadequate dietary intake [4,6], and decreased nutrient absorption due to inflammatory damage of the intestinal mucosa [13,14]. Additionally, the sickle cell variant is most commonly found in individuals of African-origin [15][16][17]; thus, there is an increased risk of vitamin D deficiency in this population, as darker skin pigmentation reduces the skin's production of vitamin D in response to ultraviolet B (UVB) radiation [9].
Due to this increased risk, the Canadian Haemoglobinopathy Association recommends daily supplementation of individuals with SCD with 1000-2000 IU vitamin D, following assessment of 25OHD concentrations [18]. However, a recent Cochrane review reported on the limited evidence of the effectiveness of vitamin D supplementation on outcomes among individuals with SCD (only one study was included with moderate to low quality of evidence), and concluded that more research is needed in this area before clinical recommendations could be made [1]. We aimed to measure serum 25OHD concentrations (as a biomarker of vitamin D status), estimate the prevalence of vitamin D deficiency, and investigate factors associated with 25OHD concentrations in children and adolescents with SCD attending the British Columbia Children's Hospital in Vancouver, Canada.
Study Design and Participants
A retrospective chart review was conducted among SCD patients attending the sickle cell clinic at British Columbia Children's Hospital in Vancouver, Canada over the past 5-year period (2012-2017). Data were collected for n = 45 patients aged 2-19 y. All children and adolescents with SCD were living in British Columbia between the 49th and the 54th parallel. All children and adolescents with SCD attending the sickle cell clinic between 2012 and 2017 were included in the study. Ethical approval was received through the University of British Columbia/Children's and Women's Health Centre of British Columbia Research Ethics Board (CW17-0175/H17-00655).
Data Collection
Data were gathered through the hospital's electronic charting system, as well as through archived patient charts. Information including the patient's date of birth, sex, ethnicity, sickle cell genotype, medication history, and supplement history was collected into a database. Sickle cell genotype was categorized by homozygous sickle cell anemia (βSβS), hemoglobin SC disease (βSβC), and hemoglobin S/β-thalassemia. Any current medications and supplements were also recorded along with their corresponding doses. Weight (kg) and height (cm) measurements were recorded. Month of blood collection was noted and categorized into four seasons (groups): January to March (Jan-Mar), April to June (Apr-Jun), July to September (Jul-Sep), and October to December (Oct-Dec).
Serum 25OHD concentration was measured using a EUROIMMUN analyzer with the corresponding 25OHD Vitamin D ELISA (EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany) at the British Columbia Children's Hospital Clinical Biochemistry Lab (Vancouver, BC, Canada). Quality controls and three levels of calibrators provided by the manufacturer were run in each assay. The British Columbia Children's Hospital participates in the Vitamin D External Quality Assessment Scheme (DEQAS), an external quality control program for 25OHD measurement and has a Certificate of Proficiency during the time in which the current analyses were completed. A complete blood count was performed using a Sysmex XN hematology analyzer (Sysmex Corporation, Kobe, Japan), including measurement of hemoglobin concentration (g/L), red cell distribution width (RDW; % of red blood cell), and mean corpuscular volume (MCV; fL). Serum was assessed for zinc (µmol/L), copper (µmol/L), and selenium concentrations (µmol/L) using a NexION 350 ICP-MS (Perkin Elmer, Waltham, MA, USA). Ferritin concentration (µg/L) and alkaline phosphatase (ALP) activity (U/L) were measured using a Vitros ® 5600 (Ortho Clinical Diagnostics, Raritan, NJ, USA).
Data Analysis
Body mass index (BMI)-for-age z-scores were calculated using an online anthropometric calculator, based on the World Health Organization Growth Reference Charts [19]. Vitamin D deficiency was defined as a serum 25OHD concentration <40 nmol/L [20], while insufficiency was defined as <75 nmol/L, as per the Canadian Paediatric Society guidelines [21].
For chemical and clinical biomarkers, concentrations were reported as mean ± SD or median (interquartile range, IQR) depending on the distribution (normal or skewed, respectively). Serum 25OHD concentrations are expressed as nmol/L (to obtain values in ng/mL: Divide nmol/L by 2.5).
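The cut-offs and unit conversion used here are simple enough to express as a small helper; the sketch below only restates the thresholds given above (deficiency <40 nmol/L, insufficiency <75 nmol/L) and the nmol/L-to-ng/mL division by 2.5.

```python
# Helper reflecting the thresholds and unit conversion described in the text.
def ng_per_ml(nmol_per_l: float) -> float:
    """Convert serum 25OHD from nmol/L to ng/mL (divide by 2.5)."""
    return nmol_per_l / 2.5

def vitamin_d_status(nmol_per_l: float) -> str:
    if nmol_per_l < 40:
        return "deficient"
    if nmol_per_l < 75:
        return "insufficient"
    return "sufficient"

print(vitamin_d_status(79.0), round(ng_per_ml(79.0), 1))  # cohort mean 79 nmol/L -> 'sufficient', 31.6
```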
A multivariable linear regression model was used to measure the association between mean serum 25OHD concentration (continuous outcome variable based on all available 25OHD measurements) and independent predictor variables which were selected based on a crude vs. adjusted change-in-estimate of ≥10%, controlling for repeated-measures of individuals. The primary predictor variable was age (continuous, years) given that our population was between 2 and 19 years and it was necessary to control for the wide variation in this variable in our population. Predictor variables that were known or suspected to be associated with vitamin D status that were available (recorded in patient charts) were assessed for inclusion in the model: age, sex, hemoglobin concentration, MCV, RDW, zinc, copper, selenium, ferritin, ALP, BMI-for-age z-score, sickle cell genotype, and whether children were receiving hydroxyurea or antibiotics for asplenia prophylaxis (penicillin or amoxicillin).
An analysis of variance (ANOVA) model was used to predict the marginal means (95% CI) of 25OHD concentrations by season (for all serum 25OHD measurements recorded in the past 5-year period in all individuals), controlling for age and repeated-measures of individuals. Bonferroni-adjusted comparisons were used to detect statistical differences in 25OHD concentrations across seasons (P < 0.05). Stata/IC 15.0 (StataCorp, College Station, TX, USA) was used for statistical analyses.
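The analyses were run in Stata; as a rough Python analogue (an assumption, not the study's code), one could fit a mixed-effects model with a random intercept per patient and then compare seasons with a Bonferroni correction, as sketched below with hypothetical file and column names.

```python
# Hedged sketch of a repeated-measures season analysis (hypothetical CSV and column names).
import itertools
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("25ohd_measurements.csv")   # columns assumed: patient_id, ohd, season, age
model = smf.mixedlm("ohd ~ C(season) + age", df, groups=df["patient_id"]).fit()
print(model.summary())

# Naive pairwise season comparisons with a Bonferroni-adjusted alpha
# (ignores repeated measures; shown only to illustrate the correction).
pairs = list(itertools.combinations(df["season"].unique(), 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    _, p = stats.ttest_ind(df.loc[df["season"] == a, "ohd"], df.loc[df["season"] == b, "ohd"])
    print(a, b, "significant" if p < alpha else "n.s.")
```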
Characteristics of the Studied Population
Data were available for n = 45 children and adolescents with SCD. Of these, n = 42 had at least one 25OHD measure. Among all children, a total of n = 142 25OHD measurements were recorded in the 5-year period. The mean ± SD age of participants was 11.4 ± 5.3 y (Table 1). Overall, 47% of the studied population were male (n = 21/45). Self-reported ethnicity included African, Caribbean, Latino, or South Asian, and overall, 87% (n = 39/45) of individuals were of African-origin. Overall, 78% (n = 35/45) of individuals were diagnosed with homozygous sickle cell disease (βSβS) genotype (the remaining 22% of individuals had hemoglobin SC disease or hemoglobin S/β-thalassemia genotypes). A total of 62% (n = 28/45) of individuals were prescribed hydroxyurea (between 600 and 1000 mg/d). All individuals were recommended vitamin D supplements (between 500 and 1000 IU/d).
Factors Associated with Serum 25OHD Concentrations
Season, hemoglobin concentration, and ALP activity were significantly associated with serum 25OHD concentrations in children and adolescents (2-19 y) with SCD, after adjustment for confounding variables and repeated-measures of individuals (Table 3). Mean 25OHD concentrations assessed during the months of Jul-Sep were significantly higher (28 (95% CI: 16-40) nmol/L higher, P < 0.001), as compared to Jan-Mar. A 1 g/L increase in hemoglobin concentration was associated with a 0.4 (95% CI: 0.1, 0.8) nmol/L increase in mean serum 25OHD concentration (P = 0.01). A 1 U/L increase in ALP activity was associated with a 0.1 (95% CI: 0.1, 0.2) nmol/L increase in mean serum 25OHD concentration (P = 0.03).
Vitamin D Concentration by Season of Blood Collection
A total of 35 individuals had ≥2 serum measurements of 25OHD concentration in the 5-year period. Of those 35 individuals, n = 27 (77%) had a difference of ≥20 nmol/L between two of the measured values. The mean difference between the lowest and highest 25OHD measurements in all 35 individuals was 35.5 ± 20.2 nmol/L; overall, the individual differences ranged between 2 and 90 nmol/L (data not shown).
Mean serum 25OHD concentrations varied by season of blood collection, as did the prevalence of vitamin D deficiency and insufficiency. Prevalence of 25OHD <40 nmol/L varied by up to 26%, depending on the season of blood collection (26% in Jan-Mar and 0% in Oct-Dec). Similarly, the prevalence of vitamin D insufficiency (<75 nmol/L) varied by up to 36%, depending on the season of blood collection (38% in Jul-Sep vs. 74% in Jan-Mar) ( Table 4).
Mean 25OHD concentrations collected in Jul-Sep and Oct-Dec were similar and both significantly higher as compared to Jan-Mar, but not Apr-Jun (Bonferroni-adjusted, P < 0.0125 to account for the four-group comparison) (Figure 1).
Discussion
In this population of children and adolescents with SCD, who were predominately of African-origin, living in British Columbia, Canada, and were recommended daily vitamin D supplements (500-1000 IU/d), the prevalence of low serum 25OHD concentrations (<30, <40, and <75 nmol/L) was 5%, 17% and 50%, respectively, based on the individual's most recent measure of serum 25OHD. Serum 25OHD concentrations measured in the summer months of Jul-Sep were significantly higher (28 (95% CI: 16-40) nmol/L higher, P < 0.001) than those collected in the winter months of Jan-Mar, highlighting the wide variation in mean 25OHD concentration by season.
The mean serum 25OHD concentrations in our studied population of SCD children and adolescents were higher than those observed in a nationally representative sample of healthy Canadian children (as per the Canadian Health Measures Survey 2007-2009) [22]. However, comparisons among these two population groups are not justified given the major differences among children (e.g., disease-state of SCD, vitamin D supplementation practices, etc.). Comparatively, healthy Canadian children aged 6-11 years had mean serum 25OHD concentrations of 75.0 (95% CI: 70.3-79.9) nmol/L and those aged 12-19 years had a mean 25OHD of 68.1 (95% CI: 63.8-72.4) nmol/L. Of note, we reiterate that all individuals with SCD in our study were recommended daily vitamin D supplements and we speculate that supplementation was likely one reason for the relatively high 25OHD concentrations observed in this population.
It is well-established that winter season is associated with lower serum 25OHD concentrations [23]. At latitudes of 35° N and above, the zenith angle at which the UVB photons hit the ozone layer during the winter months (November to February) causes a reduced amount of UVB to pass through the ozone, thus leading to reduced vitamin D synthesis in the skin [24]. Further, in dark-skinned individuals, high levels of epidermal melanin compete with 7-dehydrocholesterol for UVB photons, decreasing the efficiency of vitamin D synthesis [9,24]. Thus, in our study, we were surprised to find seasonal changes in 25OHD concentrations in children of African-origin living at latitudes between 49° and 54° N.
Similar to our study, George et al. also observed that 25OHD deficiency prevalence varied by season (40% in winter, 31% in spring, 30% in summer, and 4% in autumn) in a healthy population of African adults residing in Johannesburg, South Africa (latitude: 26° S) [25], suggesting that despite the reduced vitamin D synthesis in the skin in individuals of African-origin, potential still exists for variation in 25OHD concentrations by season. Buison et al. also observed an association between season and 25OHD concentrations in children with SCD in the USA [26]. A multicenter cross-sectional survey conducted in England and the USA found that seasonal variation in 25OHD concentrations was observed in a pediatric SCD population (based on median 25OHD concentrations), but season had no effect on the prevalence of deficiency [27]. Conversely, limited seasonal variation in adult populations with SCD has been observed. We note, however, that variation in 25OHD concentrations likely depends on multiple factors such as the overall vitamin D status of the population, dietary intakes of vitamin D, the latitude at which a population resides, and sun exposure [28,29].
Another significant predictor of serum 25OHD concentration in our model was ALP activity. Typically, in vitamin D deficiency, serum ALP activity levels are elevated, as ALP is released from osteoclasts during the process of bone demineralization [30]. Despite this, we found a significant positive association between serum ALP and 25OHD concentration. As SCD affects multiple systems throughout the body, and because ALP is secreted from tissues other than bone, ALP activity could be a result of other health-related complications (such as hepatic sequestration crisis and progressive cholestasis), rather than due to bone turnover and vitamin D status [18]. Given our limited available data in this retrospective chart review, we were unable to investigate this unexpected association further.
All children were recommended vitamin D supplements (500-1000 IU/d). However, we did not have data on adherence, as this information was not collected during regular patient visits. SCD is a chronic disorder often requiring several oral medications for its clinical management (e.g., hydroxyurea, penicillin, and folic acid). Further, vitamin D supplements are not often covered through health care plans. Therefore, the cost of the supplements and associated pill burden may negatively influence adherence rates among children and adolescents in our study. In summary, due to our lack of data on adherence, we could not assess the relationship between supplement use and serum 25OHD concentrations in our study.
More research is needed to investigate the effect of vitamin D supplementation on individuals with SCD, as they are a group that is particularly vulnerable to bone disorders and poor growth trajectories. To date, there has been only one randomized controlled trial on vitamin D supplementation in children with SCD (n = 42 were initially enrolled but only n = 37 completed the six month follow up to the trial) [31]. In the Osunkwo et al. trial, vitamin D supplementation (40,000-100,000 IU/wk) caused a significant increase in 25OHD concentrations in the treatment group as compared to the placebo group; however, there was not sufficient evidence in improvement of clinical outcomes in children to guide clinical practice [31]. However, the dose of vitamin D provided in this supplementation trial was up to 10× the dose typically prescribed to children with SCD [31]. Additional studies have found that vitamin D supplementation was associated with increased 25OHD concentrations in SCD patients [32], as well as pain resolution [33], and improved bone mineral density [33,34]. However, a test of the safety and efficacy of high dose vitamin D supplementation (4000 IU vs. 7000 IU/d) in children and young adults found that neither dose was high enough to achieve the defined efficacy criterion (>32 ng/ml (equivalent to ~80 nmol/L)) in 80% of subjects after 12 weeks [35]. However, in individuals with the homozygous βS sickle cell disease genotype, significant (P < 0.05) increases in fetal hemoglobin, decreases in high-sensitivity C-reactive protein, and decreases in platelet count were observed [35].
In conclusion, more high-quality controlled trials are needed on vitamin D supplementation in this population to guide clinical practice.
Some limitations should be considered when interpreting our results. First, only two markers of vitamin D status (serum 25OHD concentrations and ALP activity) were measured and recorded in patient charts. Additional biomarkers of vitamin D status, such as parathyroid hormone and vitamin D-binding protein, would have been useful for a more comprehensive assessment of vitamin D status [36]. Moreover, information on bone mineral density and inflammatory markers would also aid in the assessment of vitamin D status. Severe vitamin D deficiency is associated with lower bone mineral density, and vitamin D-binding protein is mildly affected by inflammation (e.g., 25OHD concentrations may decrease in the presence of inflammation) [7,9]. In addition, our studied population included only 42 individuals with 25OHD measurements; thus, we may have had limited power to detect significant associations in our linear regression model. Dietary intake data of children and adolescents were not collected during clinic visits; thus, we could not estimate dietary intakes of vitamin D in this population. Future research in this population group could include components of dietary intake assessment for a more comprehensive approach to assessing vitamin D status. A total of seven children were reported to have had a prior blood transfusion, which would influence serum 25OHD measurement if the sample was taken in approximately the~3 weeks prior to the transfusion. Furthermore, a comparative group of age, sex, and ethnicity-matched controls would be useful to more rigorously compare 25OHD concentrations among children and adolescents with and without SCD living in the same geographical area.
In conclusion, the findings of this study highlight the importance of the season of blood collection when interpreting 25OHD concentrations, even among dark-skinned individuals with SCD living in northern latitudes. This information is important for clinicians when interpreting 25OHD concentrations in different seasons, as an individual classified as deficient in one month may not be deficient year-round (or vice versa).
|
2018-04-03T00:10:39.442Z
|
2018-01-30T00:00:00.000
|
{
"year": 2018,
"sha1": "23b77f7fea95352814ebb7e84eee8f69bc80d967",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/7/2/14/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f54d5373c11168377688cd734d228e1c2fd2a39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
263956401
|
pes2o/s2orc
|
v3-fos-license
|
Monte Carlo, harmonic approximation, and coarse-graining approaches for enhanced sampling of biomolecular structure
The rugged energy landscape of biomolecules and associated large-scale conformational changes have triggered the development of many innovative enhanced sampling methods, either based or not based on molecular dynamics (MD) simulations. Surveyed here are methods in the latter class - including Monte Carlo methods, harmonic approximations, and coarse graining - many of which yield valuable conformational insights into biomolecular structure and flexibility, despite altered kinetics. MD-based methods are surveyed in an upcoming issue of F1000 Biology Reports.
Introduction and context
Computer modeling and simulation offer a modern 'microscope' by which to simulate a variety of conformational events in many molecular systems and subsequently extract related mechanistic, thermodynamic, and kinetic information. The governing force fields have been extensively developed on the basis of experimental data and fundamental physical laws. The force fields define complex 'energy landscapes' that relate motion and function, as described by Frauenfelder and Wolynes [1,2], and later Onuchic, Thirumalai, and others. These foundations for protein dynamics, folding, and function led to a hierarchical notion of energy landscapes with conformational substates separated by barriers that can be as high as of the order of 100 kJ/mol. Experimental studies, such as from fluorescence spectroscopy, nuclear magnetic resonance (NMR), single-molecule experiments, or four-dimensional electron microscopy provide detailed views on biomolecular motion and confirm a wide range of the timescales involved [3]. Sampling these rugged conformational landscapes to link dynamics to function and bridge the gap between experimental timescales and atomic-level behavior remains a grand challenge.
Methods not based on molecular dynamics (MD) include three broad classes: Monte Carlo (MC) approaches, harmonic approximations, and coarse graining. Although in their own right MC methods are not always satisfactory for large systems, they form essential components of more sophisticated methods (for example, transition path sampling or Markov chain MC sampling, surveyed in the MD-based sampling methods review in an upcoming issue of F1000 Biology Reports [4]). Harmonic approximation-based methods can provide valuable insights into structure/flexibility/function relationships of complex systems, and coarse-graining approaches allow studies of key features of complex systems not amenable to regular atomistic treatments. These methods will be surveyed, with promising directions highlighted.
Major recent advances
Monte Carlo approaches
MC approaches have long been used due to their simplicity and generality. For example, they can be applied to many types of potentials, even discontinuous ones, like the square-well potential for fluid or colloidal suspensions [5], or lattice and off-lattice protein models (for example, [6]). They also allow exploration of variable conditions not amenable to fixed potentials, for instance, the conformational dependencies on ionization states of proteins, which affect side-chain protonation states, as in the electrostatically driven MC (EDMC) method of Scheraga and colleagues [7]. EDMC in combination with different dihedral angle constraints was shown to successfully fold a villin headpiece in close agreement to the NMR structure [7]. For recent reviews on MC applications to biomolecules, see [8,9]; see [10] for a recent review of MC theory.
The general premise in canonical MC sampling is to generate a set of conformations under Boltzmann statistics. Based on the Metropolis acceptance criterion, states that decrease the energy are always accepted, and those that increase the energy are accepted with a probability P = exp(−βΔU), where β = 1/k_BT and ΔU is the energy difference between the internal energy of the new and old configurations. In practice, this probability is achieved by generating a uniform random variate ran on (0,1) and accepting the new state if P > ran, in order to ensure detailed balance and the target thermal distribution. The result of this procedure is the acceptance probability P_acc = min[1, exp(−βΔU)]. (Note that, if P ≤ ran, the old state is re-counted and a new trial state is generated.) This approach allows the molecular system to overcome barriers in the vast conformational space and escape from local minima.
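The acceptance rule above maps almost directly onto code. The following minimal Python sketch (not taken from any cited package) assumes a user-supplied potential U, a Gaussian trial move, and a fixed inverse temperature.

```python
# Minimal Metropolis Monte Carlo sketch for a user-supplied potential U(x).
import numpy as np

def metropolis(U, x0, beta=1.0, step=0.1, n_steps=10_000, rng=None):
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    u = U(x)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=step, size=x.shape)    # trial move
        u_new = U(x_new)
        # Accept downhill moves always; uphill moves with probability exp(-beta*dU).
        if u_new <= u or rng.random() < np.exp(-beta * (u_new - u)):
            x, u = x_new, u_new
        samples.append(x.copy())                            # rejected moves re-count the old state
    return np.array(samples)

# Example: sample a double-well potential U(x) = (x^2 - 1)^2.
traj = metropolis(lambda x: float((x[0]**2 - 1.0)**2), x0=[1.0], beta=5.0)
```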
Because convergence of this protocol can be slow, simulated annealing (SA), a form of global optimization, has been developed so that the effective temperature is gradually lowered according to a specified cooling protocol to overcome barriers in the rugged landscape. SA can be used successfully as an extended form of MC, as well as molecular, Langevin, or Brownian dynamics simulations.
Still, selecting the appropriate trial move set and movement magnitudes for a biomolecule without high rejection rates can be challenging. Biased MC variants have been devised with trial moves, and hence conformational deformations, designed to move the system to more probable states. Therefore, the Rosenbluth, instead of the Metropolis, criterion is used to factor in the probability (Boltzmann weights) of all trial positions that were skipped in favor of the biased moves: P_acc = min[1, W(new)/W(old)]. Here, the Rosenbluth factor W is equal to the product of the sum of the Boltzmann weights of trial positions for each segment i insertion, W = Π_{i=1}^{N} Σ_k exp(−βU_i^k), where N is the number of chain segments and U_i^k is the potential energy of the kth trial of adding the ith segment. (One of these trial moves is selected for each segment i with a probability proportional to its Boltzmann weight, and this process is repeated for all segments until the entire chain is re-grown.) Thus, additional overhead is required in biased MC simulations to calculate that probability ratio.
Configurational bias MC (CB-MC) is a biased MC variant that helps 'grow' a molecule toward particular states. Traditional CB-MC 're-grows' a deleted position of a polymer at the same end in variable orientations (instead of trying out all neighboring sites randomly). This results in an exponential scaling time with polymer length to re-grow a self-avoiding lattice chain due to the high probability of segment overlaps. In certain applications, much more effective variants can be developed, as in the 'end-transfer CB-MC' for chromatin, where one end of the polymer is grown at the other end. Dramatic efficiency can be achieved (quadratic versus exponential scaling) in such applications [11].
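To make the Rosenbluth bookkeeping concrete, the sketch below re-grows a chain segment by segment, accumulating log W; it is an illustrative skeleton only, with the per-segment energy function U_seg, bond length, and number of trial directions left as user-supplied assumptions.

```python
# Sketch of Rosenbluth chain growth for configurational-bias MC.
import numpy as np

def random_unit_vectors(k, rng):
    v = rng.normal(size=(k, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def grow_chain(U_seg, n_segments, k_trials=8, beta=1.0, rng=None):
    """U_seg(trial_position, placed_positions) -> energy of adding one segment (user-supplied)."""
    rng = rng or np.random.default_rng()
    positions, log_W = [np.zeros(3)], 0.0
    for i in range(1, n_segments):
        trials = positions[-1] + random_unit_vectors(k_trials, rng)    # k trial placements
        weights = np.exp(-beta * np.array([U_seg(t, positions) for t in trials]))
        log_W += np.log(weights.sum())                                 # accumulate Rosenbluth factor
        choice = rng.choice(k_trials, p=weights / weights.sum())       # bias toward low-energy trials
        positions.append(trials[choice])
    # The regrown chain is accepted with probability min(1, W_new / W_old).
    return np.array(positions), log_W
```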
Many hybrid MC methods [10] have also been developed to marry the advantages of MC (global sampling potential) with those of MD (continuous local sampling). The success of such methods has been highly application-dependent but can be very effective, especially for small systems [12]. Finally, J-walking or temperature jumps can be introduced to accelerate sampling (similar to SA), but here multiple simulations of non-interacting systems are involved. This parallel tempering approach [13,14] periodically exchanges replicas at different temperatures with a transition probability that maintains each temperature's equilibrium ensemble distribution, P_acc = min{1, exp[(β_i − β_j)(U_i − U_j)]}, where β_i = 1/(k_B T_i) and U_i is the internal energy of replica i. In this way, barriers over rough energy landscapes can be overcome. These parallel tempering methods have been particularly effective in their MD incarnation, termed replica exchange MD; see accompanying survey [4].
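The exchange step itself is compact; the sketch below attempts swaps between neighbouring replicas using the acceptance probability quoted above. Replica states and energies are assumed to come from independently running simulations.

```python
# Sketch of the replica-exchange (parallel tempering) swap step between neighbouring replicas.
import numpy as np

def attempt_swaps(states, energies, betas, rng=None):
    rng = rng or np.random.default_rng()
    for i in range(len(betas) - 1):
        # Accept with probability min(1, exp[(beta_i - beta_j)(U_i - U_j)]).
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if delta >= 0 or rng.random() < np.exp(delta):
            states[i], states[i + 1] = states[i + 1], states[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return states, energies
```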
An MC method with advantages similar to parallel tempering in escaping from local barriers was introduced by Wang and Landau [15]. Their method performs multiple random walks in energy space, each to sample a different range of energy; the resulting information is combined to produce canonical averages for calculating thermodynamic quantities at any temperature. When performance of this energy-restricted multiple random walk protocol was compared with parallel tempering for protein conformational sampling, the two methods performed similarly and were faster by two orders of magnitude when compared with a canonical MC simulation at a low temperature; the Wang/Landau MC method was found to be easier to implement on single-processor systems, whereas parallel tempering is advantageous for multi-processor implementations [16].
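A bare-bones version of the Wang-Landau random walk is sketched below: moves are accepted with probability min(1, g(E_old)/g(E_new)), the density of states g(E) (stored as ln g) is updated at every step, and the modification factor is reduced whenever the visit histogram is roughly flat. The energy binning, flatness criterion, and proposal function are illustrative assumptions.

```python
# Minimal Wang-Landau sketch over a discretised energy range.
import numpy as np

def wang_landau(U, propose, x0, edges, f_init=1.0, f_final=1e-4, flatness=0.8, rng=None):
    rng = rng or np.random.default_rng()
    ln_g = np.zeros(len(edges) - 1)          # running estimate of ln g(E) per energy bin
    hist = np.zeros_like(ln_g)
    bin_of = lambda e: int(np.clip(np.digitize(e, edges) - 1, 0, len(ln_g) - 1))
    x, b, f = x0, bin_of(U(x0)), f_init
    while f > f_final:
        x_new = propose(x, rng)
        b_new = bin_of(U(x_new))
        if rng.random() < np.exp(ln_g[b] - ln_g[b_new]):   # min(1, g_old/g_new)
            x, b = x_new, b_new
        ln_g[b] += f                                       # update density of states estimate
        hist[b] += 1
        if hist.min() > flatness * hist.mean():            # histogram sufficiently flat
            f *= 0.5
            hist[:] = 0
    return ln_g
```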
Given recent successes [8,9,12], some advocate that recent improvements in MC methodology and increased computer memory and speed lend support for the increased application of MC algorithms for folding small biomolecules. Indeed, canonical, multi-canonical, and biased MC protocols that incorporate experimental information (knowledge-based dihedral angle distributions, hybrids involving global optimization techniques and MD, and so on) can significantly enhance the sampling of low energy configurations and reveal folding ensembles of small proteins. General and flexible MC modules have been built into standard programs like CHARMM (Chemistry at HARvard Macromolecular Mechanics) [12], with automatic optimization of step sizes and efficient combinations with minimization or MD modules. These optimized MC methods were found to outperform standard Langevin dynamics simulations in reaching folded states of small proteins.
In general, MC methods can become inefficient for large systems but can be effective for coarse-grained methods (for example, chromatin folding [17]) and as vital components of other methods (for example, transition path sampling; see accompanying survey [4]). These MC extensions and hybrids argue for further development of MC methods for biomolecular applications as a whole.
Harmonic approximations
Normal mode analysis (NMA) and principal component analysis (PCA) are based on harmonic theory. Thus, in their purest forms, spectral decompositions (diagonalization) of a mass-weighted Hessian at thermal equilibrium are performed [18]. This harmonic approximation is far from accurate at ambient temperatures when significant biomolecular fluctuations between minimum-energy regions, as well as occasional rearrangements, occur. Still, these techniques have provided valuable information on collective motions of biomolecules. Elastic networks [19][20][21] are modern extensions that forgo the computationally demanding diagonalization because the simplified bead/spring-type models are assumed by construction to reflect minimum states of the molecular system.
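The core numerical step of NMA is a single symmetric eigendecomposition; a minimal sketch, assuming the 3N x 3N Hessian at an energy minimum and the per-atom masses are supplied by the user, is shown below.

```python
# Sketch of the mass-weighted Hessian diagonalisation at the heart of normal mode analysis.
import numpy as np

def normal_modes(hessian, masses):
    m = np.repeat(masses, 3)                         # one mass per Cartesian coordinate
    mw_hessian = hessian / np.sqrt(np.outer(m, m))   # M^(-1/2) H M^(-1/2)
    eigvals, eigvecs = np.linalg.eigh(mw_hessian)    # eigenvalues ~ squared angular frequencies
    freqs = np.sqrt(np.clip(eigvals, 0.0, None))     # clip small negative values from numerical noise
    return freqs, eigvecs                            # low-frequency columns = collective modes
```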
Besides elastic networks, a successful extension of these techniques that focuses on low-frequency high-amplitude vibrational modes is called 'essential dynamics' (ED), to which key contributions have been made by Berendsen, de Groot, Amadei, and others [22]. ED can be used to simulate the dynamics in the low-dimensional space spanned by the low-frequency modes. This is accomplished by constructing the variance/co-variance matrix of positional fluctuations, projecting the original configurations onto each of the principal components, and then following the principal motions in time. There is no explicit assumption of thermal equilibrium here.
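In the same spirit, the covariance-based ED/PCA construction can be sketched in a few lines; the trajectory is assumed to be an (n_frames, 3N) array of already-aligned Cartesian coordinates.

```python
# Sketch of essential dynamics / PCA on an aligned trajectory.
import numpy as np

def essential_dynamics(traj, n_modes=3):
    fluct = traj - traj.mean(axis=0)                 # positional fluctuations about the mean
    cov = fluct.T @ fluct / (len(traj) - 1)          # variance/covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                # sort modes by decreasing variance
    modes = eigvecs[:, order[:n_modes]]
    projections = fluct @ modes                      # motion along each principal component
    return eigvals[order[:n_modes]], modes, projections
```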
The literature is vast with applications of PCA, NMA, and ED with both all-atom and coarse-grained models and in combination with various algorithms, including molecular, Langevin, and Brownian dynamics, to biomolecular conformational flexibility and dynamics. Clearly, these approaches have provided valuable insights into biomolecular flexibility and functional activity. However, the results depend strongly on the level of convergence of the sampling, which influences the results and hence the interpretations.
As one example, a PCA study of the closing conformational change of DNA polymerase β upon binding the nucleotide substrate revealed that the top three principal components involve correlations between the thumb subdomain and other regions of the protein (palm, 8-kDa) [23]. Another study, also using PCA, of 13 single-base variants of TATA-box DNA sequences bound to the TATA-binding protein [24], helped explain why these variants revealed a wide range of transcriptional efficiency despite remarkably similar structures: high-efficiency variants favored complexation motions while low-efficiency variants tended toward dissociation deformations. The dominant motions common to all complexes are shown in Figure 1A and are dissected for the protein and bound TATA-box DNA separately.
Network models have been particularly effective for applications to molecular machines like GroEL and the ribosome modeled by coarse-grained formulations. For example, in an application to the ribosome [25], collective ratchet-like motions were identified that are key in the translocation of the mRNA-tRNA complex.
A tour de force computational comparison between coarse-grained NMA and atomistic ED studies on many proteins [26] showed that both techniques are valid for describing the spectrum of the low-frequency modes and tracing protein flexibility in water, despite the fact that individual eigenvectors from NMA have small values.
An extensive PCA study of a beta-protein WW domain using the coarse-grained protein model UNRES (united residue) [27] showed that dynamics of fast, slow, and non-folding MD trajectories can be well characterized by PCA and that the top few principal components describe the dynamics processes well.
Note that, besides normal-mode-based methods, another class of harmonic approximation methods based on MD includes internal (for example, torsion-angle) MD propagation and variable transformation of classical statistical mechanical configuration partition functions. The latter will be mentioned in the forthcoming MD-based survey [4]. As for the former, internal coordinate dynamics approaches have long been attempted with the rationale that the fewer degrees of freedom (compared to Cartesian coordinates) allow for longer integration timesteps, and hence greater sampling. Indeed, peptide folding and refinement with dihedral angle MD demonstrated a computational advantage of several orders of magnitude compared to Cartesian analogues [28], as well as the capturing of folding pathways of helical peptides and local side-chain and domain dynamics [29]. Another recent study combined dihedral space MD with PCA (dPCA) in a clever way to systematically construct the low-dimensional free energy landscape from a classical MD simulation [30]. Although this analysis is interpretive, it shows that major conformational states, barriers, and reaction pathways for solvated peptides can be visualized from the constructed energy landscape.
In general, such dihedral angle MD approaches for propagating biomolecular motion have not yet caught on at large, perhaps due to both the added cost of the transformation involved in the Newtonian laws of motion and the fact that biomolecular vibrational modes are intricately coupled and hence dynamics can be critically altered by neglecting the high-frequency bond-length and bond-angle modes. However, the increase of coarse-graining models argues for their resurgence.
Coarse graining
System-specific coarse-grained methods are attractive because they drastically reduce the number of degrees of freedom.However, their formulations are highly system-dependent and require as much art as science in constructing, testing/validating, and applying them to appropriately formulated questions.Coarse graining can involve bead models, implicit solvent approximations, discrete lattice models, and general multiscale formulations.
The simplest type of coarse graining involves bead models, long used for proteins (for example, Warshel and Levitt's united residue model [31]) and supercoiled DNA (wormlike chain model of Allison and McCammon [32]), and more recently developed for RNA (for example, [33]).Such methods can lead to meaningful insights into larger-scale rearrangements, including folding, not typically amenable to all-atom simulations.However, the neglect of many details (for example, solvent/solute interactions, which at best can only be accounted for indirectly, as in Langevin or Brownian dynamics) should be considered in the biological interpretations.
In addition to bead models, coarse graining can involve implicit solvent approaches (developed by McCammon, Case, Karplus, Roux, Honig, Truhlar, and many others, and reviewed recently [34]) that reduce the number of degrees of freedom drastically, accounting for them in an average sense in the form of solvation free-energy estimates.Such treatments can be effective, especially when combined with coarse-grained models of molecular systems.However, Chen and Brooks [34] caution that current surface-area-based non-polar models have significant limitations and thus could benefit from incorporating several non-polar solvation aspects.
Lattice models also reduce the conformational degrees of freedom to a discrete set, therefore allowing (in theory) exhaustive sampling of the conformational space.Lattice models of proteins, such as those developed by Goand Taketomi [21] and by Miyazawa and Jernigan [35], are associated with ideal funnel energy landscapes: a protein chain is modeled by attractive interactions between pairs of residues that interact in the native structures and repulsive interactions of the other pairs, based on statistical data.Recently, Coluzza and Frenkel [36] also applied such lattice models to study the effect of substrates on the folding of their partner proteins.Such lattice models of polymers are typically sampled by MC methods, with tailored moves like corner-flip, crankshaft (rotation by 90°of two consecutive particles), branch rotation, and center-of-mass translation.Protein lattice models have also been extended to off-lattice protein versions.
General coarse-grained or multiscale models are most challenging to formulate and validate because the various components need to be resolved by different approaches and combined effectively.For example, simplified models of the chromatin fiber developed by the groups of Langowski [37], Schiessel [38], Schlick [39], and others, necessarily select the molecular parts to resolve in more detail and those that can be effectively approximated.For example, in studies aimed at deducing the architecture of the 30-nm chromatin fiber, the nucleosome core, histone tails, linker DNA, and linker histones are each modeled differently in a mesoscale model sampled by MC (Figure 1B).The nucleosome core -DNA wrapped around a histone octameris represented as an irregular surface with Debye-Hückel point charges that approximate the electrostatic field, as evaluated by the non-linear Poisson-Boltzmann equation; the linker DNA, histone tails, and linker histone protein are described by coarse-grained bead models.Such a chromatin model, when carefully parameterized, can reveal the dynamics/structure of each component as a function of internal and external factors [40].
An impressive example of general coarse graining is the membrane system as modeled by Arkhipov et al. [41] and highlighted in [42]. Six coarse-grained amphiphysin BAR-domain proteins placed on top of a coarse-grained planar membrane patch of lipids triggered a re-shaping of the electrostatics-dominated surface by inducing global curvature within several microseconds, in agreement with curvature dimensions observed experimentally. Other successful and insightful coarse-grained membrane systems were reported recently, revealing pore formation [43], membrane architecture [44], and protein/membrane-binding interactions [45]. Dynamics simulations of virus capsids were also pursued with a coarse-grained model to study the factors affecting capsid stability [46]. An ambitious coarse-grained model of the GroES chaperone showed that the equatorial region of the GroEL/GroES chaperonin complex creates a channel that blocks the passage of folded proteins while at the same time welcoming the passage of secondary segments of diameter up to that of an alpha helix [47].
A minimalist coarse-graining model for proteins based on a 'switching Gō model' was also developed and applied to derive a rotational mechanism of a biomolecular machine, an ATP-driven molecular motor, F1-ATPase [48].
A key question that was recently addressed by the Voth group [49] was how, in general, the coarse graining should be chosen. Their work proposed a systematic elastic-network coarse-graining approach that essentially selects beads to represent groups of atoms so that atoms in the same domain reflect the collective motions as computed by PCA [49]. These beads are determined by minimizing a residual of displacement differences. As shown for models of the HIV-1 capsid protein dimer, six- and eight-site models both approximate the system's 'essential dynamics' well, as determined by subdomain dynamics. They also showed that such coarse-grained models for peptides can visit and re-visit the folded state, unlike atomistic MD simulations, which reveal limited sampling [50].
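The following Python sketch conveys the spirit of this idea rather than the exact residual-minimisation formulation of the published method: it extracts the dominant collective modes of an atomistic trajectory and then groups atoms with similar participation in those modes, each group defining one coarse-grained bead. The random trajectory, the number of modes, and the number of beads are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder trajectory with shape (n_frames, n_atoms, 3); a real application
# would load aligned coordinates from an atomistic MD simulation instead.
rng = np.random.default_rng(0)
traj = rng.normal(size=(200, 50, 3))

n_frames, n_atoms, _ = traj.shape
flat = traj.reshape(n_frames, -1)
flat = flat - flat.mean(axis=0)               # fluctuations about the mean structure

# Principal components of the fluctuations ("essential dynamics").
_, _, vt = np.linalg.svd(flat, full_matrices=False)
modes = vt[:3].reshape(3, n_atoms, 3)         # first three collective modes

# Group atoms whose displacement amplitudes in the low-frequency modes are similar;
# each cluster becomes one coarse-grained bead.
features = np.linalg.norm(modes, axis=2).T    # (n_atoms, n_modes) amplitude pattern
beads = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
print("atoms assigned to each bead:", np.bincount(beads))
```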
A systematic parameterization of protein side chains for a coarse-grained peptide model in coarse-grained solvent was also reported by Han et al. [51], who demonstrated comparable solvation free energies with respect to atomistic models and a factor of 1,000 speedup.
Another rigorous approach to multiscale formulations was described recently by Noid et al. [52], who developed a formal statistical mechanical framework for multiscale coarse-grained models by constructing a many-body potential of mean force that generates equilibrium probability distributions for the coarse-grained sites using information from atomistic simulations. Thus, the work rigorously connects equilibrium ensembles of all-atom and multiscale models. Many interesting applications of multiscale models in various scientific fields are collected in a special volume [53].
Future directions
With advances in computer memory and speed, MC methods are enjoying increased applications in biomolecular simulations, both for atomistic and coarse-grained models. They are vital components of various enhanced sampling methods like transition path sampling (see [4]) and thus deserve further consideration and development as our molecular models and force-field potentials evolve and become more complex and hence more amenable to MC methods.
While harmonic approximation methods like PCA, NMA, ED, and elastic networks continue to add valuable insights into biomolecular flexibility and function, they are also participating in more applications with the growth of network models for molecular machines that help dissect and distill complex functional motions.
Coarse-grained models are clearly emerging as a favored approach to study either the long-time behavior of small systems like peptides, as in folding trajectories, or large supramolecular systems that are too complex to study at atomic resolution, such as the chromatin fiber, membrane systems, and viruses. Exciting new rigorous frameworks for coarse-graining general systems are also under development, and their use will likely increase.
Significantly, all of these approaches for enhanced sampling can be combined for cumulative and significant computational advantages. For example, a coarse-grained energy function with parallel tempering MC was used to study protein-protein binding through creating equilibrium ensembles of various complexes to help interpret paramagnetic relaxation enhancement experimental data [54]. The combined populations of the specific complexes and the relatively small number of distinct but non-specific complexes helped explain the existence of observed transient encounter complexes.
Of course, careful testing, parameterization, and cautious interpretation are especially warranted in these creative coarse-grained approaches. Still, all of these advances (including elastic networks, coarse-grained approaches, implicit solvation, internal-coordinate PCA, and low-frequency vibrational mode propagation) are collectively opening the way to exciting applications across a rich variety of biomolecular systems, addressing large-scale conformational changes and functional dynamics on millisecond and longer timescales and helping to close the gap between experimental and theoretical time frames.
Figure 1. Examples of principal component analysis and coarse-grained models.
|
2019-08-18T23:34:19.701Z
|
2009-06-29T00:00:00.000
|
{
"year": 2009,
"sha1": "e80e04161f872973873182dcf84cc809d56d0da5",
"oa_license": "CCBYNC",
"oa_url": "https://facultyopinions.com/prime/reports/b/1/48/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d8c08ac1f8b301b313175f3c9c8b0133a5b3515",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
14159040
|
pes2o/s2orc
|
v3-fos-license
|
Non-finite-difference algorithm for integrating Newton's motion equations
We have presented some practical consequences for molecular-dynamics simulations arising from the numerical algorithm published recently in Int. J. Mod. Phys. C 16, 413 (2005). The algorithm is not a finite-difference method and therefore it could be complementary to the traditional numerical integration of the motion equations. It consists of two steps. First, an analytic form of polynomials in some formal parameter $\lambda$ (we put $\lambda=1$ after all) is derived, which approximate the solution of the system of differential equations under consideration. Next, the numerical values of the derived polynomials in the interval, in which the difference between them and their truncated part of smaller degree does not exceed a given accuracy $\epsilon$, become the numerical solution. The particular examples we have considered represent the forced linear and nonlinear oscillator and the 2D Lennard-Jones fluid. In the latter case we have restricted ourselves to polynomials of the first degree in the formal parameter $\lambda$. Computer simulations play a very important role in modeling materials with unusual properties that contradict our intuition; a particular example could be the auxetic materials. In this case, the accuracy of the applied numerical algorithms, as well as various side effects which might change the physical reality, could become important for the properties of the simulated material.
Introduction
Recently, we have published a numerical algorithm for the Cauchy problem for ordinary differential equations [1]. We showed that it could be much more accurate, even by a few orders of magnitude, than traditional numerical methods based on finite differences. [Figure 1 caption fragment: curves (2) and (2′) starting at point (P1), (3) and (3′) starting at point (P2); N = 5 in approximations (1), (2), (3), whereas N = 4 in (1′), (2′), (3′); the number of exact digits is equal to 6, ε = 10^{-4}.] In physical applications, the requirement of one force evaluation per time step means that the most often chosen algorithm is the Verlet algorithm [2,4], being a simple third-order Taylor predictor method, or the equivalent leap-frog algorithm [3,4]. In this case, the possibility to use an algorithm that is much more accurate than the Verlet algorithm and as fast as the Verlet algorithm opens new perspectives for simulating such complex systems as, e.g., tetratic phases [5] or auxetics [6]-[8]. Apart from the problem of numerical accuracy there is also the possibility of the loss of time-reversibility in finite-difference methods [9], [10].
In the following, we discuss our algorithm with respect to integrating the motion equations. To this aim we have introduced a few examples: the forced linear and nonlinear oscillators and the 2D Lennard-Jones fluid.
Short description of the algorithm
We present the procedure [1] for finding an approximate solution of the initial value problem for a second-order differential equation (Eqs. (1) and (2)), where x′ = v, f and g are given functions, and x_0, v_0 are fixed reals. For the function f we assume that it is sufficiently smooth, so that we can write f, using the Taylor formula, in some neighborhood of (x_0, v_0) in the form of Eq. (3). We introduce a formal real parameter λ and, instead of Eqs. (1-2), we consider the family of problems of Eq. (4) with the initial data of Eq. (2). Next, we seek the approximate solution of Eq. (4) in the form x_N(τ) = ϕ_0(τ) + λ ϕ_1(τ) + ... + λ^N ϕ_N(τ) (Eq. (5)), where the ϕ_k(τ) are unknown functions of τ = t − t_0 satisfying the initial conditions of Eq. (6). Putting Eq. (5) into Eq. (4) and then comparing the coefficients of order λ^k, we get the system of differential equations for the ϕ_k which, together with the initial conditions of Eq. (6), determines the ϕ_k uniquely. The differential equations for the ϕ_k are solved by simple integration.
To illustrate this procedure we consider the mathematical pendulum problem with external force cos(t). For N = 3 we obtain the system of equations for ϕ_0, ..., ϕ_3 (with t_0 + τ substituted for t and the derivatives with respect to t replaced by derivatives with respect to τ); integrating these equations over the interval [0, τ] yields the explicit form of the ϕ_k. We claim that for sufficiently large N and λ = 1 the expression x_N(t) is a good approximation of the solution of Eqs. (1, 2) on a small interval t ∈ [t_0, t_0 + δ_1].
In practice, for a fixed N we look for the largest interval t ∈ [t_0, t_0 + δ_1] on which the truncation condition of Eq. (14) holds, where ε > 0 is a fixed accuracy. In the above example of the mathematical pendulum the condition states that |ϕ_3(t − t_0)| < ε. Next, we repeat our procedure for Eq. (1) with the new initial data, and so on. Fig. 1 is a visualization of this updating procedure for the initial data: every time the condition in Eq. (14) fails at some value of t, the current values of x and v become the new initial data and the expansion is restarted from that point. In many examples it is enough to put N = 3 to get a good approximation of the solution.
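A small numerical sketch of this stepping-and-restarting idea is given below for the forced linear oscillator x'' = −x + cos(t). For brevity it builds a plain Taylor polynomial of the solution at each restart point rather than the λ-expansion itself, and it chooses each interval so that the magnitude of the highest retained term stays below ε; the degree N, the accuracy ε, and this step-selection rule are illustrative assumptions, not the formulae of the original algorithm.

```python
import numpy as np

def taylor_coeffs(x0, v0, t0, N):
    """Taylor coefficients c[k] = x^(k)(t0)/k! for x'' = -x + cos(t)."""
    d = np.zeros(N + 1)                       # d[k] = k-th derivative of x at t0
    d[0], d[1] = x0, v0
    for k in range(N - 1):
        # k-th derivative of cos(t) at t0 equals cos(t0 + k*pi/2)
        d[k + 2] = -d[k] + np.cos(t0 + k * np.pi / 2)
    factorials = np.cumprod([1.0] + list(range(1, N + 1)))   # 0!, 1!, ..., N!
    return d / factorials

def integrate(x0, v0, t0, t_end, N=5, eps=1e-6):
    t, x, v = t0, x0, v0
    ts, xs = [t], [x]
    while t < t_end:
        c = taylor_coeffs(x, v, t, N)
        # largest tau for which the highest retained term |c_N| * tau^N stays below eps
        tau = (eps / max(abs(c[N]), 1e-300)) ** (1.0 / N)
        tau = min(tau, t_end - t)
        powers = tau ** np.arange(N + 1)
        x = float(np.dot(c, powers))                                  # new initial position
        v = float(np.dot(c[1:] * np.arange(1, N + 1), powers[:-1]))   # new initial velocity
        t += tau
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = integrate(x0=1.0, v0=0.0, t0=0.0, t_end=20.0)
print("number of restarts:", len(ts) - 1)
```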
Some features of the algorithm
While numerically integrating the equations of motion one is always fighting for numerical accuracy. In classical finite-difference methods like the Verlet algorithm, the leap-frog algorithm or the Runge-Kutta algorithm this is connected with the chosen size h of the time step. However, the smaller the step size, the larger the cumulated round-off error, because more time steps are necessary to cover a given time interval. Thus, one should use a numerical method requiring a smaller number of steps (a larger value of h) without loss of numerical accuracy. The advantage of our method is already evident in Fig. 2, where three solutions of the forced oscillator equation have been plotted: the exact one and two numerical approximations, represented by the Velocity-Verlet algorithm with step size h = 0.01 and by our polynomial x_N of degree N = 5 in the formal variable λ. In the case of the polynomial method only the dots representing the points where the condition in Eq. (14) fails for a given accuracy ε = 0.01 have been plotted. They are the only points where the numerical round-off errors contribute to the approximate solution. The remaining points (in between), which have not been plotted, do not contribute to the accumulation of round-off errors; one can always recalculate them from the exact expression for the polynomial representation of x(t).
Another advantage of our algorithm can be a relatively shorter total calculation time than in any numerically stable finite-difference method in the limit of small values of h. In Fig. 3, we have presented the dependence of the calculation time of the Velocity-Verlet algorithm on the value of h and of our polynomial algorithm on the given accuracy ε = h. The results in the figure have been obtained from programs calculating deviations of the approximate solutions from the exact one. The numerical errors arising from the assumed value of ε can be a few orders of magnitude smaller than in classic finite-difference methods. This feature has already been discussed in our paper [1], where we compared various numerical algorithms with respect to their numerical accuracy.
The next feature of the presented algorithm is that it also applies to strongly non-linear motion equations. In particular, in Fig. 4 we have presented two different attractors of the forced Duffing oscillator (the parameters have been taken from Fig. 2.20 in the book by Holden [11]) with the same values of a = 0.1 and b = 3.5 but different initial conditions. In Fig. 5, the entire trajectory starting from the initial condition and leading to one of the attractors is presented. One can use our method also for chaotic solutions of the oscillator; however, we do not discuss this possibility in this paper.
The polynomial x_2(t) for the Duffing oscillator is obtained in the same way, and in this case the accepted approximate solution should satisfy the inequality |ϕ_2(τ)| < ε for a given ε, where ϕ_2(τ) is the coefficient of λ^2 (we set the formal parameter λ = 1 after all). In our previous paper [1] we showed that our method can also be used for molecular-dynamics simulations of a large number of particles. To this end, we simulated the barometric formula in the case of an ideal gas of 1000 molecules in a gravitational field, with the gas in contact with a Nosé-Hoover thermostat [12], [13].
In all of the cases mentioned so far, the series expansion of the force (Eq. (3)) consisted of a finite number of terms. The question arises whether the method could be extended to a more general case, where the number of terms is infinite. In order to show this possibility we have considered a 2D Lennard-Jones fluid represented by a system of N particles interacting with the Lennard-Jones potential energy U(r) = 4ε[(σ/r)^12 − (σ/r)^6], where ε here denotes the Lennard-Jones well depth (not the accuracy parameter). Then, the force experienced by particle i from another particle j at a distance r_ij away is F_ij = (24ε/r_ij^2)[2(σ/r_ij)^12 − (σ/r_ij)^6](r_i − r_j). In this case the series expansion in the neighborhood of (x_0, v_0) (see Eq. 3) leads to an infinite number of terms including powers of (x_{0i} − x_{0j})^2 + (y_{0i} − y_{0j})^2.
In the case of approximating polynomials of order λ^1 the numerical algorithm is equivalent to the Velocity-Verlet algorithm; it is represented by the set of equations (22-23), in which r_i0 and v_i0 are the initial location and velocity of particle i, respectively. The accuracy control parameter ε should satisfy the condition in Eq. (26). The generalization of the algorithm to the polynomial approximation of order λ^2 becomes much more complex and is not presented in this paper. However, already the results obtained in the linear approximation (in the formal parameter λ) are promising. In Fig. 6, the kinetic energy (per particle) of 500 particles representing the 2D Lennard-Jones fluid has been plotted versus time for the Velocity-Verlet algorithm and for our polynomial approximation (Eqs. 22-23), linear in λ. In this case, the total time t_P used by the polynomial method was of the same order as the total time t_V of the Verlet method (t_P = 1.5 t_V). In order to preserve the given numerical accuracy ε, the polynomial algorithm ran according to the following steps: (i) start with τ = h_0, where h_0 = 0.001; (ii) if Eq. (26) fails, then change the value of τ by some factor, e.g. τ = τ/10; (iii) calculate the values of the polynomials r_i and v_i; (iv) go to (i).
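Since the explicit update (Eqs. 22-23) and the accuracy condition (Eq. 26) are not reproduced above, the Python sketch below uses a plausible stand-in: the first-order (in λ) polynomial step r → r + vτ + aτ²/2, v → v + aτ with Lennard-Jones forces, and a simple proxy accuracy test that shrinks τ until the highest retained term stays below ε. The particle configuration, units, and the test itself are assumptions for illustration only.

```python
import numpy as np

def lj_forces(pos, sigma=1.0, eps_lj=1.0):
    """Pairwise Lennard-Jones forces for a small 2D configuration (no cutoff, no PBC)."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = float(np.dot(rij, rij))
            sr6 = (sigma * sigma / r2) ** 3
            # F_ij = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
            fij = 24.0 * eps_lj * (2.0 * sr6 * sr6 - sr6) / r2 * rij
            f[i] += fij
            f[j] -= fij
    return f

def polynomial_step(pos, vel, mass=1.0, eps_acc=1e-4, h0=1e-3):
    """One accuracy-controlled step of the order-lambda^1 polynomial update."""
    acc = lj_forces(pos) / mass
    tau = h0
    # Proxy accuracy test (the paper's Eq. (26) is not reproduced in the excerpt):
    # shrink tau until the largest retained term 0.5*|a|*tau^2 is below eps_acc.
    while 0.5 * np.max(np.abs(acc)) * tau * tau > eps_acc:
        tau /= 10.0
    new_pos = pos + vel * tau + 0.5 * acc * tau * tau
    new_vel = vel + acc * tau
    return new_pos, new_vel, tau

# A few particles on a loose 2D grid, zero initial velocities.
pos = np.array([[0.0, 0.0], [1.2, 0.0], [0.0, 1.2], [1.2, 1.2]])
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel, tau = polynomial_step(pos, vel)
```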
The total calculation time strongly depends on the value of ε, and a higher order of the approximating polynomial (in the formal variable λ) makes larger values of τ possible.
|
2014-10-01T00:00:00.000Z
|
2007-01-18T00:00:00.000
|
{
"year": 2007,
"sha1": "abd54ceb738256a25d867912cd0be9176592a551",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nlin/0701037",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "abd54ceb738256a25d867912cd0be9176592a551",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
252636631
|
pes2o/s2orc
|
v3-fos-license
|
Using the Multidimensional AIMES to Estimate Connection-to-Nature in an Australian Population: A Latent Class Approach to Segmentation
Individuals can interact with and develop multiple connections to nature (CN) which have different meanings and reflect different beliefs, emotions, and values. Human populations are not homogeneous groups, and generalised approaches are often not effective in increasing connectedness to nature. Instead, target-group-specific approaches focusing on different segments of the population can offer a promising approach for engaging the public in pro-environmental behaviours. This research employed latent class analysis to identify subgroups of individuals in a large, representative sample (n = 3090) of an Australian region. Three groups were identified using the AIMES measure of CN with its focus on five types of connection to nature. The high CN group comprised about one-third (35.4%) of participants while the group with the lowest profile of scores contained around a fifth (18.6%) of participants. The majority (46.0%) of participants registered CN levels between the high and low groups. These classes were then regressed on predictor variables to further understand differences between the groups. The largest, consistent predictors of class membership were biocentric and social-altruistic value orientations, stronger intentions to perform pro-environmental behaviours in public (e.g., travel on public transport), the amount of time spent in nature, and the age of participants.
Introduction
Increasing interactions with and connection to nature has been a priority for many government agencies because of the positive outcomes for humans and nature [1,2]. Spending time in nature and feeling psychologically connected to nature have been associated with various wellbeing outcomes, such as positive affect, vitality, and life satisfaction [3,4]. Importantly, connection to nature has also been linked to increased engagement in pro-environmental behaviours leading to biodiversity protection [5,6]. There is, however, still much to learn about how best to foster human-nature connection [7,8].
The urban public is not a homogeneous group, and generalised approaches are often not effective in increasing connectedness to nature. Instead, target-group-specific approaches focusing on different segments of the population appear to be promising in engaging the public in pro-environmental behaviours [9,10]. Segmenting a population can help to develop more effective strategies and meet the needs of different communities [9]. The current research segments the Victorian population (Australia) along five dimensions of the AIMES connectedness to nature scale [11]. Findings reveal three distinct types of connection and provide opportunities to develop targeted interventions.
The Multidimensional AIMES Connection to Nature Scale
Building on the findings about connection-to-nature (CN) components over the last two decades, Meis-Harris, Borg and Jorgensen [11] developed and validated a multidimensional measure of CN, the AIMES. The AIMES scale showed that people differ along its five dimensions. That means the Victorian population is not a homogeneous group, and therefore one-size-fits-all approaches to increase connectedness to nature seem less effective. Instead, dividing the Victorian population along the five dimensions of CN may yield more detailed information about the different styles in which Victorians express their connectedness to nature. This segmentation can be used to develop more effective strategies that more specifically meet the needs of the different parties, as research has shown that policies are more likely to be accepted when they are designed to fit around individuals' beliefs and lifestyles [9,23]. Slater [24] describes the aim of segmentation as identifying subgroups in the population that cluster together based on their shared values and beliefs, with members of each group or segment being more similar to each other than to members of other groups or segments [24].
Segmentation has been widely applied in environmental management to maximise the efforts of communication and engagement strategies. A number of models have focused on major environmental topics such as climate change [25][26][27][28][29][30], sustainability [31], consumption [32][33][34], and conservation [35]. Some other models had a more distinct focus on the human-nature relationship [10,36,37] and environmental worldviews [9] but lack the depth of knowledge that comes when working with a multidimensional approach.
The current research employs latent class analysis to identify subgroups of individuals that are similar within groups and different between groups. Group formation is based on the items of the AIMES with its focus on five types of connection to nature. These subgroups are then regressed on key variables from the environmental social science literature to further understand differences between the groups. These objectives are reflected in the following research questions:
1. Based on the AIMES, how are latent classes defined to represent individual connections to nature?
2. What is the relationship between CN subgroup membership and key environmental variables (i.e., environmental values, time spent in nature, types of places of connection, types of activities undertaken in nature, pro-environmental intentions and behaviours) and socio-demographic characteristics?
The Study Context
Victoria is Australia's second most populated State with a population of over 6.6 million. Over the last twenty years, Victoria has experienced significant biodiversity loss, leading to the extinction of more than 50 animal and over 50 plant species [2]. Consequently, engaging people to connect to nature and to protect biodiversity is a major aim of the State's Biodiversity strategy [2].
This study thus contributes to the literature, but equally, findings can help to develop targeted interventions that more directly align with the specific sub-groups of how people connect to nature, which may lead to greater connection and pro-conservation behaviours.
Methods
Participants and sampling. We conducted an online survey in the Australian state of Victoria. The Online Research Unit (ORU), an online survey panel company, recruited a representative sample of adults (18 years or older). Stratified random sampling was employed to ensure the responding sample reflected the Victorian population in relation to gender, age group, and metro versus regional. Participants received an email invitation stating the length, incentive, and close date of the survey. The survey subject was not included to avoid sample selection biases. Email invitations were distributed to 30,753 survey panel members, with a response rate of 9.95%. For more information about data recruitment and survey development see Meis-Harris, Borg and Jorgensen [11] and the State's Biodiversity strategy [2].
The final sample consisted of n = 3090 participants ranging in age from 18 to 89 years. In line with population data from the Australian Bureau of Statistics (ABS) (National, state and territory population, March 2021. Australian Bureau of Statistics, 16 September 2021. Archived from the original on 18 September 2021. Retrieved 26 October 2021.) 50.2% of the sample identified as female (ABS: 50.9%), 23.9% lived in regional Victoria (ABS: 24.5%), and the mean age of respondents was 47 years (SD = 16.31) which is higher than the population (median = 37) as the sample did not include those aged under 18 years. This research was approved by the authors' University Human Research Ethics Committee (#14010).
To determine the number of classes, four Latent Class Analysis (LCA) models were estimated using MPlus 8. Following Asparouhov and Muthén [38], each model was run using different sets of starting values to ascertain if the loglikelihood was replicated in the bootstrap draws. The Lo-Mendell-Rubin (LMR) test and the bootstrapped likelihood test were used to identify the correct number of classes. A range of goodness-of-fit indices were also consulted to identify the best model. To add to the validity of the results, the statistical analysis was performed in two subsamples following a random split of the full sample.
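The class-enumeration logic can be sketched in a few lines of Python; the snippet below uses a Gaussian mixture from scikit-learn as a stand-in for the MPlus latent class models actually estimated, fitting one to four classes on a random split of placeholder item data and comparing log likelihood and BIC across solutions. The data matrix, the number of items, and the mixture model itself are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# Placeholder for the n x 20 matrix of AIMES item responses (real data not reproduced here).
rng = np.random.default_rng(0)
items = rng.normal(size=(3090, 20))

half1, half2 = train_test_split(items, test_size=0.5, random_state=1)

for name, sub in [("subsample 1", half1), ("subsample 2", half2)]:
    for k in range(1, 5):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=2).fit(sub)
        print(name, "classes=%d" % k,
              "loglik per case=%.2f" % gm.score(sub),
              "BIC=%.1f" % gm.bic(sub))
```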
How Are Latent Classes Defined to Represent Individual Connections to Nature?
The results from the first subsample indicated that the model specifying two classes was preferred (see Table 1). Both the LMR and Vuong-Lo-Mendell-Rubin (VLMR) tests were significant, indicating that two classes were preferred to just one class. These same tests were not significant when three classes were tested, suggesting that the model specifying two classes was again the better model. The remaining goodness-of-fit statistics decrease as more clusters are added. However, the reductions observed in the Model Log Likelihood (LL), Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and the Sample-size Adjusted BIC (SABIC) are small for models of three or more classes compared with the reductions observed in these statistics for the model with two classes. Furthermore, the decrease in entropy values over the progression of models is relatively small following the specification of two classes. Finally, the Bootstrap Likelihood Ratio Test (BLRT) was significant for all models, such that the addition of more classes results in a better model, but this is likely due to the relatively large sample size serving to increase statistical power and the Type I error rate (see Table 2). The results of the second subsample suggested that the model with three classes is preferred. In support of the 3-class model, the entropy statistic was highest and the LMR and VLMR tests were significant when three classes were compared with two, but not significant when four classes were compared with three. These results notwithstanding, the Model LL, AIC, BIC and SABIC decreased as more classes were introduced, but these reductions were small following the 2-class model.
To choose between the two models, the means of the 20 items were compared between the two and three classes identified in each subsample (see Table 3). These comparisons showed that the classes in each case were ordered rather than nominal. The two-class solutions comprised participants whose profile of scores indicated either a high or low connection to nature. Similarly, these classes were ordered as high-, medium-, and low-level connection for the models with three classes. In both subsamples, the third (medium CN) class was formed by splitting both the high and low CN groups from the two-class solution.

Table 3. Distribution across the classes (n and %) for the two- and three-class solutions in subsamples 1 and 2 (Class 1: High; Class 2: Low; Class 3: Medium).

Three classes were selected for further investigation because the third class was located between the high- and low-CN groups, offering greater discrimination between participants on an ordinal metric. The mean scores of each item were plotted to illustrate the high, medium, and low levels of CN that characterise the classes in each subsample (see Figures 1 and 2). First, the pattern of item means is virtually identical in the two random subsamples. Recall that all items were randomly presented to each participant, so the closely matching pattern of item means suggests that the items display considerable consistency between groups. Second, in both subsamples, the first three Materialism items show relatively little discrimination between the high/medium/low classes compared with the items measuring other factors. The fourth item has means that resemble the discriminating high, medium, and low pattern observed for the items measuring the Attachment, Identity, Experiential, and Spirituality factors.
The two subsamples were pooled and three classes were estimated. As expected, the Tests were conducted on the item means between subsamples and revealed a similar pattern of results in the two datasets. In subsample 1, all item means were significantly different between classes (p < 0.000) except for one Materialism item (Materialism 1) which was not significant (F = 2.63, p = 0.073). For the means in the second subsample, the item means were significantly different (p < 0.000) except for the means of the three Materialism items noted above (p > 0.05 in all three tests).
The two subsamples were pooled and three classes were estimated. As expected, the results for the whole sample were very similar to those reported for the subsamples. The percentages of participants classified into the classes were: high CN (35.4%), medium CN (46.0%), and low CN (18.6%). The entropy value indexing the classification quality of the model equalled 0.91.
What Is the Relationship between CN Subgroup Membership and Key Environmental Variables and Demographic Characteristics?
Latent Class Analysis (LCA) with auxiliary variables was conducted to identify significant predictors of class membership [39,40]. Having established three classes of participants based on their AIMES scores, the prediction of class membership was conducted using the full sample.
The predictors were those shown in Table 4 and include several demographic characteristics and psychological variables. The information in the table provides a description of each variable and the goodness-of-fit statistics for variables having multiple indicators. Factor scores were employed for multiple-indicator variables rather than the latent predictors themselves because of the large computer processing resources such models require [40,41]. For this reason, the reliability coefficients for these variables are also included in the variable descriptions. For example, the likelihood of undertaking 11 public (e.g., "volunteering in community-based activities") and private (e.g., "reducing energy use") activities over the next 12 months was measured on 7-point scales, with construct reliabilities of 0.87 and 0.68, respectively. Time spent in nature was measured with a single item, "In the last year, about how often have you generally spent time in nature?", with response options 1 (never); 2 (less than once a year); 3 (at least once a year); 4 (at least twice a year); 5 (at least once a month); 6 (at least once a fortnight); 7 (at least once a week); 8 (every other day); and 9 (every day). Prior to the analysis, zero-order correlations among the predictor variables were examined to identify examples of high collinearity. The largest correlation was between biospheric values and social-altruistic values (r = 0.62, p < 0.000). The results of the analysis indicated that the correlation between the two value orientations was influencing the sign of the coefficient for the social-altruistic values variable. The coefficient was positive when the biospheric orientation was included in the analysis but negative when it was omitted. For this reason, separate analyses were conducted using either the biospheric variable or the social-altruistic variable.
The results of the categorical latent variables multinomial logistic regressions using the 3-step procedure of Asparouhov and Muthén [40] appear in Tables 5 and 6. Table 5 presents the results of the equation with biocentric value orientation included as a predictor, while the data in Table 6 contain the social-altruistic value orientation. From Table 5, increases in the levels of several predictors resulted in decreased odds of being in the low CN cluster compared with the high CN group. That is, the odds of reporting a strong CN increased with being older, spending one's childhood outside Australia, stronger public and private intentions for pro-environmental activities, having spent more time in nature over the last 12 months, and adherence to biospheric and egocentric values. Comparing the medium CN group with individuals classified in the high CN cluster revealed that the odds of membership in the medium CN cluster decreased with higher levels of age, stronger intentions to perform public pro-environmental behaviours, time spent in nature, and stronger support for biospheric values.
The information in Table 6 shows the results of the analysis where the social-altruistic value orientation was substituted for the indicator of biospheric values. Where demographic characteristics were concerned, older participants were more likely to be classified in the high CN group, as were those whose childhood was experienced in a country other than Australia. Furthermore, males were less likely than females to be classified in the low CN class. All remaining variables had significant effects with those in the high CN class more likely to have stronger intentions for pro-environmental activities, spent more time in nature over the last 12 months, and stronger support for biospheric and egocentric values.
Comparing the medium CN group with individuals classified in the high CN cluster, only age emerged as a significant demographic predictor, with older participants less likely to be members of the medium CN class. The odds of membership in the medium CN cluster also decreased with higher levels of pro-environmental intentions, time spent in nature, and greater endorsement of social-altruistic values.
Discussion
Based on the individual items of the AIMES, three latent classes were found to represent individual connections to nature. These classes represented ordered categories of CN ranging from low to high degrees of individual connections to nature. Class membership was consistently associated with age, willingness to engage in pro-environmental activities in public, spending time in nature, and support for biospheric and social-altruistic values. These results and their implications are discussed in the following sections.
Generalisation from the Sample to the Population
This study employed the AIMES measure of CN to classify a large representative sample into three groups. These groups were ordered on a continuum ranging from lower to higher levels of CN. The high CN group comprised about one-third (35.4%) of participants while the group with the lowest profile of scores on the AIMES items contained around a fifth (18.6%) of participants. The majority (46.0%) of participants registered CN levels between the high and low groups.
When generalised to the population of adults in Victoria (Australian Bureau of Statistics, 2016) (National, state and territory population, March 2021. Australian Bureau of Statistics, 16 September 2021. Archived from the original on 18 September 2021. Retrieved 26 October 2021) approximately 1.6 million Victorian adults are likely to experience a relatively strong connection to nature. On the other hand, about 900,000 adults have relatively little connection. The large class of participants defined by medium levels of CN (about 2 million adults), as well as the smaller yet still substantial proportion of participants classified as low CN, suggests there is considerable scope to improve the range and quality of Victorians' connections to nature and, therefore, the wellbeing they might derive from these connections and the benefits to biodiversity [3,5,6].
No matter what the CN group, beliefs about the material consumption of nature were not a defining characteristic in terms of membership. This is unsurprising given that the Material Consumption factor of the AIMES is largely independent of the other factors [11,43]. Therefore, some individuals held stronger materialism beliefs across levels of CN. Within the adult Victorian population, individuals do not make sense of material connections to nature in the same way they experience connections of identity, affect, experience in nature, or spirituality [44][45][46].
One might suppose that the anthropocentric underpinnings of beliefs in the primacy of the material goods and services supplied by nature would stand in direct contrast to the more biospheric orientations of CN. For example, other work with the AIMES has shown that the Material Consumption dimension was significantly correlated with egocentric values and positively correlated with biospheric and social-altruistic values [11]. However, instead of observing high levels of materialism contributing to the formation of low levels of CN, the data indicate that individuals do not necessarily bring into relationship the exploitation of nature to satisfy their material consumption needs with their spiritual, emotional, and identity connections. In other words, a connection to nature expressed through an appreciation of its contribution to meeting material needs, over and above its intrinsic value, is a relationship fundamentally different to the identity, attachment, experiential, and spiritual connections examined here and in other CN research [1]. Given that so much of nature is exploited for the purpose of facilitating and encouraging a material connection to nature via consumption of goods and services, pro-conservation interests might renew efforts to communicate the consequences of material consumption for threatened ecosystems and its sustainable limits [47,48].
Consistent Predictors of Class Membership
Comparison of the low CN and medium CN classes with the high CN class revealed a subset of predictor variables that are consistently significant in predicting CN group membership. Participants' age was the only demographic variable that was significant across all regressions showing that those in the two lower CN groups were likely to be younger than members of the highest-scoring group on the AIMES. Furthermore, participants in the low and medium CN groups were less likely to report public pro-environmental intentions, spent less time in nature, and were less supportive of both biospheric and social-altruistic value orientations than those in the highest CN group.
Of the remaining variables in the analyses, pro-environmental intentions to perform behaviours in private contexts was a significant predictor in all regressions but the one comparing the medium and high CN classes. This is evidence suggesting that the biospheric value orientation explains the relationship between intentions and membership in these classes whereas social-altruistic values do not.
Other predictors showed a pattern of relationships with CN class membership that appeared to be independent of whether the biospheric or the social-altruistic value orientation was included in the regression analysis. For example, support for egoistic values had a small-to-moderate predictive effect only when the lowest-level CN class was compared with the highest. Egoistic value orientation was not a significant predictor of membership in the medium CN class compared with the high CN group. Similarly, the effects of whether participants spent their childhood outside Australia or within it, and of whether they identified as male or female, depended to an extent on which CN classes constituted the dependent variable. Regression coefficients tended to be larger when membership in the low and high CN classes was the focus of analysis rather than membership in the medium and high groups.
Value Orientations and CN Class Membership
That social-altruistic and biospheric values were held by individuals reporting strong connections to nature is consistent with previous research [49]. The observation that holding an egocentric value orientation can also support high levels of connection is not widely reported in the literature but is unsurprising upon reflection.
Previous research by Bouman, et al. [50] reported significant positive correlations between self-reported showering/bathing frequency in an average week (an energy conservation behaviour) and both egocentric and biospheric value orientations. Imaningsih, et al. [51] examined the effect of egocentric and biospheric values (among other variables) on outcomes such as consumers' purchasing loyalty to green products. The researchers found that both egoistic and biospheric values had positive effects on green loyalty.
In related research, Hansla, et al. [52] reported significant positive correlations among environmental concern for oneself, others and the biosphere and three-out-of-four values (i.e., achievement, benevolence, and universalism). The fourth value-power-was positively related to environmental concern for oneself, but not to either concern for others or the biosphere.
The aforementioned studies are examples of contexts in which egocentric and biospheric value positions appear to support pro-environmental behaviour and concern for the environment. Values research, however, has tended to report that biospheric and egocentric orientations operate counter to each other when predicting connection to nature and pro-environmental outcomes generally (e.g., [49][50][51][52][53]).
Evidence for this counter relationship notwithstanding, interventions have engaged egocentric beliefs and values to promote pro-environmental behaviour change. For example, environmental campaigns have sought to reduce energy and water consumption by pointing to the cost savings accruing from resource conservation. Further, the protection of endangered species is underpinned by the opportunity to continue experiencing them firsthand as much as the importance of the biodiversity of ecological systems. Environmental behaviours can be underpinned by multiple motives that span all three value orientations.
An earlier construct validity analysis of the AIMES [11] showed that egocentric values were statistically unrelated to all AIMES dimensions except Materialism. A relationship between egocentrism and connection to nature via its material consumption is consistent with a good deal of thinking in environmental behaviour research [46,54,55] and sustainable consumption [47]. Baird, Dale, Holzer, Hutson, Ives and Plummer [14], for example, refer to materialism as representing "shallow connections to nature" (p. 3), recognising its basis in anthropocentrism and distinguishing it from "deeper" forms of connection such as cognitive and emotional connections.
Recall that the Material consumption dimension of the AIMES was not a strong contributor to the formation of the CN classes and the classification of participants. Therefore, the effect of egocentric values that might counter the effects of biospheric and social-altruistic value orientations was diminished or negated. Without the influence of material consumption in the formation of the classes, the effect of egocentric values that did emerge supported, rather than contradicted, the effects of the other two value orientations.
While this explanation is speculative at this stage, future research might attempt to focus on how the relationships between value positions can influence environmental variables of interest. A study by de Groot and Steg [56] began research along these lines and found that conflict between altruistic and biospheric goals provided a unique source of influence on pro-environmental intentions. Future research along these lines may provide insights into not only how the level of support for a particular value position can affect behaviour, but also how its relationship with other value orientations may offer a distinct motivational basis.
Conclusions
The AIMES measure of CN was used to classify a large representative sample into three latent classes which were ordered on a continuum ranging from lower to higher levels of CN. Beliefs about the primacy of nature as an input to material goods and their consumption was the only dimension of the AIMES that did not contribute to the formation of the latent classes. This suggests that materialism as measured by the AIMES is a relatively distinct type of connection compared to the other types of connections studied in previous CN research.
On the basis of the representative sample employed in the research, it was possible to generalise the sample statistics to the wider population of adults aged 18 years or more. The majority of the population were classified as having medium and high levels of connection to nature, and classification into the different classes was consistently predicted by public pro-environmental intentions, time in nature, and both biospheric and social-altruistic value orientations.
Future Research
Future research might further explore the validity of the AIMES and multidimensional approaches to CN in general [13]. Riechers, Pătru-Dușe and Balázsi [15], for example, showed that a multidimensional appreciation of CN was required to capture the diversity of effects associated with different types of landscapes and the human relationships associated with them. Baird, Dale, Holzer, Hutson, Ives and Plummer [14] also benefitted from a multidimensional approach when evaluating environmental education programs. Further validation and development of the AIMES provides researchers with the benefit of understanding key ways individuals can connect with nature, and can facilitate a comparison of research results across different research contexts in which CN has been measured by the same instrument.
This study has provided further evidence that individuals do not make sense of material connections to nature in the same way they experience connections of identity, affect, experience in nature, or spirituality. Baird, Dale, Holzer, Hutson, Ives and Plummer [14] have suggested that the five types of connections developed by Ives, Abson, von Wehrden, Dorninger, Klaniecki and Fischer [13] might be conceived as systematically varying from shallow to deeper connections, and that future research might seek to test this position. Our results support this need for future research and suggest that it be examined in the context of different types of pro-environmental behaviours (e.g., dematerialisation behaviours) varying in the level of commitment required to perform them and involve subpopulations that are likely to prioritise different connections to nature (e.g., pro-environmental versus pro-development groups).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the participants to publish this paper.

Data Availability Statement: Data available at Open Science Framework: Data from: Victorians Valuing Nature Foundations Survey.
|
2022-10-01T15:13:44.186Z
|
2022-09-28T00:00:00.000
|
{
"year": 2022,
"sha1": "eac67b8399a7269062cb351736efc63952b4249b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/19/12307/pdf?version=1664354906",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce69a608dc7f308981c537b4bb98800f2b564de6",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
7099571
|
pes2o/s2orc
|
v3-fos-license
|
Learning Reductions that Really Work
We provide a summary of the mathematical and computational techniques that have enabled learning reductions to effectively address a wide class of problems, and show that this approach to solving machine learning problems can be broadly useful.
Introduction
In a reduction, a complex problem is decomposed into simpler subproblems so that a solution to the simpler subproblems gives a solution to the complex problem. When this is a simple process, it is conventionally called "programming", while "reduction" is reserved for more difficult applications of this technique. Computational complexity theory, for example, relies on reductions in an essential fashion to define computational complexity classes, most canonically NP-hard problems. In machine learning, reductions are often used constructively to build good solutions to hard problems.
The canonical example here is the one-against-all reduction, which solves k-way multiclass classification via reduction to k base predictions: The ith predictor is trained to predict whether a label is class i or not. Figure 1 shows how this reduction works experimentally, comparing the multiclass loss to the average loss of base predictors. There are no base predictors with small loss inducing a large multiclass loss, as ruled out by theory. Are learning reductions an effective approach for solving complex machine learning problems? The answer is not obvious, because there is a nontrivial representational concern: maybe the process of reduction creates "hard" problems that simply cannot be solved well? A simple example is given by 3-class classification with a single feature and a linear predictor. If class i has feature value i, then the 2 versus 1, 3 classifier necessarily has a large error rate, causing the one-against-all reduction to perform poorly. The all-pairs reduction, which learns a classifier for each pair of labels, does not suffer from this problem in this case.

Figure 1: Multiclass loss rate compared to average base predictor loss rate for the one-against-all reduction applied to 2-class classification datasets. (Normally, one-against-all is applied to problems with more than 2 classes. However, we expect the relative scaling of base predictor loss and multiclass loss to vary with the number of classes, so it was desirable to fix the number of classes for this plot.)

Although this concern is significant, it is of unclear strength, as there are many computationally convenient choices made in machine learning, such as conjugate priors, proxy losses, and sigmoid link functions. Perhaps the representations created by natural learning reductions work well on natural problems? Or perhaps there is a theory of representation respecting learning reductions?
We have investigated this approach to machine learning for about a decade now, and provide a summary of results here, addressing several important desiderata: 1. A well-founded theory for analysis. A well-founded theory makes the approach teachable, and provides a form of assurance that good empirical results should be expected, and carry over to new problems.
2. Good statistical performance. In addition, the theory should provide some effective guidance about which learning algorithms are better than other learning algorithms.
3. Good computational performance. Computational limitations in machine learning are often active, particularly when there are large amounts of data. This is critical for learning reductions, because the large data regime is where sound algorithmics begin to outperform clever representation and problem understanding.
4. Good programmability. Programmability is a nontraditional concern for machine learning which can matter significantly in practice.
5. A unique ability. Learning reductions must either significantly exceed the performance of existing systems on existing problems or provide a means to address an entirely new class of problems for the effort of mastering the approach to be justified.
Here we show that all the above criteria have now been met.
Strawman one-against-all
A common approach to implementing one-against-all for k-way multiclass classification is to create a script that processes the dataset k times, creating k intermediate binary datasets, then executes a binary learning algorithm k times, creating k different model files. For test time evaluation, another script then invokes a testing system k times for each example in a batch. The multiclass prediction is the label with a positive prediction, with ties broken arbitrarily. A careful study of learning reductions reveals that every aspect of this strawman approach can be improved.
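A minimal in-memory version of this strawman can be written in a few lines of Python; scikit-learn's logistic regression is an assumed stand-in for the base binary learner, and the tie-breaking here simply takes the first positive prediction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_one_against_all(X, y, num_classes):
    """Strawman one-against-all: train one binary model per class."""
    models = []
    for c in range(num_classes):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, (y == c).astype(int))      # class c versus the rest
        models.append(clf)
    return models

def predict_one_against_all(models, X):
    """Predict a class whose binary model says 'positive'; ties broken by index order."""
    votes = np.stack([m.predict(X) for m in models], axis=1)
    return votes.argmax(axis=1)
```

With k classes this trains and evaluates k models, mirroring the k passes over the data that the scripted strawman performs with intermediate files.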
Organization
Section 2 discusses the kinds of reduction theory that have been developed and found most useful.
Section 3 discusses the programming interface we have developed for learning reductions. Although programmability is a nonstandard concern in machine learning applications, we have found it of critical importance. Creating a usable interface which is not computationally constraining is critical to success. Section 4 discusses several problems for which the only known solution is derived via a reduction mechanism, providing evidence that the reduction approach is useful for research. Section 5 shows experimental results for a particularly complex "deep" reduction for structured prediction, including comparisons with many other approaches.
Together, these sections show that learning reductions are a useful approach to research in machine learning.
Reductions theory
There are several natural learning reduction theories on a spectrum from easy to powerful, which we discuss in turn.
Error reductions
In an error reduction, a small error rate on the created problems implies a small error rate on the original problem. When multiple base problems are created, we measure the average error rate over the base problems. Since it is easy to resample examples or indicate via an importance weight that one example is more important than the other, nonuniform averages are allowed.
For example, in the strawman one-against-all reduction, an average binary classification error rate of ε implies a multiclass error rate of at most (k − 1)ε (see [4,29]). A careful examination of the analysis shows how to improve the error-transformation properties of the reduction. The first observation is that it helps to break ties randomly instead of arbitrarily. The second observation is that, in the absence of other errors, a false negative implies only a 1/k probability of making the right multiclass prediction, while for a false positive this probability is 1/2. Thus modifying the reduction to make the binary classifier more prone to output a positive, which can be done via an appropriate use of importance weighting, improves the error transform from (k − 1)ε to roughly (k/2)ε. As predicted by theory, both of these elements yield an improvement in practice [11].
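Both improvements are easy to retrofit onto the strawman sketch above; the Python snippet below adds an importance weight on positive examples (the specific weight value is an arbitrary placeholder, not the value implied by the analysis) and breaks ties among positive predictions uniformly at random.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_weighted_oaa(X, y, num_classes, positive_weight=2.0):
    """One-against-all with extra weight on positive examples of each binary problem."""
    models = []
    for c in range(num_classes):
        target = (y == c).astype(int)
        weights = np.where(target == 1, positive_weight, 1.0)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, target, sample_weight=weights)
        models.append(clf)
    return models

def predict_random_ties(models, X, rng=np.random.default_rng(0)):
    """Break ties among positive binary predictions uniformly at random."""
    votes = np.stack([m.predict(X) for m in models], axis=1)
    preds = np.empty(len(X), dtype=int)
    for n, row in enumerate(votes):
        positives = np.flatnonzero(row)
        candidates = positives if len(positives) > 0 else np.arange(len(row))
        preds[n] = rng.choice(candidates)
    return preds
```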
Another example of an error reduction for multiclass classification is based on error-correcting output codes [23,29].

Figure 2: Mnist experimental results for one-against-all reduced to binary classification and one-against-all reduced to squared loss regression while varying training set size. The x-axis is the 0/1 test loss of the induced subproblems, and the y-axis is the 0/1 test loss on the multiclass problem. The classifiers are linear in pixels. The regression approach (a regret transform) dominates the binary approach (an error transform).
A valid criticism of error reductions is that the guarantees they provide become vacuous if the base problems they create are inherently noisy. For example, when no base binary classifier can achieve an error rate better than 2/k, the one-against-all guarantee above is vacuous.
Regret reductions
Regret analysis addresses this criticism by analyzing the transformation of excess loss, or regret.
Here regret of a predictor is the difference between its loss and the minimum achievable loss on the same problem. In contrast to many other forms of learning theory, the minimum is over all predictors.
A reduction that translates any optimal (i.e., no-regret) solution to the base problems into an optimal solution to the top-level problem is called consistent. Consistency is a basic requirement for a good reduction. Unfortunately, error reductions are generally inconsistent. To see that one-against-all is inconsistent, consider three classes with true conditional probabilities 1/2 − 2δ, 1/4 + δ, and 1/4 + δ. The optimal base binary prediction is always 0, resulting in multiclass loss of 2/3. The corresponding multiclass regret is 1/6 − 2δ, which is positive for any δ < 1/12. Strawman one-against-all can be easily made consistent by reducing to squared-loss regression instead of binary classification. The multiclass prediction is made by evaluating the learned regressor on all labels and predicting with the argmax. As shown below, this regression approach is consistent. It also resolves ties via precision rather than via randomization, as seems more likely to be effective in practice. Figure 2 illustrates the empirical superiority of reducing to regression rather than binary classification for the Mnist [41] data set.
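A minimal sketch of this consistent variant follows: reduce to squared-loss regression with 0/1 targets and predict the argmax. The regressor interface (learn(x, target) and predict(x)) is again hypothetical.

```python
class RegressionOneAgainstAll:
    """One-against-all reduced to squared-loss regression: one regressor per
    class trained on 0/1 targets, multiclass prediction by argmax (sketch)."""

    def __init__(self, k, make_regressor):
        self.k = k
        self.reg = [make_regressor() for _ in range(k)]

    def learn(self, x, y):
        for c in range(self.k):
            self.reg[c].learn(x, 1.0 if y == c else 0.0)  # regress toward P(y = c | x)

    def predict(self, x):
        scores = [self.reg[c].predict(x) for c in range(self.k)]
        return max(range(self.k), key=lambda c: scores[c])  # ties resolved by precision
```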
We will analyze the regret transform of this approach for any fixed x, taking expectation over x at the end. Let f(x, a) be the learned regressor, predicting the conditional probability of class a on x. Let p_a be the true conditional probability. The squared-loss regret of f on the predicted class a is (p_a − f(x, a))². Similarly, the regret of f on the optimal label a* = argmax_a p_a is (p_a* − f(x, a*))². To incur multiclass regret, we must have f(x, a) ≥ f(x, a*). The two regrets are convex, and their sum is minimized when f(x, a) = f(x, a*) = (p_a + p_a*)/2. The corresponding squared-loss regret suffered by f on a and a* combined is (p_a* − p_a)²/2. Since the regressor does not need to incur any regret on the other labels, it can pay reg(f) = (p_a* − p_a)²/(2k) in average squared-loss regret to induce a multiclass regret of p_a* − p_a on x. Solving for the multiclass regret in terms of reg(f) shows that the multiclass regret of this approach is bounded by √(2k · reg(f)). Since the adversary can play this optimal strategy, the bound is tight.
Although a regret reduction is more desirable than an error reduction, the typical square root dependence introduced when analyzing regret is not desirable. Nonetheless, moving from an error reduction to a regret reduction is often empirically beneficial (see Figure 2).
There are many known regret reductions for such problems as multiclass classification [33,47], cost-sensitive classification [12,39], and ranking [2,3]. There is also a rich body of work on so-called surrogate regret bounds. It is common to use some efficiently minimizable surrogate loss instead of the loss one actually wishes to optimize. A surrogate regret bound quantifies the resulting regret in terms of the surrogate regret [2,7,55]. These results show that standard algorithms minimizing the surrogate are in fact consistent solutions to the problem at hand. In some cases, commonly used surrogate losses actually turn out to be inconsistent [27].
Adaptive reductions
Adaptive reductions create learning problems that are dependent on the solution to other learning problems. In general, adaptivity is undesirable, since conditionally defined problems are more difficult to form and solve well: they are less amenable to parallelization, and more prone to overfitting due to propagation and compounding of errors.
In some cases, however, the best known reduction is adaptive. One such example is logarithmic time multiclass prediction, discussed in section 4.4. All known unconditional log-time approaches yield inconsistency in the presence of label noise [12].
The average base regret is still well defined as long as there is a partial order over the base problems, i.e., each base learning problem is defined given a predictor for everything earlier in the order.
Boosting [49] can be thought of as an adaptive reduction for converting any weak learner into a strong learner. Typical boosting statements bound the error rate of the resulting classifier in terms of the weighted training errors ε_t on the distributions created adaptively by the booster. The ability to boost is rooted in the assumption of weak learnability: a weak learner gets a positive edge over random guessing for any distribution created by the booster. As with any reduction, there is a concern that the booster may create "hard" distributions, making it difficult to satisfy the assumption. Although linear separability with a positive margin implies weak learnability [49], linear separability is still a strong assumption. Despite this concern, boosting has been incredibly effective in practice.
Optimization oracle reductions
When the problem is efficiently gathering information, as in active learning (discussed in section 4.3) or contextual bandit learning (discussed in section 4.2), previous types of reductions are inadequate because they lack any way to quantify progress made by the reduction as examples are used in learning.
Suppose we have access to an oracle which, when given a dataset, returns a minimum-loss predictor from a class of predictors H with limited capacity. The form of the learning problem solved by the oracle can be binary classification, cost-sensitive classification, or any other reasonable primitive. Since many supervised learning algorithms approximate such an oracle, these reductions are immediately implementable.
Since the capacity of H is limited, tools from statistical learning theory can be used to argue about the regret of the predictor returned by the oracle. Cleverly using this oracle can provide solutions to learning problems which are exponentially more efficient than simpler, more explicit algorithms for choosing which information to gather [1,9].
Interfaces for learning reductions
A good interface for learning reductions should simultaneously be performant, generally useful, easy to program, and eliminate systemic bugs.
The Wrong Way
The strawman one-against-all approach illustrates interfacing failures well. In particular, consider an implementation where a binary learning executable, treated as a black box, is orchestrated to do the one-against-all approach via shell scripting.
1. Scripting implies a mixed-language solution, which is relatively difficult to maintain or understand.
2. The approach may easily fail under recursion. For example, if another script invokes the one-against-all training script multiple times, it is easy to imagine a problem where the saved models of one invocation overwrite the saved models of another invocation. In a good programming approach, these sorts of errors should not be possible.
3. The transformation of multiclass examples into binary examples is separated from the transformation of binary predictions into multiclass predictions. This substantially raises the possibility of implementation bugs compared to an approach which has encoder and decoder implemented either side-by-side or conformally.
4. For more advanced adaptive reductions, it is common to require a prediction before defining the created examples. Having a prediction script operate separately creates a circularity (training must succeed for prediction to work, but prediction is needed for training to occur) which is extremely cumbersome to avoid in this fashion.
5. The training approach is computationally expensive since the dataset is replicated k times. Particularly when datasets are large, this is highly undesirable.
6. The testing process is structurally slow, particularly when there is only one test example to label. The computational time is Ω(pk), where p is the number of parameters in a saved model and k is the number of classes, simply due to the overhead of loading a model.
7. Even if all models are loaded into memory, the process of querying each model is inherently unfriendly to a hardware cache.
A Better Way
Our approach [40] eliminates all of the above interfacing failures, resulting in a system which is general, performant, and easily programmed, while eliminating bugs due to antimodular implementation. We require a base learning algorithm which presents two online interfaces:
1. Predict(example e, instance i) returns a prediction for base problem i and is guaranteed not to update the internal state.
2. Learn(example e, instance i) returns the same result as Predict, but may update the internal state.
Note that although we require an online learning algorithm interface, there is no constraint that online learning must occur-the manner in which state is updated by Learn is up to the base learning algorithm. The interface certainly favors online base learning algorithms, but we have an implementation of LBFGS [43] that functions as an effective (if typically slow) base learning algorithm.
Since reductions are composable, this interface is both a constraint on the base learning algorithm and a constraint on the learning reduction itself: the learning reduction must define its own Predict and Learn interfaces.
It is common for reductions to have some state which is summarized in a reduction-specific data structure. Every reduction requires a base learner, which may either be another reduction or a learning algorithm, and typically some reduction-specific state such as the number of classes k for the one-against-all reduction. In a traditional object-oriented language, these arguments can be provided to the constructor of the reduction and encapsulated in the reduction object. In a purely functional language, the input arguments can be augmented with an additional state variable (and the return value of Learn augmented with an updated state variable).
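As an illustration of how these pieces fit together, the following Python sketch shows a one-against-all reduction exposing the same Predict/Learn shape it requires of its base learner; the method names and the way the regression target is passed are simplifications, not the actual interface.

```python
class OAAReduction:
    """Sketch of a learning reduction over a base learner that exposes
    predict(example, instance) and learn(example, target, instance)."""

    def __init__(self, k, base):
        self.k = k          # reduction-specific state (number of classes)
        self.base = base    # another reduction or a base learning algorithm

    def predict(self, example):
        # Query each base problem; guaranteed not to update any state.
        scores = [self.base.predict(example, instance=i) for i in range(self.k)]
        return max(range(self.k), key=lambda i: scores[i])

    def learn(self, example, label):
        # Same call pattern, but the base learner may update its state.
        for i in range(self.k):
            self.base.learn(example, target=1.0 if label == i else 0.0, instance=i)
        return self.predict(example)
```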
The above interface addresses all the previously mentioned problems except for problem (7). Consider a dense linear model with sparse features. In this situation, the speed of testing is commonly limited by the coherency of memory access due to caching effects. To use coherent access, we stripe models over memory. In particular, for a linear layout with 4 models, the memory for model i is at address i, i + 4, i + 8, ... This layout is nearly transparent to learning reductions: it is achieved by multiplying feature ids by the total number of models at the top of the reduction stack, using the instance to define offsets to feature values as an example descends the stack, and then requiring base learning algorithms to access example features via a foreach_feature() function that transparently imposes the appropriate offset for a model.
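A minimal sketch of the striping scheme follows: weights for the same feature across models are interleaved so that evaluating one example touches contiguous memory. This is a simplified version of the offsetting described above (the model index is passed directly rather than accumulated down a reduction stack), and the names are illustrative.

```python
import numpy as np

class StripedLinearModels:
    """num_models linear models stored in one interleaved weight vector."""

    def __init__(self, num_features, num_models):
        self.num_models = num_models
        self.w = np.zeros(num_features * num_models)

    def _index(self, feature_id, model):
        # Weight for (feature, model) lives at feature_id * num_models + model,
        # so the weights needed for one example are adjacent in memory.
        return feature_id * self.num_models + model

    def foreach_feature(self, example, model, fn):
        # example is a sparse list of (feature_id, value) pairs.
        for fid, val in example:
            fn(self._index(fid, model), val)

    def predict(self, example, model):
        total = 0.0
        def accumulate(idx, val):
            nonlocal total
            total += self.w[idx] * val
        self.foreach_feature(example, model, accumulate)
        return total
```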
Uniquely solved problems
Do reductions just provide a modular alternative that performs as well as other, direct methods? Or do they provide solutions to otherwise unsolved problems? Rephrased, are learning reductions a good first tool for developing solutions to new problems?
We provide evidence that the answer is 'yes' by surveying an array of learning problems which have been effectively addressed only via reduction techniques so far.
A common theme throughout these problems is computational efficiency. Often there are known inefficient approaches for solving intractable problems. Using a reduction approach, we can isolate the inefficiency of optimization, and remove other inefficiencies, often resulting in exponential improvements in efficiency in practice.
Efficient Contextual Bandit Learning
In contextual bandit learning, a learning algorithm needs to be applied to exploration data to learn a policy for acting in the world. A policy is functionally equivalent to a multiclass classifier that takes as input some feature vector x and produces an action a. The term "policy" is used here because the action is executed: perhaps a news story is displayed, or a medical treatment is administered. Exploration data consists of quads (x, a, r, p) where x is a feature vector, a is an action, r is a reward, and p is the probability of choosing action a on x.
Efficient non-reduction techniques exist only for special cases of this problem [35]. All known techniques for the general setting [10,28,56] use reduction approaches.
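A common starting point for such reductions is an unbiased, inverse-propensity-scored transformation of the exploration data into cost-sensitive examples. The sketch below shows this standard construction; it illustrates the generic idea rather than the specific algorithm of any one cited paper.

```python
def bandit_to_cost_sensitive(quads, k):
    """Turn exploration quads (x, a, r, p) into cost-sensitive multiclass
    examples via inverse-propensity scoring (illustrative sketch)."""
    examples = []
    for x, a, r, p in quads:
        # The observed action gets cost -(r / p); all other actions get 0.
        # In expectation over the logging policy, each action's cost equals
        # the negative of its true expected reward, so the estimate is unbiased.
        costs = [0.0] * k
        costs[a] = -r / p
        examples.append((x, costs))
    return examples
```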
Efficient Exploration in Contextual Bandits
Effectively doing contextual bandit learning in the online setting requires efficiently creating a good probability distribution over actions. There are approaches to this problem based on exponential weights [5] with a running time linear in the size of the policy set. Can this be done more efficiently?
The answer turns out to be "yes" [1]. In particular, it is possible to reduce the problem to O(√T) instances of cost-sensitive classification which are each trained to find good-but-different solutions. This is an exponential improvement in computational complexity over the previous approach.
Efficient Agnostic Selective Sampling
A learning algorithm with the power to choose which examples to label can be much more efficient than a learning algorithm that passively accepts randomly labeled examples. However, most such approaches break down if strong assumptions about the nature of the problem are not met.
The canonical example is learning a threshold on the real line in the absence of any noise. A passive learning approach requires O(1/ε) samples to achieve error rate ε, while selective sampling requires only O(ln(1/ε)) samples using binary search. This exponential improvement, however, is quite brittle: a small amount of label noise can yield an arbitrarily bad predictor.
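For intuition, here is a sketch of the noise-free case: binary search over the interval locates the threshold to accuracy eps with O(log(1/eps)) label queries.

```python
def learn_threshold(label_query, lo=0.0, hi=1.0, eps=1e-3):
    """Active learning of a threshold on [lo, hi] by binary search in the
    noise-free setting (sketch); label_query(z) returns True iff z is positive."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if label_query(mid):
            hi = mid    # the threshold is at or below mid
        else:
            lo = mid    # the threshold is above mid
    return (lo + hi) / 2.0

# Example: recovers a threshold near 0.37 with ~10 queries instead of ~1000 random labels.
print(learn_threshold(lambda z: z >= 0.37))
```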
Inefficient approaches for addressing this brittleness statistically have been known [6,30]. Is it possible to benefit from selective sampling in the agnostic setting efficiently?
Two algorithms have been created [9,31] which reduce active learning for binary classification to importance-weighted binary classification, creating practical algorithms. No other efficient general approaches to agnostic selective sampling are known.
Logarithmic Time Classification
Most multiclass learning algorithms have time and space complexities linear in the number of classes when testing or training. Furthermore, many of these approaches tend to be inconsistent in the presence of noise: they may predict the wrong label regardless of the amount of data available when there is label noise.
It is easy to note that logarithmic time classification may be possible since the output need only be O(log k) bits to uniquely identify a class. Can logarithmic time classification be done in a consistent and robust fashion?
Two reduction algorithms [12,16] provide a solution to this. The first shows that consistency and robustness can be achieved with a logarithmic time approach, while the second addresses learning of the structure directly.
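To see why logarithmic time is plausible, consider the generic construction sketched below: a balanced binary tree over the labels with one binary classifier per internal node, so prediction costs O(log k) binary evaluations. This is only a sketch of the idea, not the consistent and robust algorithms of [12,16].

```python
class LabelTree:
    """Balanced binary tree of binary classifiers for O(log k) multiclass prediction."""

    def __init__(self, labels, make_binary_learner):
        self.labels = list(labels)
        if len(self.labels) > 1:
            mid = len(self.labels) // 2
            self.clf = make_binary_learner()   # decides left vs. right subtree
            self.left = LabelTree(self.labels[:mid], make_binary_learner)
            self.right = LabelTree(self.labels[mid:], make_binary_learner)

    def predict(self, x):
        node = self
        while len(node.labels) > 1:            # O(log k) binary predictions
            node = node.right if node.clf.predict(x) == 1 else node.left
        return node.labels[0]
```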
Learning to search for structured prediction
Structured prediction is the task of mapping an input to some output with complex internal structure. For example, mapping an English sentence to a sequence of part of speech tags (part of speech tagging), to a syntactic structure (parsing) or to a meaning-equivalent sentence in Chinese (translation). Learning to search is a family of approaches for solving structured prediction tasks and encapsulates a number of specific algorithms (e.g., [18,20,22,24,25,32,44,48,50,53,54]). Learning to search approaches (1) decompose the production of the structure output in terms of an explicit search space (states, actions, etc.); and (2) learn hypotheses that control a policy that takes actions in this search space.
We implemented a learning to search algorithm (based on [20,48]) that operates via reduction to cost-sensitive classification which is then further reduced to regression. This algorithm was then extensively tested against a suite of many structured learning algorithms which we report here (see [21] for full details). The first task we considered was a sequence labeling problem: part-of-speech tagging based on data from the Wall Street Journal portion of the Penn Treebank (45 labels, evaluated by Hamming loss, 912k words of training data). The second is a sequence chunking problem: named entity recognition using the CoNLL 2003 dataset (9 labels, macro-averaged F-measure, 205k words of training data).
We use the following freely available systems/algorithms as points of comparison: CRF++, CRF SGD, Structured Perceptron, Structured SVM, DEMI-DCD, an unstructured VW classifier (VW Classification), and two variants of our VW Search implementation, which use online learning with per-feature normalized updates and importance invariant updates. The variant VW Search (own fts) uses computationally inexpensive feature construction facilities available in Vowpal Wabbit (e.g., token prefixes and suffixes), whereas for comparison purposes VW Search uses the same features as the other systems.
These approaches vary in both objective function (CRF, MIRA, structured SVM, learning to search) and optimization approach (L-BFGS, cutting plane, stochastic gradient descent, AdaGrad). All implementations are in C/C++, except for the structured perceptron and DEMI-DCD, which are in Java.
In Figure 3, we show trade-offs between training time (x-axis, log scaled) and prediction accuracy (y-axis) for the six systems described previously. The left figure is for part of speech tagging and the right figure is for named entity recognition. For POS tagging, the independent classifier is by far the fastest (trains in less than one minute) but its performance peaks at 95% accuracy. Three other approaches are in roughly the same time/accuracy tradeoff: VW Search, VW Search (own fts) and Structured Perceptron. All three can achieve very good prediction accuracies in just a few minutes of training. CRF SGD takes about twice as long. DEMI-DCD eventually achieves the same accuracy, but it takes a half hour. CRF++ is not competitive (taking over five hours to even do as well as VW Classification). Structured SVM (cutting plane implementation) runs out of memory before achieving competitive performance, likely due to too many constraints.
For NER the story is a bit different. The independent classifiers are far from competitive. Here, the two variants of VW Search totally dominate. In this case, Structured Perceptron, which did quite well on POS tagging, is no longer competitive and is essentially dominated by CRF SGD. The only system coming close to VW Search's performance is DEMI-DCD, although its performance flattens out after a few minutes. In addition to training time, test-time behavior can be of high importance in natural applications. On NER, prediction times varied from 5.3k tokens/second (DEMI-DCD and Structured Perceptron) to around 20k (CRF SGD and Structured SVM) to 100k (CRF++) to 220k (VW (own fts)) and 285k (VW). Although CRF SGD and Structured Perceptron fared well in terms of training time, their test-time behavior is suboptimal.
When looking at POS tagging, the effect of the O(k) dependence on the size of the label set further increased the (relative) advantage of VW Search over alternatives.
Summary and future directions
In working with learning reductions for several years, the greatest benefits seem to come from modularity, deeper reductions, and computational efficiency.
Modularity means that the extra code required for multiclass classification (for example) is minor compared to the code required for binary classification. It also simplifies the use of a learning system, because (for example) learning rate flags apply to all learning algorithms. Modularity is also an easy experimentation and optimization tool, as one can plug in different black boxes for different modules. While there are many experiments showing near-parity prediction performance for simple reductions, it appears that for deeper reductions the advantage may become more pronounced. This is well illustrated by the learning to search results discussed in section 5, but has been observed with contextual bandit learning as well [1]. The precise reason for this is unclear, as it is very difficult to isolate the most important difference between very different approaches to solving the problem.
Not all machine learning reductions provide computational benefits, but those that do may provide enormous benefits. These are mostly detailed in section 4, with benefits often including an exponential reduction in computational complexity.
In terms of the theory itself, we have often found that qualitative transitions from an error reduction to a regret reduction are beneficial. We have also found the isolation of concerns via encapsulation of the optimization problem to be quite helpful in developing solutions.
We have not found that precise coefficients are predictive of relative performance between two reductions accomplishing the same task with the same base learning algorithm but different representations. As an example, the theory for error-correcting tournaments [12] is substantially stronger than for one-against-all, yet often one-against-all performs better empirically. The theory is of course not wrong, but since the theory is relativized by the performance of the base predictor, the representational compatibility issue can and does play a stronger role in predicting performance.
There are many questions we still have about learning reductions.
1. Can the interface we have support effective use of SIMD/BLAS/GPU approaches to optimization? Marrying the computational benefits of learning reductions to the computational benefits of these approaches could be compelling.
2. Is the learning reduction approach effective when the base learner is a multitask (possibly "deep") learning system? Often the different subproblems created by the reduction share enough structure that a multitask approach appears plausibly effective.
3. Can the learning reduction approach be usefully applied at the representational level? Is there a theory of representational reductions?
Editorial: Pathophysiology of the Basal Ganglia and Movement Disorders: Gaining New Insights from Modeling and Experimentation, to Influence the Clinic
Citation: Andres DS, Merello M and Darbin O (2017) Editorial: Pathophysiology of the Basal Ganglia and Movement Disorders: Gaining New Insights from Modeling and Experimentation, to Influence the Clinic. Front. Hum. Neurosci. 11:466. doi: 10.3389/fnhum.2017.00466
The human brain is complex at every level, from the scale of single neurons to microcircuits and large neuronal networks. Although this complexity is well known and studied in neuroscience, few tools or concepts from the analysis of complex systems have been transferred to the clinic yet. In the case of the basal ganglia, there has been much debate about the necessity to include nonlinear concepts in pathophysiology models (Andres and Darbin, in press). However, for new approaches to make an impact on the clinic, active research is needed on many fronts: new clinical, experimental, and modeling insights are crucial. In other words, research in the field of the basal ganglia and related disorders is becoming increasingly interdisciplinary. In this Research Topic, different authors challenge classic paradigms of basal ganglia pathophysiology and movement disorders in six areas of research. The main findings and breakthroughs are summarized in the following paragraphs.
BASAL GANGLIA MODELS AND THEORY
• Current theories of the basal ganglia are not always consistent with clinical and experimental observations. New models need to be built based on advances in the fields of complexity, chaos, and non-linear systems (Montgomery).
• Action selection concepts and pharmacokinetics combined in a mixed modeling approach can be used to study how dopamine affects a motor task at different stages of PD (Baston et al.).
BASAL GANGLIA FUNCTIONS
• The basal ganglia together with the cerebellum and prefrontal cortical areas play a role in time processing, which is altered in movement disorders and affects both motor and cognitive performance (Avanzino et al.).
• Dopamine signaling influences the level of physical activity. Chronic exposure to obesogenic diets causes striatal dopamine dysfunction, which might be related to the difficulty people with obesity have in increasing their physical activity (Kravitz et al.).
MOLECULAR PATHWAYS AND NEUROTRANSMISSION
• Accumulation of alpha-synuclein (αSyn) in Lewy bodies and Lewy neurites characterizes the progression of Parkinson's disease. The discovery of cell-to-cell propagation of αSyn opens new therapeutic avenues for the treatment of PD and related disorders (Prymaczok et al.).
• Huntington's disease (HD) is considered a paradigm of epigenetic dysregulation. Cell-type-specific techniques and 3D-based methods can be used to advance knowledge in the context of brain-region vulnerability in neurodegenerative diseases, leading to the design of new therapeutic targets (Francelle et al.).
• Dysregulation of glutamate in the corticostriatal pathway is implicated in HD. Alterations of dopamine, a modulator of glutamatergic activation, also play a role in deficits of neuronal communication throughout the basal ganglia in HD (Bunner and Rebec).
NON-MOTOR SYMPTOMS OF BASAL GANGLIA DISORDERS
• Apathy is a cardinal symptom of PD, but its pathophysiology is poorly understood. A new animal model (VMAT2-deficient mice) shows an apathetic-like phenotype that might be independent of depressive-like symptoms. This is a step forward in studying the biological substrates of apathy in PD (Baumann et al.).
• A study based on electroencephalograms (EEG) of PD patients and age-matched healthy individuals shows that pharmacologic treatment helps maintain long-term action-outcome representations in PD patients, but not the initial experience of action-effect (Bednark et al.).
SIGNAL ANALYSIS AND CLINICAL APPROACHES
• The temporal structure function is a robust and simple-to-compute tool for the analysis of neuronal activity, which helps identify random, oscillatory, and non-linear behavior in the dynamics of single neurons. This technique can be used to quantify complex neuronal activity in healthy and PD neurons (Nanni and Andres).
• A new cost-effective screening protocol for parkinsonism based on combined objective and subjective monitoring of balance using a game industry balance board might be a strategy for PD screening in communities with limited access to healthcare (Darbin et al.).
• Bicycling ability remains preserved in PD patients who suffer freezing of gait, but the neural mechanisms underlying this observation are not known. A new experimental setup allows this phenomenon to be investigated by combining recordings of basal ganglia LFPs and scalp EEG in PD patients while bicycling, walking, or performing other motor tasks (Gratkowski et al.).
• Disparate patterns of subcortical degeneration evidenced by automated volumetric magnetic resonance imaging can explain some differences in symptoms between PD clinical subtypes, such as gait disturbances and cognitive functions. This finding may help to design personalized therapeutic approaches in the future (Rosenberg-Katz et al.).
• Quantification of specific functional deficits of gait could provide a basis for locating the source and extent of neurological damage in PD, aiding clinical decision-making for individualizing therapies (König et al.).
DEEP BRAIN STIMULATION (DBS)
• A new method uses intraoperative stimulation test data to identify optimal implant position of DBS leads by relating electric field simulations to patient/specific anatomy and the clinical effects of stimulation as measured by accelerometry (Hemm et al.).
• A new study based on near-infrared spectroscopy (NIRS) in PD patients concludes that therapeutic DBS promotes neuronal network remodeling in the prefrontal cortex (Morishita et al.).
• Impulsivity is related to abnormally fast reaction times in high-conflict situations, an effect that is pronounced under DBS of the STN. In a computational model, reaction time can be controlled by varying the DBS electrode position within the STN, causing antidromic activation of the globus pallidus externus (GPe) (Mandali and Chakravarthy).
The results published in this topic promise great advancements in coming years in the field of basal ganglia pathophysiology and related disorders.
AUTHOR CONTRIBUTIONS
DA, MM, and OD are responsible for the full content of this article.
Geochemical Controls on Uranium Release from Neutral-pH Rock Drainage Produced by Weathering of Granite, Gneiss, and Schist
We investigated geochemical processes controlling uranium release in neutral-pH (pH ≥ 6) rock drainage (NRD) at a prospective gold deposit hosted in granite, schist, and gneiss. Although uranium is not an economic target at this deposit, it is present in the host rock at a median abundance of 3.7 µg/g, i.e., above the average uranium content of the Earth's crust. Field bin and column waste-rock weathering experiments using gneiss and schist mine waste rock produced circumneutral-pH (7.6 to 8.4) and high-alkalinity (41 to 499 mg/L as CaCO3) drainage, while granite produced drainage with lower pH (pH 4.7 to >8) and lower alkalinity (<10 to 210 mg/L as CaCO3). In all instances, U release was associated with calcium release and formation of weakly sorbing calcium-carbonato-uranyl aqueous complexes. This process accounted for the higher release of uranium from carbonate-bearing gneiss and schist than from granite despite the latter's higher solid-phase uranium content. In addition, unweathered carbonate-bearing rocks having a higher sulfide-mineral content released more uranium than their oxidized counterparts because sulfuric acid produced during sulfide-mineral oxidation promoted dissolution of carbonate minerals, release of calcium, and formation of calcium-carbonato-uranyl aqueous complexes. Substantial uranium attenuation occurred during a sequencing experiment involving application of uranium-rich gneiss drainage into columns containing Fe-oxide-rich schist. Geochemical modeling indicated that uranium attenuation in the sequencing experiment could be explained through surface complexation and that this process is highly sensitive to dissolved calcium concentrations and pCO2 under NRD conditions.
Introduction
Uranium contamination in water is a global concern due to this element's chemical toxicity towards humans and other living organisms [1]. Water-quality guidelines for U in Canada are 20 µg/L for drinking water and 15 µg/L for the protection of aquatic life (long-term exposure) [2,3]. Contamination of water by U frequently involves U mining and milling [4][5][6] but can also arise through natural weathering of rocks enriched in U through magmatic differentiation, such as granites and rhyolites [7][8][9][10][11].
Additionally, past studies on U behavior in a mining context have largely focused on wastes derived from U mining [12,28,30,36,45–47], yet magmatically differentiated rocks hosting a wide variety of ores (e.g., REE, Li) can also contain elevated U that could be mobilized through mine-waste weathering [48]. Thus, NRD-producing magmatically differentiated rocks may be particularly important, yet understudied, sources of U release. The objective of this work is, therefore, to evaluate the geochemical processes that control U mobility in Ca-rich NRD resulting from weathering of granitic and metamorphic rocks enriched in U. A combination of geochemical data gathered from exploration drill-core and kinetic leaching tests on a proposed mine's waste rock are analyzed to assess the outcomes of Reactions (1) through (3) above on U release in this setting.
Study Site and Geological Setting
The study site is located at the Coffee gold deposit in subarctic Yukon, northwest Canada (Figure 1). This deposit formed ca. 97 million years ago through interaction of Au-As-Sb-S-bearing fluid with metamorphic and granitic country rock along structurally controlled faults and fractures [49,50]. The deposit is located in the Dawson Range, wherein plutonic and metamorphic rocks are regionally enriched in U relative to typical crustal rocks [51]. Weathering of these rocks produces baseline U concentrations that can exceed 500 µg/L in groundwater and 300 µg/L in surface water, i.e., an order of magnitude above water-quality guidelines, making U an element of regional environmental concern [51]. The dominant local geological units are the Permian-aged Sulphur Creek orthogneiss, Klondike schist, and Snowcap schist, which were intruded by the Cretaceous-aged Coffee Creek biotite granite [49]. Minor expressions of marble and younger dykes are also present. Extensively weathered bedrock and the absence of Pleistocene glaciation have produced a deep oxidation front in the deposit, reaching depths of up to 300 m.
The extent of oxidation in drill-core samples at the Coffee deposit is used to categorize rocks into three weathering facies: the "oxide" facies corresponds to rocks with >95% of surfaces showing visible oxidation and the presence of characteristic limonite/hematite stains or cubic vugs that remain after pyrite oxidation; the "transition" facies corresponds to 5% to 50% of surfaces showing visible oxidation on pyrite grains; and the "fresh" facies is defined by samples where <5% of surfaces are oxidized and there is only minor joint-controlled oxidation that does not extend into adjacent rock.
Drill-Core Characterization
Geochemical characteristics of the rock types and weathering facies were established using a dataset of 479 drill-core samples collected at the Coffee deposit. All samples were classified by field exploration geologists based on rock type (gneiss, schist, or granite) and weathering facies (oxide, transition, or fresh). In the laboratory, samples were pulverized for geochemical analyses of elemental abundances using an aqua regia digestion followed by analysis via inductively coupled plasma mass spectrometry (ICP-MS), total inorganic carbon (TIC), and total sulfur. Analyses were either conducted at a commercial lab (ALS, North Vancouver, BC, Canada), or at the Pacific Centre for Isotopic and Geochemical Research (University of British Columbia, Vancouver, BC, Canada).
The acid-neutralization potential ratio (NPR), which is a standard parameter used to assess the likelihood that mine wastes produce acidic-rock drainage [38], was calculated from TIC and total sulfur results. In this calculation, the acid-generation potential (AP) is calculated from total sulfur assuming that all S can be released as H2SO4, which consumes 1 mol of CaCO3 per mol H2SO4. The acid-neutralization potential (NP) is calculated from TIC assuming that all TIC is CaCO3. The NPR reflects the ratio of acid-neutralization potential to acid-generation potential, both expressed as kg CaCO3 per ton rock, as per the NPR equation [38]: NPR = NP/AP. Rocks with a NPR < 1 are classified as potentially acid generating (PAG), rocks with a NPR >> 1 are classified as non-acid generating (NAG), and rocks with a NPR near or slightly above 1 (e.g., 1 < NPR < 2) are classified as uncertain [38].
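As a worked illustration, the short Python sketch below computes AP, NP, and NPR from total sulfur and TIC using the conventional stoichiometric factors; the exact factors applied in [38] are assumed rather than quoted.

```python
def npr(total_s_wt_pct, tic_wt_pct):
    """Acid-neutralization potential ratio (NPR = NP / AP), both in kg CaCO3/t.
    AP assumes all S is released as H2SO4 (1 mol CaCO3 consumed per mol H2SO4);
    NP assumes all inorganic carbon is present as CaCO3. Illustrative only."""
    ap = total_s_wt_pct * (100.09 / 32.07) * 10.0   # ~31.2 kg CaCO3/t per wt.% S
    np_ = tic_wt_pct * (100.09 / 12.01) * 10.0      # ~83.3 kg CaCO3/t per wt.% C
    return np_ / ap if ap > 0 else float("inf")

# Example with the deposit-wide median values (0.07 wt.% S, 0.5 wt.% TIC): NPR ~ 19, i.e., NAG.
print(round(npr(0.07, 0.5), 1))
```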
The oxidation state of U in select rock samples representing each major lithology (gneiss, schist, and granite) and weathering facies (oxide, transition, fresh) was determined by X-ray absorption near-edge spectroscopy (XANES). Data were collected on the Hard X-ray Micro-Analysis beamline (HXMA, 061D-1) at the Canadian Light Source (CLS, Saskatoon, SK, Canada). The X-ray beam was calibrated to a Y foil at the first inflection point of the Y K-edge (17,038 eV), and U spectra were collected at the L3 edge (17,166 eV) over 5 energy regions: −200 eV to −160 eV, 10 eV steps; −160 eV to −100 eV, 0.5 eV steps (covering the yttrium foil calibration standard); −100 eV to −30 eV, 10 eV steps; −30 eV to +40 eV (U L3 edge), 0.5 eV steps; and +40 eV to k = 14.2 Å−1, 0.05 Å−1 steps. The dwell time was 1 s per step. Data were collected in fluorescence mode with a 32-element Ge detector. Multiple spectra were collected for each sample, and to correct for potential beam drift, each analysis featured a Y-foil calibration measurement. We also collected XANES spectra of uranyl nitrate and uraninite standards using the same beamline settings as samples. To estimate the relative proportion of U redox species in each sample, linear combination fitting was conducted using the ATHENA software suite (Demeter v. 0.9.22) [52] and the uranyl nitrate and uraninite spectra as standards.
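In essence, LCF is a constrained least-squares fit of each normalized sample spectrum to the standard spectra; the Python sketch below reproduces that step. Normalization, background removal, and energy alignment (handled by ATHENA) are assumed to be done already, and the synthetic spectra are purely illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def lcf_fractions(sample, standards):
    """Fit a normalized XANES spectrum as a non-negative linear combination of
    standard spectra on a common energy grid; returns fractions summing to 1."""
    A = np.column_stack(standards)     # columns: e.g., uraninite (U(IV)), uranyl (U(VI))
    coeffs, _ = nnls(A, sample)        # non-negative least squares
    return coeffs / coeffs.sum()

# Synthetic demonstration: a 30:70 U(IV):U(VI) mixture is recovered.
energy = np.linspace(-30, 40, 141)
u4 = 1.0 + np.tanh(energy / 5.0)       # stand-ins for measured standard spectra
u6 = 1.0 + np.tanh((energy - 4.0) / 5.0)
mix = 0.3 * u4 + 0.7 * u6
print(lcf_fractions(mix, [u4, u6]))    # ~[0.3, 0.7]
```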
Experimental Design
Relationships between lithological characteristics, aqueous geochemistry, and U release were assessed with rock weathering experiments that included "field bins" and columns wherein rock was exposed to water and oxygen and the resulting leachate was characterized. These experiments were conducted under water-unsaturated conditions to mimic field waste-rock weathering conditions where fluid exchange with atmospheric oxygen leads to the oxidation of sulfide minerals as per Reaction (1). Field-bin experiments have a longer water residence time and lower water/rock ratio than columns, which leads to better representation of secondary sorption and mineral precipitation reactions that occur in full-scale waste-rock weathering environments in comparison with columns. Composite rock samples representing the various lithologies (granite, gneiss, schist) and weathering facies (oxide, transition, fresh) were prepared for these experiments (Supplementary Table S1).
The field bins consisted of 119 L HDPE barrels that were filled with approximately 200 kg of dry crushed rock, sieved to a grain size < 6 mm, and exposed from 2013 to 2018 to outdoor temperature and precipitation conditions in Whitehorse, Yukon, Canada (Supplementary Figure S1) [53,54]. Field bins were initially irrigated with 28 L of de-ionized water, after which atmospheric precipitation was the only source of water. Leachates were collected into closed plastic containers that were sampled approximately monthly for geochemical analyses.
The column experiments involved application of 400 mL of Nanopure™ water purified through distillation and de-ionization (DDI water) onto Plexiglas® reactors containing 10 kg of crushed and sieved (<6 mm) waste rock every 14 days. Columns were 21 cm in inner diameter and 20.5 cm tall and were operated at a lab temperature of 4 °C to reflect the subarctic climate of the field site. A nylon mesh overlying a perforated Plexiglas® disk at the base of the columns was used to retain the rock in the columns.
Three "Phases" of column experiments were performed. Phase 1 involved application of DDI water to the columns every 2 weeks between 2014 and 2018, with the leachate collected for chemical analysis. During a 6-month period in Phase 1 and beginning in May 2015, the effluent from the base of the columns was collected and then re-applied to the top columns once. Upon the second pass of water through the column, samples were withdrawn for chemical analyses. This recirculation procedure was included to increase the water contact time with the rock such that it is closer to field conditions in waste-rock storage facilities (WRSF). After January 2016, chemical concentrations stabilized and thus the recirculation procedure was removed: thereafter, DDI water was applied as influent once and the resulting leachate collected for analysis without recirculation. Phases 2 and 3 were designed to investigate U mobility controls during interaction of leachate produced by one rock type with another rock's solids. In Phase 2, leachate from high-alkalinity and U-poor schist-bearing columns was used as influent for U-rich gneiss-bearing columns, which was hypothesized to drive U desorption via CCU complexation. In Phase 3, this order was reversed, to assess whether the Fe-rich schist could attenuate U from gneiss leachate via sorption. Phases 2 and 3 were conducted using gneiss and schist pairings, with one experimental set devoted to oxide-facies rocks and another to transition-facies rocks (Supplementary Table S1).
Geochemical Analyses
Column and field bin leachates were sampled for pH, alkalinity, anions, and metals analyses at SGS Canada Inc. (Burnaby, BC, Canada). Alkalinity and anions were determined on filtered (0.45 µm) sample aliquots using titration with H 2 SO 4 and ion chromatography, respectively. Metals were analyzed on filtered (0.45 µm) and acidified (HNO 3 to pH < 2) sample aliquots using inductively coupled plasma mass spectrometry. Each water-sample batch was accompanied by a method blank, a duplicate sample analysis to monitor precision, and a matrix spike to monitor accuracy. Lab QA/QC criteria at SGS include matrix spike recoveries better than 70% for metals and 75% for anions, and reproducibility between duplicate samples better than 20%.
The grain-size distribution and geochemical composition of rocks used in weathering experiments were also determined by SGS Canada. Metals were analyzed by ICP-MS after aqua-regia digestion of pulverized samples. Analyses of duplicate and reference materials (OREAS 260 and OREAS502B, Melbourne, Australia) showed that ICP-MS reproducibility and accuracy were ≤5%. Total inorganic carbon (TIC) was measured by treating rock sample powders with HClO4 to convert C to CO2, which was measured by coulometry. Rock total sulfur content was determined by Leco combustion analysis. Sulfide was measured by first leaching sulfate with Na2CO3 and analyzing the residual sample with the Leco analyzer. Shake-flask extractions were conducted by SGS on rock samples using distilled de-ionized water at a 3:1 water:rock ratio shaken for 24 h, with the leachate analyzed for U content by ICP-MS.
Mineralogical abundances were quantified by powder X-ray diffraction (XRD) at the CLS (analytical details in Section S.1. of the Supplementary Materials) or by SGS Canada Inc (Burnaby, BC, Canada). Quantitative evaluation of materials by scanning electron microscopy (QEMSCAN) analysis was used at SGS to quantify the proportions of Fe-minerals found as oxides, and of S-minerals as sulfides, respectively. QEMSCAN analyses were conducted on samples pulverized to 80% passing <106 µm and then graphite-impregnated into polished thin sections.
At the end of column experiments, residues were also characterized for elemental abundances (ICP-MS) and shake-flask extractable U as described above. Sequential chemical extractions (SCEs) were conducted on the residues from the two schist columns (C3-ScO and C4-ScT) using a 5-step protocol based on the method of Tessier et al. [55] (procedural details in Section S.2. of the Supplementary Materials).
Geochemical Modeling
The geochemical code PHREEQC [56] was used to calculate aqueous speciation and mineral saturation indices (SI) on field-bin and column effluents. We also conducted a series of simulations in PHREEQC to investigate U sorption during Phase 3 of the column experiments. A detailed explanation of the modeling approach is given in the Supplementary Materials (Section S.4). Briefly, U sorption was assumed to be dominated by surface complexation and it was modeled using a 2-step approach. In the first step, HFO availability was calibrated for the schist columns using Phase 1 major-ion chemistry data, the rock/water ratio, QEMSCAN quantification of Fe-oxides, and U solid-phase abundance data from aqua regia and sequential chemical extractions. These calibrated HFO compositions for the schist were then used in simulations of Phase 3, which involved batch reactions with gneiss influent solution and residual porewater in the schist column in the presence of HFO, calcite, and H 2 SO 4 (assumed to be sourced from sulfide-mineral oxidation). Changes in major-ion chemistry (pH, alkalinity, Ca, sulfate) were considered by allowing dissolution/precipitation of calcite, dissolution/exsolution of CO 2 , and addition of H 2 SO 4 along with surface complexation with HFO (full details in Supplementary Materials Section S.4.2).
PHREEQC simulations were conducted using the wateq4f.dat database, which was amended with association constants for the Ca-carbonato-uranyl and Mg-carbonato-uranyl aqueous complexes from Dong and Brooks [24] and with surface complexation constants for HFO-uranyl and HFO-carbonate species as per Mahoney et al. [18,57] (Supplementary Table S2). Surface complexation was modeled using the Dzombak-Morel diffuse double-layer model with a 50:1 ratio of strong and weak sorption sites [57,58].
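For orientation, the sketch below shows how an HFO sorption-site inventory of the kind calibrated in the first modeling step could be estimated from the QEMSCAN Fe-oxide abundance and the column rock/water ratio. The site densities, the HFO formula weight, and the available-fraction parameter are generic Dzombak-Morel-style assumptions, not the calibrated values used in this study.

```python
def hfo_site_concentrations(fe_oxide_wt_pct, rock_kg, water_L,
                            available_fraction=0.01,
                            weak_per_mol_fe=0.2, strong_per_mol_fe=0.005,
                            hfo_g_per_mol_fe=89.0):
    """Estimate weak and strong HFO surface-site concentrations (mol per litre
    of solution) from rock Fe-oxide content; all parameter defaults are
    assumptions standing in for the study's calibrated HFO availability."""
    hfo_g = rock_kg * 1000.0 * (fe_oxide_wt_pct / 100.0) * available_fraction
    mol_fe = hfo_g / hfo_g_per_mol_fe
    weak = mol_fe * weak_per_mol_fe / water_L
    strong = mol_fe * strong_per_mol_fe / water_L
    return weak, strong

# Example: a schist-oxide column (10 kg rock, ~2.0 wt.% Fe-oxide) with a 0.4 L water application.
print(hfo_site_concentrations(2.0, 10.0, 0.4))
```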
Uranium, Sulfur, and Carbonate Content of the Coffee Deposit
Drill-core geochemical analyses indicate that rock at the Coffee deposit is generally NAG owing to an excess of TIC over sulfur, which average 0.5 wt.% and <0.07 wt.%, respectively. Overall, 97.5% of samples are classified as NAG with NPR > 1 (Supplementary Figure S2). Rock types can generally be characterized as follows: schist is typically the most enriched in sulfur and TIC, granite has low sulfur and low TIC, and gneiss has intermediate sulfur and TIC content (Figure 2). Sulfide contents are minimal in oxide-facies samples of all rock types: median values are 0.005 wt.% in granite-oxide and gneiss-oxide, and 0.01 wt.% for schist-oxide. Gneiss-oxide and especially granite-oxide rocks are also depleted in TIC relative to their less-weathered counterparts, with median TIC values of 0.2 wt.% and 0.1 wt.%, respectively, although they generally are NAG (Supplementary Figure S2). Weathering of the deposit over geological timescales has therefore created an oxidized zone that is depleted in sulfide minerals yet has maintained excess TIC to create NAG rock.
Rocks at the Coffee deposit are modestly enriched in U relative to typical crustal rocks, with a median abundance of 3.7 µg/g and a maximum of 144 µg/g (Table 1), both above the 2.7 µg/g average of Earth's upper crust [59]. The deposit is also enriched in U in comparison with regional country rocks: Sulphur Creek suite orthogneiss and Whitehorse suite granite samples collected outside the Coffee deposit have median abundances of 3.3 and 3.2 µg/g, respectively [51]. The lack of clear relationships between U and hydrothermally sourced elements As, S, and Sb in fresh-facies rocks in the deposit suggests that its U was not sourced from the hydrothermal system (Supplementary Figure S3). The strongest enrichment in U is found in the oxide facies of each major lithology (granite, gneiss, and schist; Table 1), where median values are 8.9, 7.6, and 2.2 µg/g, respectively (Supplementary Table S1). This enrichment of oxide-facies rocks in U may indicate retention of U transported by the groundwater flow system. This hypothesis is supported by significantly higher U/Ti in the oxide facies (p < 0.05, Mann-Wilcox test) in comparison with the fresh facies across all lithologies (Figure 2), because Ti is relatively immobile in groundwater but U is not.
Uranium redox speciation of select gneiss and schist samples determined by linear combination fitting (LCF) of the XANES spectra show that samples from all weathering facies have an appreciable U(VI) content, with proportionally more U(VI) in oxide-facies rocks than in transition-and fresh-facies rocks ( Figure 3 and Supplementary Table S3). These results suggest the presence of U(IV) phases in unoxidized portions of the deposit, while rock oxidation has liberated U that was transported and re-deposited as U(VI) in oxide-facies rocks.
Figure 2. Sulfur, total inorganic carbon (TIC), U, and U/Ti by rock type and by weathering facies in Coffee drill-core samples. Red stars indicate median U or U/Ti is significantly higher (p < 0.05), or TIC and sulfur content is significantly lower (p < 0.05), in transition or oxide facies than in fresh facies for a given rock type (Mann-Wilcox Test).
Figure 3. Uranium XANES linear combination fitting (LCF) results by rock type and weathering facies (see Supplementary Table S3). The LCF error is estimated at ±10%.
Rock Weathering Experiments
Fresh-facies rocks for a given rock type used in kinetic tests contained the highest sulfide and carbonate mineral content, while oxide-facies rocks were depleted in sulfide and carbonate minerals, reflecting observations from the larger drill-core dataset ( Figure 4). All rocks used in kinetic tests were NAG, with NPRs ranging from 6.5 to 81, except the granite-transition which had an NPR of 0.7 (Supplementary Table S1). Carbonate minerals included calcite, dolomite, and ankerite, the sum of which varied in the order schist > gneiss > granite (Supplementary Table S1). Carbonates comprised 9.7 to 24 wt.% in schist and ranged from below XRD detection limits to 4.3 wt.% in gneiss, while no carbonates were detected in granite (Supplementary Tables S1 and S4). Pyrite and arsenopyrite were the dominant sulfide minerals observed by XRD and QEMSCAN analyses with abundances below 1 wt.%. The Fe-oxide content determined by QEMSCAN was notably higher in the schist-oxide at 2.0 wt.% in comparison with other rocks which had 0.20 to 0.56 wt.% Fe-oxide (Supplementary Table S1). Fe-oxide abundance was generally greater in oxide-facies rocks for a given lithology than in their transition-facies and fresh-facies counterparts. No U minerals were identified by XRD. The initial solid-phase U content was higher in the gneiss and granite (5.5 to 7.6 µg/g) than in the schist (2.6 to 3.2 µg/g) (Figure 4).
Within a given weathering facies, U concentrations followed the order gneiss > schist > granite ( Figure 5). Leachates in the gneiss and schist field bins contained tens to hundreds of µg/L U, while all granite U concentrations remained <10 µg/L. This lithological control was also observed in column experiments: in the gneiss-transition and schist-transition columns, U concentrations stabilized between ca. 180 µg/L and ca. 30 µg/L respectively, while they rapidly declined below 1 µg/L in the granite columns ( Figure 5). Comparable lithological controls and U concentration ranges occur in groundwater around the Coffee deposit, where monitoring wells screened in gneiss can produce water containing several hundred µg/L U while those in schist and granite typically have lower concentrations that range from ca. 30 to 80 µg/L, and ca. 4 to 75 µg/L, respectively [60]. There was an additional effect of weathering facies, with oxide-facies rocks releasing the least U. In gneiss-oxide and schist-oxide field bins, U concentrations were <10 µg/L, while they ranged from 23 to 750 µg/L in the transition-and fresh-facies experiments ( Figure 5). Leachate pH and alkalinity were distinctly lower in granite weathering experiments in comparison with gneiss and schist experiments ( Figure 5), owing to the minimal carbonate content in the granite. Granite-transition rock produced leachate pH ca. 7.5 in the column, and pH < 6 in the field bin, with alkalinity concentrations being close to detection limits. In contrast, the carbonate-rich gneiss and schist produced drainage in the pH 7.6 to 8.4 range, and several hundred mg/L of alkalinity (as CaCO 3 ) ( Figure 5). Geochemical modeling indicated that leachates were near calcite saturation in all column experiments, with mineral saturation indices (SI) typically ranging between −0.25 and +0.50, except the granite-transition column where calcite was undersaturated (SI −1 to −3). All columns were undersaturated with respect to gypsum (SI < −0.8). Thus, gneiss and schist drainage can generally be described as high-alkalinity NRD, while granite drainage is circum-neutral to moderately acidic with low alkalinity.
Within a given weathering facies, U concentrations followed the order gneiss > schist > granite (Figure 5). Leachates in the gneiss and schist field bins contained tens to hundreds of µg/L U, while all granite U concentrations remained <10 µg/L. This lithological control was also observed in column experiments: in the gneiss-transition and schist-transition columns, U concentrations stabilized between ca. 180 µg/L and ca. 30 µg/L respectively, while they rapidly declined below 1 µg/L in the granite columns (Figure 5). Comparable lithological controls and U concentration ranges occur in groundwater around the Coffee deposit, where monitoring wells screened in gneiss can produce water containing several hundred µg/L U while those in schist and granite typically have lower concentrations that range from ca. 30 to 80 µg/L, and ca. 4 to 75 µg/L, respectively [60]. There was an additional effect of weathering facies, with oxide-facies rocks releasing the least U. In gneiss-oxide and schist-oxide field bins, U concentrations were <10 µg/L, while they ranged from 23 to 750 µg/L in the transition- and fresh-facies experiments (Figure 5).
Uranium, Ca, and sulfate release rates in field bins also correlated with rock weathering facies, with the highest rates produced in the least oxidized rocks (Figure 6). This was not the case for alkalinity, which was released at the lowest rate in the fresh facies for each rock type (Figure 6). Lower alkalinity release rates in fresh-facies rocks can be explained by their higher sulfide-mineral content, which led to sulfide-mineral oxidation, sulfuric acid (and sulfate) generation, and alkalinity consumption (Figure 6). Calcium release rates followed sulfate release rates because Ca was released from carbonate-mineral dissolution reactions that buffered H2SO4 (Figure 6). While there is longstanding knowledge of the relationship between alkalinity and U release due to aqueous complexation of U by bicarbonate [13], the lack of correlation between alkalinity and U release rates in our experiments indicates that alkalinity was not a limiting factor in mobilizing U. Rather, greater U release from fresh-facies rocks may be explained by their higher Ca release that promoted formation of weakly sorbing CCU complexes (Reaction (3)) [18]. This hypothesis is supported by the higher proportion of U associated with CCU complexes in fresh- and transition-facies field bins in comparison with their oxide-facies counterparts (Supplementary Figure S4).
Figure 6. Cumulative U release rate in field-bin experiments against release rates of alkalinity (left), sulfate (middle) and calcium (right). Arrows indicate trend from oxide to transition to fresh weathering facies in each rock type.
Uranium release rates were also decoupled from solid-phase U content: despite granite-transition and granite-fresh rock having the highest U abundances, they released substantially less U than their gneiss and schist counterparts ( Figure 6). For the granite-transition field bin, the disproportionally low U release rate relative to the U abundance can be explained by its pH (ca. 5) and low alkalinity (<10 mg/L as CaCO 3 ). Under these conditions, speciation calculations in PHREEQC suggest that a substantial proportion of U was present as UO 2 2+ and UO 2 OH + , which can effectively sorb onto HFO through inner-sphere complexation [20] (Supplementary Figure S4). In the granite-fresh experiment, the concentration of these species was negligible but between 1% and 27% of U (aq) was in the form of uranyl carbonates, predominantly as UO 2 (CO 3 ) 2 2− , which also sorbs onto HFO through outer-sphere complexation [20]. In contrast, in the higher-alkalinity schist and gneiss field bins, 91% to 99% of U (aq) was speciated as CCU, which are essentially unavailable for U sorption [18,20], while only a minor proportion of more sorption-reactive UO 2 2+ , UO 2 OH + , and uranyl carbonate species were present, promoting higher U release rates (Supplementary Figure S4).
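The speciation percentages quoted above come from PHREEQC output; the fragment below only illustrates how such output can be summarized into the weakly sorbing CCU pool versus the sorption-reactive pool discussed in this section. The species molalities are invented placeholders, and the grouping of species into pools follows the classification used here.

```python
# Hypothetical dissolved-U species molalities (mol/kgw), e.g. parsed from a
# PHREEQC species distribution for one leachate sample.
u_species = {
    "CaUO2(CO3)3-2": 3.0e-7,   # calcium-carbonato-uranyl (CCU)
    "Ca2UO2(CO3)3":  4.5e-7,   # CCU
    "UO2(CO3)2-2":   2.0e-8,   # uranyl carbonate
    "UO2(CO3)3-4":   5.0e-9,   # uranyl carbonate
    "UO2+2":         1.0e-9,
    "UO2OH+":        2.0e-9,
}

CCU = {"CaUO2(CO3)3-2", "Ca2UO2(CO3)3"}   # essentially unavailable for sorption
REACTIVE = {"UO2+2", "UO2OH+"}            # sorb effectively onto HFO

total = sum(u_species.values())
pct = lambda names: 100 * sum(u_species[n] for n in names) / total

print(f"CCU complexes:      {pct(CCU):5.1f} % of dissolved U")
print(f"UO2^2+ / UO2OH+:    {pct(REACTIVE):5.1f} %")
print(f"uranyl carbonates:  {pct(set(u_species) - CCU - REACTIVE):5.1f} %")
```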
Controls on U Surface Complexation during Column Sequencing Experiments
Upon application of leachate from the schist-transition column to the gneiss-transition column in Phase 2, a pronounced increase in U concentrations in gneiss effluent was observed, reaching up to 272 µg/L (Figure 7). In the transition-facies columns and prior to Phase 2 and 3 sequencing experiments, drainage from the schist-transition column had produced considerably higher major-ion concentrations, including dissolved Ca and sulfate, than the gneiss-transition column (Figure 7). Alkalinity concentrations were slightly lower for the schist than for the gneiss. Uranium concentrations were notably higher for the gneiss (ca. 150 µg/L) in comparison with values of approximately 30 µg/L for the schist. Given the lower alkalinity in the schist-transition leachate, alkalinity was not responsible for driving the U released during the schist-to-gneiss sequencing experiment. The increase in U concentrations during Phase 2 is instead hypothesized to result from elevated Ca in the schist effluent, which promoted CCU formation and U desorption after this effluent was circulated through the gneiss column.
In the analogous oxide-facies columns, circulation of leachate from the schist-oxide column through the U-rich gneiss-oxide column also produced an increase in U concentrations in the gneiss-oxide effluent, albeit more modest than in the transition-facies column, from ca. 80 µg/L prior to the sequencing experiment up to 93 µg/L in the first two cycles of Phase 2. A greater Fe-oxide content in the oxide-facies gneiss might explain the more muted release of U observed in comparison with the gneiss-transition column during Phase 2 of the sequencing experiment (Supplementary Table S1).
Upon reversal of the order of flow in Phase 3, marked U attenuation from the U-rich gneiss column was observed in both transition-facies and oxide-facies sequencing experiments (Figure 7). In the transition-facies experiment, despite application of gneiss leachate with U concentrations of >100 µg/L into the schist column, schist U effluent steadily remained in the 30 to 40 µg/L range (i.e., similar to concentrations prior to sequencing experiments) with no sign of elevated U breakthrough. Although it is impossible to explicitly discriminate between U mass release within the schist and U sourced from the gneiss feed solution, mass-balance calculations show that at least 83% of the U originating from the gneiss solution was retained if U inputs from the schist are assumed to be zero. If the per-cycle U release rate from within the schist is assumed to have remained unchanged during Phase 3 relative to the rate observed at the end of Phase 1 (cycles 90 to 99, when U concentrations were stable), this estimate increases to >100%. As U retention cannot in fact exceed 100%, these calculations suggest that the 30 to 40 µg/L U in schist effluent reflected an equilibrium control threshold concentration that was maintained even after application of the U-rich gneiss leachate to the schist column.
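The retention bounds quoted in this paragraph follow from a simple per-cycle mass balance; a minimal sketch of that calculation is shown below. The influent and effluent concentrations, cycle volume, and assumed internal schist release are illustrative placeholders, not the experimental values.

```python
def percent_retained(u_in_ugL: float, u_out_ugL: float, volume_L: float,
                     u_internal_release_ug: float = 0.0) -> float:
    """Percent of the influent (gneiss-derived) U retained in the column for one cycle.

    u_internal_release_ug is the U mass assumed to be released from the column's
    own solids during the cycle (0 gives the conservative lower bound).
    """
    mass_in = u_in_ugL * volume_L + u_internal_release_ug
    mass_out = u_out_ugL * volume_L
    return 100.0 * (mass_in - mass_out) / (u_in_ugL * volume_L)

# Hypothetical cycle: 150 ug/L gneiss feed, 35 ug/L schist effluent, 2 L per cycle.
print(percent_retained(150, 35, 2.0))        # lower bound, no internal release (~77 %)
# Assuming the schist kept releasing ~80 ug per cycle, the estimate exceeds 100 %,
# which signals an internal (equilibrium) control on effluent U, as argued above.
print(percent_retained(150, 35, 2.0, u_internal_release_ug=80))
```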
In the oxide-facies Phase 3 sequencing experiment, U attenuation was even greater: U concentrations in the U-rich gneiss-oxide effluent declined from ca. 80 µg/L to ca. 5 µg/L after circulation through the schist-oxide column (Figure 7). Schist-oxide effluent U concentrations were also unchanged during Phase 2 in comparison with the stable concentrations observed toward the end of Phase 1 in that column, similar to the transition-series Phase 3 result (Figure 7). Mass-balance calculations give a lower limit of 94% U retention within the schist-oxide column during Phase 3; this estimate increases to 99% if per-cycle U release from schist solids is assumed to have remained unchanged relative to the rate observed at the end of Phase 1 (cycles 90-99). Additional net U release within the schist during Phase 3 of the oxide sequencing experiment was, therefore, likely negligible. The lack of U breakthrough from the schist-oxide is consistent with its exceptionally high Fe-oxide content relative to other rocks, making it capable of retaining U through sorption despite elevated porewater Ca and alkalinity (Supplementary Table S1).
Shake-flask and SCE of schist-transition and schist-oxide column residues at the end of the sequencing experiments provided an indication of U attenuation via sorption processes. Shake-flask extractable U increased from < 0.014 µg/g prior to column experiments to values ranging from 0.29 to 0.49 µg/g (Supplementary Figure S5). Residue SCE analyses indicated that between 16% and 24% of the U at the end of the experiments was soluble in the chemical reagents MgCl 2 , Na-acetate, and hydroxylamine-hydrochloride which nominally target weakly sorbed, carbonate-bound and exchangeable, and crystalline phases [55]. In both columns, U contents recovered in the MgCl 2 and Na-acetate extractions were highest in the upper 10 cm of column residues and decreased in deeper intervals of the columns, suggesting limited downward transport of U in the column after it received gneiss-oxide U-rich leachate ( Supplementary Figures S5 and S6).
Modeling Sorption during the Application of U-Rich Gneiss Effluent to Schist Columns
The role of sorption in U attenuation during Phase 3 of column-sequencing experiments was further investigated with geochemical models. In these models, the HFO content of the schist-oxide and schist-transition rocks was calibrated from QEMSCAN Fe-oxide results, scaled to reproduce Phase 1 U concentration data (calibration details in Supplementary Materials Section S.4 and presented visually in Supplementary Figures S7 and S8), and yielded 21 g/L HFO and 4.7 g/L HFO, respectively. These HFO abundances correspond to the Fe-oxide content determined by QEMSCAN in these rocks multiplied by calibration factors of 0.08 and 0.15, respectively (Supplementary Materials Section S.4.1), and they are consistent with the expectation of more HFO in more oxidized rock facies (Supplementary Table S1).
To simulate aqueous geochemistry of the schist-oxide sequencing experiment during Phase 3, the calibrated HFO concentration in the schist was allowed to interact with a mixture of influent gneiss-oxide leachate and residual porewater in the schist, the proportions of which were estimated from water mass balance. This solution-HFO mixture was forced to match the pCO 2 and calcite SI observed in schist effluent at each cycle in batch reactions by allowing calcite dissolution/precipitation and CO 2 dissolution/exsolution. These reactions ensured that the measured pH, Ca, and alkalinity concentrations were reproduced in the model (Supplementary Materials Section S.4.2). These simulations indicated that the marked U attenuation observed in Phase 3 of the schist-oxide experiment could be explained through a surface-complexation model that provided a reasonable fit to observed major-ion, pH, and U concentrations [root-mean-square error (RMSE) 0.0011 for U measured vs. U modeled] (Figure 7). A similar modeling approach was used to examine U attenuation during Phase 3 of the transition-facies experiment, except that this model also required consideration of processes related to sulfide-mineral oxidation. Sulfate concentrations were approximately one order of magnitude higher in the schist-transition column experiment in comparison with its schist-oxide counterpart (Figure 1). Sulfate concentrations also showed a steady increase after cycle 60 (Figure 1 and Supplementary Figure S9). These trends in sulfate concentration, along with the higher initial sulfide-mineral content of the schist-transition rock as seen in QEMSCAN analyses (Supplementary Table S1), suggested that sulfide-mineral oxidation was occurring in the schist-transition column as the experiment progressed. Calcium concentrations from the schist-transition effluent were also consistently above those of the gneiss-transition feed solution during Phase 3, suggesting that sulfide-mineral oxidation may have driven calcite dissolution within the schist after the application of gneiss leachate, because of H 2 SO 4 production associated with sulfide oxidation (Figure 7). Therefore, we accounted for calcite dissolution and sulfide-mineral oxidation in a model wherein gneiss influent solution and residual schist porewater were mixed in the presence of HFO, while any excess sulfate in schist-transition effluent above that expected from conservative mixing was attributed to H 2 SO 4 gained from pyrite oxidation (full details in Supplementary Materials Section S.3.2). The HFO-solution-H 2 SO 4 system was forced to match schist effluent calcite saturation indices and pCO 2 by allowing calcite dissolution-precipitation and CO 2 dissolution-exsolution.
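A minimal sketch of the mixing and excess-sulfate bookkeeping described above is given below. The mixing fraction, concentrations, and molar mass handling are illustrative assumptions; the actual simulations were performed in PHREEQC with the full surface-complexation chemistry.

```python
# Conservative mixing of influent gneiss leachate with residual schist porewater,
# then attribution of any sulfate excess to H2SO4 from pyrite oxidation.
MW_SO4 = 96.06  # g/mol

def mix(c_influent: float, c_porewater: float, f_influent: float) -> float:
    """Concentration after conservative mixing (f_influent = fraction of influent water)."""
    return f_influent * c_influent + (1.0 - f_influent) * c_porewater

# Hypothetical cycle (mg/L): sulfate in gneiss influent, schist porewater, and measured effluent.
so4_mixed = mix(c_influent=45.0, c_porewater=250.0, f_influent=0.7)
so4_measured = 180.0

excess_so4_mg_per_L = max(0.0, so4_measured - so4_mixed)
h2so4_mmol_per_L = excess_so4_mg_per_L / MW_SO4  # 1 mol H2SO4 per mol of excess SO4

print(f"conservatively mixed SO4: {so4_mixed:.0f} mg/L")
print(f"excess SO4 attributed to pyrite oxidation: {excess_so4_mg_per_L:.0f} mg/L "
      f"(~{h2so4_mmol_per_L:.2f} mmol/L H2SO4 added to the model)")
```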
Similar to the schist-oxide modeling results, these simulations closely reproduced effluent pH, alkalinity, Ca, and sulfate concentrations in the schist-transition experiment (Figure 7). The decrease in U concentrations observed between column influent (gneiss-transition leachate) and effluent was also reproduced in this model, substantiating the hypothesis that sorption was driving U attenuation during the experiment. However, U concentrations in the model were on average 35% above measured effluent concentrations, and the model fit (RMSE = 0.012) was poorer than in the schist-oxide experiment.
We considered an additional series of simulations to assess whether the oxidation of sulfide minerals in the schist-transition column could have led to growth in HFO over time (as per Reaction (1)), and thus increased U sorption capacity during the experiment. Quantifying HFO growth as pyrite oxidation progresses is highly complex given the evolution in surface area/volume and the possible Fe-oxide phase transformation and recrystallization reactions that occur with ageing [61]. Growth in HFO was modeled by assuming that the rise in sulfate concentrations in the schist-transition column between cycles 60 and 99, which was highly linear (Supplementary Figure S9), translated to proportional and linear growth in HFO availability over time. By applying an HFO growth rate that was 1/100th the rate of per-cycle sulfate concentration increase between cycles 60 and 99, and assuming that this growth rate remained constant during Phase 3, the PHREEQC model provided a better match to experimental U data (RMSE = 0.012) (Figure 7). These simulations suggest that addition of Fe(III)-(oxyhydr)oxides during oxidation of this rock might have led to increasing U sorption capacity, while this effect was unlikely in the schist-oxide experiment where the lack of sulfide-mineral weathering precluded HFO genesis.
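The HFO-growth assumption described above can be expressed in a few lines; the calibrated starting HFO value is taken from the calibration reported earlier, but the sulfate slope and the unit interpretation of the 1/100th proportionality are assumptions made only for illustration.

```python
def hfo_with_growth(hfo_initial_gL: float, so4_slope_mgL_per_cycle: float,
                    cycles_elapsed: int, gL_hfo_per_mgL_so4: float = 0.01) -> float:
    """HFO (g/L) after linear growth tied to the per-cycle rise in sulfate.

    gL_hfo_per_mgL_so4 = 0.01 expresses the '1/100th of the sulfate rate' assumption
    as g/L of HFO added per mg/L of sulfate increase (one possible interpretation).
    """
    return hfo_initial_gL + gL_hfo_per_mgL_so4 * so4_slope_mgL_per_cycle * cycles_elapsed

# Hypothetical: 4.7 g/L calibrated schist-transition HFO, sulfate rising ~1.5 mg/L per
# cycle (the linear cycle 60-99 trend), 40 cycles of Phase 3.
print(hfo_with_growth(4.7, 1.5, 40))   # -> 5.3 g/L of HFO available for U sorption
```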
Simulating the Effect of Pyrite-Oxidation/Calcite-Dissolution Reactions on U Concentrations in Neutral-Rock Drainage (NRD)
Column experiments produced leachate with lower dissolved Ca and sulfate concentrations and pCO 2 than those which commonly occur in full-scale waste-rock storage facilities (WRSF) that contain carbonate and sulfide minerals. Higher rock/water ratios and longer water residence times in NRD-producing WRSFs can yield Ca and sulfate concentrations in the hundreds and thousands of mg/L, respectively [23,[39][40][41][42][43]. Under these conditions, gypsum (CaSO 4 ·2H 2 O) precipitation can limit the concentrations of Ca. Additionally, pCO 2 can reach up to two orders of magnitude above atmospheric pCO 2 in WRSFs because of carbonate-mineral dissolution and limitations on gas exchange with the atmosphere [44,[62][63][64][65][66]. Given the dependence of U sorption on Ca concentrations and pCO 2 [16,19,20,67], we assessed how a U-HFO system would respond in hypothetical scenarios wherein pyrite oxidation and carbonate-mineral dissolution reactions progressed to a higher degree, i.e., to the point at which Ca concentrations became controlled by gypsum mineral saturation.
These scenarios correspond to a theoretical sulfide- and carbonate-rich oxidative weathering setting without any effects of transport (e.g., mixing, dilution), reaction kinetics, or other site-specific considerations (temperature, hydrology, geochemical heterogeneity, particle size or facility construction and design). Simulations were run in PHREEQC in a series of batch reactions in which a U-HFO system was titrated by incremental steps of FeS 2 oxidation with infinite access to atmospheric O 2 and carbonate-mineral dissolution until Ca (aq) concentrations became limited by gypsum saturation. The HFO and initial U(VI) abundance used in these simulations were those of the gneiss-transition column, because this rock released the most U in column experiments (Figure 5). The HFO of this rock was calibrated in the same way as described above for the schist sorption model (HFO calibration results in Supplementary Materials Section S.4.1). The HFO-U system was equilibrated with leachate chemistry representative of steady-state concentrations in the gneiss-transition column (i.e., at cycle 89 of the experiment) prior to pyrite and calcite titration simulations.
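The titration logic of these batch simulations can be sketched outside PHREEQC as a simple loop. Everything numeric below (step size, starting water, the crude activity treatment) is a placeholder, since the real calculation lets PHREEQC equilibrate the full pyrite-calcite-HFO-U system at every step.

```python
import math

LOG_KSP_GYPSUM = -4.58  # CaSO4.2H2O at 25 degrees C (approximate)

def gypsum_si(ca_mol_L: float, so4_mol_L: float) -> float:
    # Crude SI: concentrations used in place of activities, so saturation appears
    # at lower Ca than the full PHREEQC result (>580 mg/L in the simulations above).
    return math.log10(ca_mol_L * so4_mol_L) - LOG_KSP_GYPSUM

# Hypothetical starting water (mol/L), loosely based on a dilute NRD leachate.
ca, so4 = 1.0e-3, 0.5e-3
step = 5.0e-4           # mol/L of FeS2 oxidized per titration step

for n in range(1, 200):
    so4 += 2 * step     # FeS2 + 15/4 O2 + 7/2 H2O -> Fe(OH)3 + 2 SO4^2- + 4 H+
    ca += 2 * step      # calcite buffers the 4 H+ per FeS2, releasing ~2 Ca2+
    si = gypsum_si(ca, so4)
    if si >= 0:
        print(f"step {n}: Ca = {ca*40.08*1000:.0f} mg/L, SO4 = {so4*96.06*1000:.0f} mg/L, "
              f"gypsum SI = {si:.2f}  -> Ca now capped by gypsum")
        break
```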
Four simulations are presented (Table 2). The base case (Case 1) involved pyrite oxidation in the presence of U(VI) and HFO under atmospheric pCO 2 conditions (10 −3 atm) and with calcite as the carbonate mineral. In Cases 2A and 2B, we replaced calcite with a hypothetical Mg-bearing calcite with the same solubility constant as calcite (log K = −8.48) to investigate whether Mg-uranyl-carbonate complexes could sustain elevated aqueous U after Ca concentrations become controlled by gypsum saturation. In Case 2A the Ca/Mg molar ratio of the Mg-bearing calcite was fixed at 9:1, and in Case 2B this ratio was decreased to 1.5:1 to match the Ca/Mg ratios observed in the gneiss-transition leachates. In Case 4, the effect of CO 2 buildup was investigated by fixing the pCO 2 to 10 −2.7 atm (ca. 2000 ppm).

Table 2. Hypothetical PHREEQC models examining U sorption behavior as a function of pyrite oxidation in a pyrite-carbonate-HFO system initially containing 0.077 mmol of U(VI) and 3.29 g/L HFO (surface area 600 m 2 /g, 40:1 ratio of weak:strong sorption sites). (Table columns: Case; Carbonate Phase; pCO 2 (atm).)

Progressive pyrite oxidation and carbonate-mineral dissolution in these simulations produced NRD conditions (pH 8.15 to pH 7.75) with Ca (aq) reaching >580 mg/L before gypsum saturation was attained (Figure 8). Alkalinity concentrations rose in all simulations to several hundred mg/L (as CaCO 3 ), yet this rise was decoupled from U(VI) desorption. Instead, U desorption directly followed Ca (aq) concentrations, which increased until gypsum saturation was reached (Figure 8). The relationship between Ca and U release corresponds with observations and sorption modeling of the column and field-bin experiments. Reactive-transport processes in actual WRSFs tend to yield effluent with higher sulfate/Ca ratios and lower alkalinity than is presented in these models, owing to the more conservative behavior of sulfate relative to alkalinity and Ca (aq). Nevertheless, under oxidizing conditions and in the concentration ranges of 100 to 580 mg/L Ca and 100 to 400 mg/L alkalinity (as CaCO 3 ), these simulations highlight the role of Ca (aq) in driving U(VI) release through desorption, consistent with kinetic experiments as well as previous laboratory studies of U mobility in aqueous environments [18,19,68].

(Figure 8 caption, fragment: simulation conditions as in Table 2; the HFO compositions shown at bottom right are the respective millimolar sums of all U, DIC, Mg, and Ca species sorbed in each simulation.)
Modeled aqueous U speciation in all simulations was overwhelmingly dominated by CCU complexes (>97%). Cases 2A and 2B (with Mg-carbonate) further demonstrated that when Ca (aq) was controlled by gypsum saturation, U desorption was not exacerbated by Mg 2+ release and formation of Mg-uranyl-carbonate complexes. These complexes constituted <1% of aqueous U while CCU complexes overwhelmingly dominated U speciation even after Ca (aq) concentrations were controlled by gypsum precipitation. Rather than increasing U mobility through Mg-uranyl-carbonate complexation, replacing calcite with a Mg-calcite had the opposite effect of decreasing U mobility because of lower Ca (aq) concentrations than in Case 1 in which pure (Mg-free) calcite was used (Figure 8).
Case 4 shows that at higher pCO 2 , U mobilization is also dependent on Ca (aq) until the point of gypsum saturation and that U sorption is substantially weaker at higher pCO 2 . This result is consistent with laboratory studies that show a narrowing of the uranyl-HFO sorption envelope under elevated pCO 2 and neutral pH conditions (pH ca. 7 to 8.5) [16,19,20,67].
Overall, these models emphasized the role of Ca (aq) and pCO 2 in controlling U mobility under well-buffered and oxidizing conditions.
Conclusions and Implications for U mobility in NRD
Weathering experiments on granite, schist, and gneiss mine wastes indicated that waste rock containing µg/g levels of U can present a source for U release, in particular under NRD conditions. Weathering experiments and geochemical simulations suggested that U release in NRD was more sensitive to dissolved Ca concentrations than to alkalinity. The combined effects of sulfide-mineral oxidation, H 2 SO 4 generation, and carbonate-mineral dissolution generated circumneutral to alkaline-pH drainage and released Ca that promoted formation of weakly sorbing calcium-carbonato-uranyl complexes. So long as the carbonate mineral dissolution rate matches the rate of acid generation, CCU complexation and U mobilization can be sustained. Thus, fresh (unweathered) rocks sustained higher U release rates when exposed to oxidizing conditions because they released more Ca and thus enhanced CCU complexation while also having lower sorption-site availability. Conversely, oxidized rocks effectively attenuated U because of a combination of higher sorption-site availability and lower carbonate and sulfide-mineral abundances. Sequencing experiments indicated that U concentrations could, therefore, be decreased by circulating U-rich NRD into HFO-rich oxidized rocks. Overall, these results indicate that the availability of carbonate and sulfide minerals and HFO can better predict U release than solid-phase U abundances alone. Future studies of U in NRD should also consider the potential for U desorption under higher pCO 2 conditions, such as those often found in full-scale WRSFs.

Supplementary Materials: Supplementary Table S1. Solid-phase composition of rocks used in column and field bin experiments. Supplementary Table S2. Geochemical reactions added to the wateq4f.dat database for PHREEQC simulations [71]. Supplementary Table S3. Uranium redox speciation determination by XANES-LCF using uranyl nitrate and uraninite as U(VI) and U(IV) standards, respectively. Supplementary Table S4. Mineralogy of rocks used in kinetic tests determined by pXRD. Supplementary Figures: Supplementary Figure S1. Schematic diagram of field-bin experiment apparatus. Supplementary Figure S2. Acid-generating potential (calculated from total-sulfur) against carbonate neutralizing potential (calculated from TIC) in drill-core samples. Supplementary Figure S3. Bivariate plots of U against As, Sb, and S by rock type for Coffee drill-core samples from the fresh weathering facies. Supplementary Figure S4. Aqueous U speciation in field-bin leachates. Supplementary Figure S5
Partial Splenic Embolization That Improved Esophageal Varices and Facilitated Antiviral Therapy in a Case of Cirrhosis Due to Hepatitis C
A 57-year-old man with hepatitis C cirrhosis experienced sudden hematemesis and was brought to the hospital via ambulance. He underwent endoscopic variceal ligation for ruptured esophageal varices (EV). His platelet count was reduced to 6 × 10 4 /μl and splenomegaly was observed. Therefore, partial splenic embolization was performed. However, a complicating portal vein thrombus required thrombolytic therapy with warfarin. His platelet count increased to 15 × 10 4 /μl, the EV did not worsen, and there was no complicating hepatocellular carcinoma (HCC). Thus, a planned 24-week course of combined pegylated-interferon (Peg-IFN)-α2a and ribavirin was initiated. Neutropenia occurred during therapy, which led to a reduction in the dose and frequency of Peg-IFN-α2a administration. A sustained viral response (SVR) was obtained after completion of the treatment. The SVR has persisted up to 6 years since the EV rupture. The patient's platelet count remains 15 × 10 4 /μl, the liver function remains normal, the EV have improved, and there is no complicating HCC.
Introduction
Currently, about 85% of hepatocellular carcinoma (HCC) cases derive from hepatitis B virus (HBV) or hepatitis C virus (HCV). As liver fibrosis advances, the frequency of HCC is said to increase. Moreover, with liver cirrhosis (LC), complications tied to poor prognosis may occur and lead to liver failure. If the cause is HBV or HCV, it is important to eliminate the virus and stabilize liver functions via antiviral therapy, if possible. In this case, esophageal varices (EV) complicating hepatitis C cirrhosis ruptured twice. After endoscopic therapy, partial splenic embolization (PSE) was performed for splenomegaly, then thrombolytic therapy was performed for a portal vein thrombus (PVT). Then, combined therapy of intermittent low-dose pegylated-interferon (Peg-IFN) α2a and continuous ribavirin (RBV) was given. This obtained a sustained viral response (SVR). At present, 6 years since the first examination, the patient is doing well with improved platelet count and EV, and no HCC.
Case Presentation
The patient here was a 57-year-old man. In April 2008 he experienced sudden hematemesis and was brought to the hospital by emergency transport. An upper gastrointestinal endoscopy showed the bleeding had stopped naturally. He was diagnosed with EV (F2RC2) and endoscopic variceal ligation (EVL) was performed immediately (Fig. 1-A, B). After this he stopped coming to the hospital of his own accord, but in July the EV again hemorrhaged and a second EVL was performed. He was then admitted to the hospital for a PSE to address low platelet count and prevent worsening of the EV. The patient at age 16 years received a blood transfusion after being injured in a traffic accident. At age 43 years a local doctor suspected LC but the patient did not come to the hospital regularly. He had a history of alcohol consumption of 500 ml per day of beer for 35 years. He had no smoking history. The Table shows examination findings from the time of admission, which led to a diagnosis of hepatitis C cirrhosis, genotype 1b. Contrast-enhanced computed tomography (CE-CT) showed atrophy of the liver, an irregular surface and blunt margin, as well as splenomegaly and collateral circulation, which are compatible with LC. No ascites were found (Fig. 2-A, B). After admission, the low platelet count was determined to be due to hypersplenism accompanying splenomegaly, and the EV to derive from the left and posterior gastric veins. According to the treatment guidelines of the Ministry of Health, Labour and Welfare, the indication criterion for PSE is a platelet count of less than 50,000/μl. In this case the count was 60,000/μl, but PSE was nevertheless judged to be indicated and was performed, with a final infarction rate of about 80%. The platelet count then gradually improved, but a PVT was found in the right portal vein branch. This was thought to be due to PSE (Fig. 2-C), so in October 2010 thrombolytic therapy using warfarin was initiated. This took a long time, but after 10 months the PVT had completely disappeared (Fig. 2-D). At this point the platelet count had risen to 15 × 10 4 /μl, the EV had not worsened, and there had been no incidence of HCC, so a planned 24-week course of Peg-IFN-α2a and RBV began in November 2011. Eight weeks later HCV-RNA turned negative, but during the course neutropenia was observed, so the Peg-IFN-α2a dose was reduced to 30 μg (usually 90 μg) and it was administered intermittently. Treatment finished in December 2012, obtaining an SVR (Fig. 3). HCV-RNA has not reappeared. Six years after hematemesis occurred (August 2014), the patient's platelet count was 15 × 10 4 /μl, ALT 13, TB 0.8, there is great improvement of the EV (Fig. 1-C, D), and there is no HCC. In this case, multidisciplinary therapy including PSE was successful for hepatitis C cirrhosis.

Fig. 2-C: PVT seen in the right portal vein branch; areas of poor contrast enhancement in the spleen from PSE. Fig. 2-D: CE-CT after the warfarin course; the PVT has disappeared from the right portal vein branch, and artifacts of coils from PSE are observed in the spleen.
Discussion
Prognostic factors for cirrhosis include (1) esophageal and gastric varices, (2) uncontrollable hepatic coma, (3) hyponatremia, (4) hepatorenal syndrome, and (5) HCC. In a majority of cases of HCC, there is a background of chronic liver disease caused by HBV or HCV. In particular, patients chronically infected with HCV make up 60%, so HCC onset is often discovered in the setting of LC. Antiviral therapy is important for preventing these complications. Ever since interferon was shown to completely eliminate the virus, antiviral therapy has undergone major changes in Japan over more than 20 years. Combined Peg-IFN and RBV have long been the standard therapy for difficult-to-treat genotype 1b infection with high viral loads, but the SVR rate with this regimen is less than 50%. In November 2011, the protease inhibitor telaprevir (TVR) became combinable with Peg-IFN and RBV in Japan. 1) Later, simeprevir appeared as an alternative to TVR. These developments increased the SVR rate to over 80%. Further, the recent use of IFN and the development of direct-acting antivirals have made it possible to obtain similar SVR rates in patients with low platelet or neutrophil levels. 2) If the virus is not eliminated, it causes portal hypertension leading to various extrahepatic complications. Hypersplenism is one such complication; it leads to pancytopenia and a worsening of collateral circulation, and negatively impacts the prognosis of LC 3,4) . PSE was first introduced by Spigos et al. in 1979 5) , and a recent study has suggested that PSE improves outcomes with respect to thrombocytopenia and portal hypertension 6) .
Yet, only retrospective case series have been reported on the effectiveness and safety of IFN therapy after PSE for thrombocytopenia accompanying hepatitis C cirrhosis. 7,8) Kondo et al. reported that this approach is safe, but the number of such reports remains insufficient. 9) In our experience in this case, although PVT occurred due to PSE, thrombocytopenia improved and combined therapy of Peg-IFN and RBV, which was mainstream at the time, could be administered. Although neutropenia led to the dose being lowered and administered intermittently, treatment was completed without any major complications. The SVR has continued up to the present, about 2 years. As improvements in platelet count and EV were observed more than 6 years after the initial examination in this case, this suggests the importance of close observation during treatment and of not giving up on multidisciplinary therapy. We also expect that, should HCC occur in the future, it could be treated without hindrance.
Assessing the implementation of physical activity-promoting public policies in the Republic of Ireland: a study using the Physical Activity Environment Policy Index (PA-EPI)
Background Government policy can promote physical activity (PA) as part of a multilevel systems-based approach. The Physical Activity Environment Policy Index (PA-EPI) is a monitoring framework which assesses the implementation of government policy by drawing on the experience of national stakeholders. This study is the first to assess the extent of policy implementation in the Republic of Ireland using the PA-EPI tool, and to provide information on how policy implementation can be improved, with the intention of maximizing its impact on population levels of PA. Methods This mixed-methods research study, comprising eight steps, was carried out in 2022. Information documenting the evidence for implementation of PA policy, across all 45 PA-EPI indicators, was collected via systematic document analysis, and validated via survey and interview with government officials. Thirty-two nongovernment stakeholders rated this evidence on a five-point Likert scale. Aggregated scores were reviewed by stakeholders who collectively identified and prioritized critical implementation gaps. Results Of the 45 PA-EPI indicators, one received an implementation rating of ‘none/very little’, 25 received a rating of ‘low’ and 19 received a ‘medium’ rating. No indicator was rated as fully implemented. The indicators that received the highest level of implementation related to sustained mass media campaigns promoting PA and PA monitoring. Ten priority recommendations were developed. Conclusions This study reveals substantial implementation gaps for PA policy in the Republic of Ireland. It provides recommendations for policy action to address these gaps. In time, studies utilizing the PA-EPI will enable cross-country comparison and benchmarking of PA policy implementation, incentivizing improved PA policy creation and implementation. Supplementary Information The online version contains supplementary material available at 10.1186/s12961-023-01013-6.
Introduction
Physical activity (PA) contributes to reduced mortality [1], improved mental health outcomes [2,3] and a lower burden of disease from noncommunicable diseases [4] (NCDs) and from infectious diseases [5]. PA participation has also been linked to social outcomes such as increased happiness [6] and social capital [7]. Hence, enabling people to engage in PA provides an opportunity for people to exercise a greater element of control over their own health and well-being.
In light of these health and social benefits, the WHO has published global targets seeking to promote PA [8]. The Global Action Plan on Physical Activity (GAPPA) aims for a 15% relative reduction in inactivity by 2025 [8]. However, studies of trends in PA levels reveal that inactivity levels have remained stubbornly unchanged thus far in the 21st century [9,10]. This suggests that if these trends continue, the GAPPA target will not be met [11]. Therefore, new strategies for supporting healthy PA behaviours are required [9].
It has been argued that strategies targeting the so-called 'upstream' barriers to PA should be pursued [12]. In essence, this requires a strategy of building and implementing public policy [13,14] in the domain of PA. Theoretical support for this idea comes from ecological models, widely used in public health research [15], which highlight the importance of policy action (for example, Sallis and colleagues [16]). Empirical support comes from review studies which identify social and environmental factors which influence population PA levels [17][18][19][20]. These findings suggest that policy action is necessary to effect changes to these environments to empower people to engage in healthier PA behaviours. Policy actions have been defined by Kelly and colleagues as 'actual options selected by policy-makers. Public policy actions are specific actions put into place by any level of government or associated agencies to achieve the public health objective. They may be written into broad strategies, action plans, official guidelines/notifications, calls to action, legislation or rules and regulations. A policy action may have its own exclusive policy document or may be part of a larger document'. [21] The WHO began to issue PA policy guidance documents in the mid-2000s [22], and the number of policies promoting PA has increased [23], with over 90% of countries globally having a national PA policy, though evidence exists that PA policy is not effectively operationalized [24,25].
Alongside the rise in national PA policies, there has been a concomitant rise in the number of scientific publications concerning PA policy since the mid-2000s. This indicates the development of PA policy research as a scientific field [23,26]. The maturing of PA policy research is also evidenced by the development of tools such as the WHO's Health Enhancing Physical Activity Policy Audit Tool (HEPA PAT) [27], which facilitates comparative policy research, or the Comprehensive Analysis of Policy on Physical Activity (CAPPA) framework which categorizes PA policy research according to purpose of analysis, policy level under analysis, policy sector, type of policy, stage of the policy cycle and scope of the analysis [28].
Rütten and colleagues [26] highlight that there have been relatively few studies into how policy-making processes influence PA policy interventions. An example of how the policy process can influence PA intervention is through the extent of policy implementation (in essence, the processes by which policies are put into effect [29] [p. 12]). Research is needed to examine the extent to which policies that exist on paper are implemented in practice.
The Physical Activity Environment Policy Index (PA-EPI) is a monitoring framework recently developed to assess government policies and actions for creating a healthy PA environment (defined as the 'context, opportunities and conditions that influence one's PA choices and behaviours' [30][p. 4]). The process of developing and validating the PA-EPI framework is described by Woods and colleagues [30]. The PA-EPI is conceptualized as a two-component 'policy' and 'infrastructure support' framework. The two components comprise eight policy and seven infrastructure support domains. The policy domains are education, transport, urban design, healthcare, public education (including mass media), sport for all, workplaces and community. The infrastructure support domains are leadership, governance, monitoring and intelligence, funding and resources, platforms for interaction, workforce development, and health-in-all policies. Forty-five 'good practice statements' (GPS), or indicators of ideal good practice within each domain, conclude the PA-EPI. The eight-step process of conducting the PA-EPI will allow countries to identify areas of strength and weakness in the implementation of their national PA-promoting policies, and potential actions needed to address critical implementation gaps. The PA-EPI results will provide data and examples of good practice in PA policy implementation, and it is envisaged that, in time, these examples will evolve into benchmarks as countries share knowledge and expertise on effective implementation processes.
The Republic of Ireland is the first country to have the extent of the implementation of its PA policies assessed using the PA-EPI. According to the Global Observatory for Physical Activity, less than half (46%) of the population of the Republic of Ireland engages in sufficient PA to meet health recommendations, and inactivity contributes to 8.4% of all deaths in Ireland [31]. Identifying implementation gaps in the PA policy response is part of the solution to increasing the proportion of the population meeting the PA guidelines, and to reducing the impact of inactivity. This study has two aims: the first is to identify critical implementation gaps by assessing the extent of PA policy implementation in the Republic of Ireland, and the second is to identify and prioritize actions that can strengthen policy implementation in Ireland.
Study design
This study is a sequential process, which combines both qualitative and quantitative methods. The INFORMAS network [32] developed the Food-EPI, on which the PA-EPI is based. They also designed a detailed eight-step process for completion of the Food-EPI, which has been accomplished in 40 countries worldwide [33]. Figure 1 shows this eight-step process, which was adapted for completion of the PA-EPI [30].
In brief, steps one to four involve the creation of an evidence document and its validation with government officials (see details below). Once complete and validated, quantitative data collection aimed at assessing the extent of implementation of the GPSs, using the evidence document, is undertaken by nongovernment PA stakeholders (step 5, see details below). From this ratings data, critical gaps in the implementation of PA policy are identified (step 6). The final two steps involve making recommendations for implementation actions (step 7) and dissemination of the PA-EPI results (step 8). Ethical approval for this study was necessary as steps 4, 5 and 7 required data collection from human subjects. Ethical approval was obtained from the Research Ethics Committee of the Faculty of Education and Health Sciences at the University of Limerick (2022_02_01_EHS).
The recruitment of a coalition of national stakeholders, comprising two mutually exclusive groups (government officials and a panel of nongovernment PA stakeholders), is an important part of the PA-EPI process. The 'government officials' group includes civil servants affiliated with governmental departments and high-ranking employees of state agencies. The inclusion of government officials is necessary to ensure that information on PA policy within the PA-EPI evidence document is comprehensive and accurate. The nongovernment PA stakeholders include researchers with knowledge of the PA environment and practitioners working for organizations promoting PA. The inclusion of nongovernment stakeholders supports engagement of civil society with the PA policy process (Fig. 2).
Study procedure
To conduct the PA-EPI process in the Republic of Ireland, the eight steps briefly described above were followed.
Step one: analysing context
The first step of the process is to analyse the context of the country under study and decide which of the GPSs to utilize in the policy assessment and to begin drafting the evidence document. Some indicators of the PA-EPI may not be relevant for jurisdictions where there is substantial decision-making power devolved to subnational levels of government.
The Republic of Ireland is a unitary state with two levels of government, national and local level, established in accordance with Article 28A of the Constitution of Ireland. However, the local level of government has responsibility for a limited number of functions and its autonomy from national government is amongst the most limited in the EU [34]. Due to the level of involvement by national government for all indicators, the full list of PA-EPI indicators was retained without adaptation.
Step two: collecting relevant information
Step two involves collecting examples of implemented PA policies for each of the PA-EPI indicators from anywhere in the world. These were sourced from an analysis of WHO documents (for example, [35]), the academic literature and the PA policy experts consulted as part of the PA-EPI development process [30]. These examples are presented in the PA-EPI evidence document as Best Practice Exemplars (BPEs). They allow for comparison with the evidence of implementation in the country being studied, in this case the Republic of Ireland.
To collect national evidence of PA policy implementation in Ireland, several methods were used. Searches for evidence of implementation were undertaken in 2022. The first method was an audit of Ireland's policy context for PA using the WHO Health Enhancing Physical Activity Policy Audit Tool (HEPA PAT) [36]. The HEPA PAT identified national policies pertinent to the promotion of PA, and provided information on the key policy documents (for example, 'Get Ireland Active': Ireland's National PA Action Plan) and agencies (for example, Sport Ireland, Healthy Ireland) tasked with policy implementation. The second method supplemented the HEPA PAT evidence with internet searches of webpages of government departments and state agencies. A third method, which occurred simultaneously with internet searches, was extensive snowballing using the documents already identified. This involved reference checks of the included documents as well as searches using the titles of the documents to identify related documents such as action plans or implementation reports. Details of how the three methods were combined are displayed in Additional file 1.
The following was considered suitable evidence for inclusion in the evidence document: excerpts from formal written policy documents (including statutes, guidelines and curricula), information from the websites of government departments or state agencies, information from websites identified with initiatives or programmes cited in written documents, and academic literature describing PA policy implementation in the Republic of Ireland.
The following evidence was excluded from the evidence document: evidence of policies of local government and policies of nongovernmental bodies unrelated to public policy. Decisions on whether to include evidence in the evidence document were also informed by the wording and scope of the indicators, which is outlined in the evidence document.
Step three: evidence-grounding the actions
The third step was to extract information from the policy documents identified to populate the 'Evidence of implementation in Ireland' sections of the PA-EPI evidence document. Documents were scanned for lists or tables (for example, lists of actions) and keyword searches were performed within the documents based on the wording of each GPS. The evidence of implementation identified was summarized in short paragraphs and presented as tables for each of the 45 GPSs. As per protocol, draft one of the Irish PA-EPI Evidence Document was reviewed repeatedly by the research team before being prepared for validation by government officials.
Step four: validating evidence with government officials
A purposive sample of government officials from different departments and agencies of the civil service was identified based on their roles and/or prior collaborations with the PA research community in Ireland. The government officials were civil servants who had acted as representatives for their departments and agencies at PA events and whose role was identified from publicly available information. The research team reached out to the government officials via email and asked them to ensure the completeness of the evidence document. The email contained a link to an online questionnaire developed using Qualtrics software. Participants were provided with information about the study by a video embedded on the first page of the questionnaire, followed by a request for informed consent. If participants required further information, researchers answered their questions over a phone conversation. The questionnaire presented the government officials with the 45 GPSs of the PA-EPI, each on separate pages, above the evidence of implementation corresponding to the GPSs. Beneath the evidence of implementation was a questionnaire item which allowed the government officials to indicate amendments that needed to be made to make the evidence of implementation comprehensive. An example of the questionnaire layout is provided in Fig. 3a and b. Six government officials were contacted and four (two male, two female) provided feedback on the evidence document, resulting in 72 individual comments being made. The research team reviewed the comments, and any relevant information identified as missing was carefully considered and added to a final draft of the evidence document.
Step five: rating the government policies and actions using the PA-EPI
The fifth step was to assess the extent of implementation of the PA-EPI GPSs in the Republic of Ireland. A similar process to that used in the validation step was utilized for acquiring informed consent from participants in this step. Nongovernment PA stakeholders were identified either from their roles as researchers who have published on the topic of PA in the Republic of Ireland or from their roles as PA promoters operating in Ireland. Nongovernment stakeholders were recruited via email and asked to complete an online questionnaire. Thirty-two individuals were contacted: 13 were academics (41%) and 19 of the nongovernment stakeholders were practitioners (59%). Practitioners included persons with a role promoting PA for local government or for nongovernmental organizations. Sixteen nongovernment stakeholders (50%) rated the extent of implementation of the GPSs of the PA-EPI in Ireland. Participants who accessed the questionnaire were asked to rate the evidence of implementation for each of the GPSs on a five-point scale. Participants were also provided with a 'cannot rate' option and the opportunity to comment on the implementation of each of the GPSs. An example of the format of the questionnaire is provided in Fig. 4.
Step six: weight, sum and calculate rating scores
The rating scores were downloaded by the research team and the median rating was calculated for every indicator. The median was preferred over the mean as a measure of central tendency. The computed median scores were then utilized to categorize the extent of implementation as 'very little/none', 'low', 'medium' or 'high'. Interrater Reliability (IRR; Gwet's AC2 coefficient) was calculated for the implementation ratings using Agreestat software. The IRR for the implementation ratings was 0.554 (95% CI 0.495-0.612; percentage agreement 87%). The comments provided by the nongovernment stakeholders were also downloaded and implementation recommendations were extracted from these comments.
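The scoring arithmetic in this step is straightforward; the sketch below shows one way it could be implemented. The rating lists and the cut-points used to map a median onto the four implementation categories are illustrative assumptions (the published protocol defines the exact banding), and 'cannot rate' responses are simply excluded before calculation.

```python
from statistics import median

def implementation_category(ratings: list[int]) -> tuple[float, str]:
    """Map a list of 1-5 Likert ratings to a median and an implementation band.

    The band cut-points below are assumptions for illustration; 'cannot rate'
    answers should be dropped before calling this function.
    """
    m = median(ratings)
    if m < 2:
        band = "very little/none"
    elif m < 3:
        band = "low"
    elif m < 4:
        band = "medium"
    else:
        band = "high"
    return m, band

# Hypothetical ratings from 16 stakeholders for two indicators.
indicators = {
    "Mass media 1": [4, 3, 4, 3, 3, 4, 2, 3, 4, 3, 3, 4, 3, 3, 4, 3],
    "Healthcare 2": [2, 2, 3, 2, 1, 2, 2, 3, 2, 2, 1, 2, 2, 3, 2, 2],
}
for name, r in indicators.items():
    print(name, implementation_category(r))   # -> 'medium' and 'low', respectively
```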
Step seven: qualify, comment and recommend
The seventh step involved a 1 day workshop to recommend policy implementation actions. All stakeholders were invited to attend in-person or online through Microsoft Teams. Six nongovernment stakeholders and two government officials participated in the workshop. Attendees were presented with the median rating scores for the implementation of the GPSs in the Republic of Ireland and the implementation recommendations extracted from the comments in the previous phase and asked to contribute further recommendations. Attendees debated the wording of implementation recommendations. Some implementation recommendations were removed and wording of other implementation recommendations was revised by the research team considering attendees' recommendations. The revised list of implementation recommendations was circulated to all workshop attendees by the research team via email for confirmation. Following the finalization of wording, a questionnaire was sent around to all nongovernment stakeholders asking them to select five implementation recommendations from the policy domains and rank them based on the criteria of importance, achievability and equity. These criteria are an adaptation of the criteria described by Vandevijvere and Swinburn [37] (in a protocol developed to guide researchers on how to use the Food-EPI, mentioned previously). These criteria are displayed in Additional file 3. Participants were also asked to select five implementation recommendations from the infrastructure support domains and rank them based on importance and achievability. Fifteen nongovernment stakeholders (47%) voted on the implementation recommendations generated at the workshop. The scores for importance and achievability were inverted (so the top ranked recommendation from an individual rating received a score of 5 and the fifth ranked recommendation received a score of 1) and summed together. The five implementation recommendations with the highest summed score were selected as the 'priority' implementation recommendations. The process of summation was conducted for recommendations on both the 'policy' and 'infrastructure support' components of the PA-EPI, yielding a total of ten priority implementation recommendations.
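The prioritization arithmetic (rank inversion and summing across voters) can be sketched as below. The ballots are invented placeholders, a single ranked list per voter is used rather than the separate importance and achievability rankings of the actual exercise, and ties are left unhandled.

```python
from collections import defaultdict

def prioritize(ballots: list[list[str]], top_n: int = 5) -> list[tuple[str, int]]:
    """Each ballot is an ordered list of up to 5 recommendation IDs (rank 1 first).

    Rank 1 earns 5 points and rank 5 earns 1 point; scores are summed across voters
    and the top_n highest-scoring recommendations are returned.
    """
    scores: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        for rank, rec in enumerate(ballot[:5], start=1):
            scores[rec] += 6 - rank      # invert the rank into a score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Hypothetical ballots from three stakeholders over policy-domain recommendations.
ballots = [
    ["P3", "P1", "P7", "P2", "P5"],
    ["P1", "P3", "P2", "P7", "P4"],
    ["P3", "P2", "P1", "P6", "P7"],
]
print(prioritize(ballots))   # e.g. [('P3', 14), ('P1', 12), ('P2', 9), ...]
```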
Step eight: translate results for government and stakeholders
An in-person dissemination workshop was conducted, and all participants were invited to attend. The workshop was a joint event organized in collaboration with other research teams involved in health promotion research in Ireland, including research utilizing the Food-EPI tool. The workshop featured guest speakers with expertise in researching healthy diet and PA promotion and a panel discussion between prominent food and PA policy stakeholders. The research team presented research underpinning the development of the PA-EPI and the implementation and prioritization findings. A dissemination report presenting the findings was published and copies were provided to all workshop attendees. Electronic versions of the dissemination materials were uploaded to the internet on a website associated with the project (www.jpi-pen.eu).
Results
The process generates three outputs: (i) the evidence document that contains information describing the implementation of PA-promoting public policy in Ireland, (ii) an implementation scorecard presenting the rating of the implementation status of PA policy in the Republic of Ireland (according to expert opinion) and (iii) a list of implementation actions for improving the healthiness of the PA environment in the Republic of Ireland. The evidence document is available in Additional file 2, the results of the implementation rating exercise are described in Sect. 3.1 'Level of implementation of physical activity environment policy in Ireland' and the prioritization exercise is described in Sect. 3.2.
Level of implementation of physical activity environment policy in Ireland
The 'policy' subdomains of the PA-EPI framework contain 21 of the 45 GPSs. Twelve of the 21 GPSs (57%) received a low implementation score and 8 (38%) received a medium implementation score. One indicator (5%) received a 'very little/none' implementation rating from the expert panel. Three of the policy domains, Transport, Urban Design and Healthcare, were rated as having a 'low' level of implementation on every indicator. Two of the policy domains, Community and Sport, were rated as having a 'medium' level of implementation on every indicator. These results are displayed in Fig. 5.
The 'infrastructure support' subdomains contain 24 of the 45 GPSs. Thirteen of the GPSs received a low score and 11 received a medium implementation score. One of the infrastructure support domains, Health in all Policies, was rated as having a 'low' level of implementation on every indicator, and one, Platforms for Interaction, was rated as having a 'medium' level of implementation on every indicator. These results are displayed in Fig. 6.
None of the indicators received the highest categorization of implementation status. The highest scoring indicator in the policy domains was the first indicator in the 'Mass Media' subdomain, which pertains to public policies for sustaining mass media campaigns. The action of promoting PA through media campaigns is mentioned in several policy documents, including the National Sports Policy [38] and NPAP [39]. Further, the Republic of Ireland has various media campaigns that promote PA, including the 'Let's Get Back' campaign, which encouraged the Irish public to be physically active during the COVID-19 emergency. The highest scoring indicator in the infrastructure support domains was the first indicator in the 'Monitoring and Intelligence' subdomain, which pertains to the monitoring of PA levels across the life course. The Republic of Ireland has several surveys which collect data on PA levels, focusing on different stages of the life course. The Children's Sport Participation and Physical Activity [40] (CSPPA) study, for example, examines sport and PA participation in children aged 10-19 years, while the Irish Longitudinal Study on Ageing [41] (TILDA) includes data collection on PA in an older population. However, the other indicators in the monitoring and intelligence subdomain (that is, the monitoring of PA environments, the monitoring of links between PA outcomes and NCDs, the monitoring of the outcomes of PA policy and the monitoring of inequality-related determinants of PA) all received a low rating.
The low implementation scores for the indicators related to Transport, Urban Design, Healthcare and Health in all Policies identify a need for heightened efforts to address the implementation gaps in these domains.
Prioritization of implementation actions
The top five implementation recommendations for policy and infrastructure support, based on importance and achievability, are presented in Tables 1 and 2. Regarding the policy domains, the expert panel recommended that positions with responsibility for promoting PA be established in school and in health and social care settings. They also recommended increasing the capacity of health and social care staff to promote PA, replacing standalone PA campaigns with a long-term coordinated effort to promote PA opportunities in the media, and establishing minimum inclusion criteria to be met before applications for the Sports Capital grant are considered.
Regarding the infrastructure support domains, the panel recommended increased long-term funding for PA programmes to support the monitoring of programme outcomes. They also recommended ensuring representation across the lifespan, genders and socioeconomic backgrounds in decision-making processes and dissociating physical activity from unhealthy brands. The most highly rated recommendation, both in terms of importance and achievability, was to update the Irish PA guidelines to reflect recent advances in PA guideline development (Figs. 7, 8). Table 1 Implementation actions to support healthy physical activity environments relating to the policy domains
Leadership in schools [EDU8]
Allocate a post of responsibility for a physical activity lead in every school, at both primary and post-primary levels
Coordinated media campaign [MEDI1]
Foster cross-governmental sustainable resourcing to replace standalone individual physical activity campaigns with a comprehensive, coordinated, multisector long-term multimedia/mode campaign using clear evidence informed consistent messaging over several years
Minimum inclusivity standards [SPOR6]
Establish a set of minimum inclusion and accessibility standards to be incorporated into the scoring system of the Sports Capital and Equipment Programme
Connected community programmes [COMM2]
Improve connection between communities and healthcare services in regard to physical activity participation by increasing the resourcing and/or staffing, with a go-to person for physical activity in the community
Capacity of healthcare staff [HEAL2]
Build capacity of staff across health and social care settings to promote awareness of physical activity benefits and opportunities Table 2 Implementation actions to support healthy physical activity environments relating to the infrastructure support domains
Update guidelines [LEAD1]
Update the Irish Physical Activity Guidelines in line with revised international guidelines
Representation in decision-making [GOVER3]
Have representation across the lifespan, genders and socioeconomic backgrounds in the development and decision-making processes related to physical activity policies
Funding for outcome monitoring [FUND1]
Provide long-term funding for physical activity programmes to support tracking of evidence, outcomes and implementation
Research programme for special populations [GOVER1]
Implement a physical activity research and monitoring programme specific to special populations, in particular for disabled persons
Dissociate from unhealthy products [GOVER2]
Dissociate physical activity from unhealthy products and brands promoting unhealthy products
Discussion
This study is the first to assess the extent of implementation of government policy actions which improve the PA environment. The process of assessing government actions generated an evidence document providing an overview of the government actions in place which supported PA, revealed areas of relative strength as well as gaps in implementation, and provided priority recommendations for strengthening PA policy implementation in the future. The evidence document was praised by stakeholders who participated in the study for providing them with an overview of the available policy documents in Ireland, which is an important contribution of the work in and of itself.
Complementarity of the PA-EPI with other policy research resources
The process of generating the evidence document was supported by previous work using the HEPA PAT. The HEPA PAT has been recommended as a comprehensive tool for performing PA policy analysis [23] and it has been utilized in other European countries to conduct analyses of PA policy. However, reviewers of extant PA policy tools have noted that the PAT is 'more suitable for an audit than an assessment' [23] (p. 9) and further, that researchers should look into the possibility of complementary tools. This study highlights the complementarity of the PA-EPI tool with other instruments available to PA policy researchers, such as the HEPA PAT. It also demonstrates the additional benefit of using the PA-EPI for benchmarking and analysing the state of policy implementation. PA-EPI studies can provide unique information on implementation gaps that should be targeted to develop supportive PA environments.
Implementation strengths and gaps
The results of this study reveal that the infrastructure support domains were judged to be better implemented than the policy domains. This is a nearly universal pattern for studies utilizing the Food-EPI [42,43]. Further studies will reveal whether a similar pattern emerges for the PA-EPI as well and hopefully provide insight into the dynamics underlying these patterns. The implementation status of the indicators suggests that the Republic of Ireland can build on its relative strengths in the Mass Media and Monitoring and Intelligence domains. However, the results of the study also suggest that there are implementation gaps regarding Transport, Urban Design, Healthcare and Health in all Policies. The low implementation ratings in the Healthcare domain appear to corroborate previous research on PA promotion by healthcare professionals in Ireland. Cantwell and colleagues [44] reported that most healthcare professionals in Ireland did not provide cancer patients with PA advice that aligned with guidelines, while Cunningham and O'Sullivan [45] report that only 30% of healthcare professionals in Northern Ireland and the Republic report receiving adequate training for prescribing PA to older adults. The Republic of Ireland has a policy, Making Every Contact Count (MECC), for promoting PA, among other lifestyle risk factors, in healthcare settings. The findings of this study, and others cited above, suggest that the implementation of MECC has not been a success. This may be explained, at least in part, by the fact that an internal report commissioned by the HSE found that the health service lacked organizational readiness for this intervention prior to its enactment. It is unsurprising, therefore, that the expert panel recommended that increasing the capacity of staff across health and social care settings to promote awareness of physical activity and better connecting community PA programmes and healthcare be implemented as a priority.
Prioritization
The panel of nongovernment experts prioritized actions in the policy and infrastructure support components of the PA-EPI. In the policy domains, the panel recommended implementation actions in the Education, Healthcare, Mass Media, Community and Sport domains. A difference between the PA-EPI and the Food-EPI is that policy domains of the PA-EPI arguably represent a greater number of independent health promoting settings than the Food-EPI. There is a potential equity concern as targeting different settings may have disproportionate benefits for different demographics. A potential method for promoting equity is to limit the number of actions prioritized per domain. Some of the highest prioritized actions corresponded to indicators that had a relatively strong implementation rating. An implementation recommendation that received a high prioritization rating was the proposal to establish a long-term coordinated effort to promote PA opportunities in the media. It is also noteworthy that stakeholders did not prioritize implementation recommendations in the Urban Design or Transport domains, despite the identified implementation gaps in these domains. Future research may explore apparent discrepancies between identified gaps and prioritized implementation recommendations.
Strengths and limitations
This study is the first to utilize the PA-EPI tool to generate insight into PA policy and hence addresses a knowledge gap regarding the assessment of government action on the issue of PA. The PA-EPI is a pioneering approach in the domain of PA policy and is based on internationally developed and validated methods used in the domain of food policy. A second strength of the study is the independence of the stakeholders involved in rating and prioritization. The research process engaged government officials to ensure that the evidence document was comprehensive, while the rating of implementation was conducted by people who, unlike government officials tasked with performing a self-assessment, were not incentivized to provide positive findings. A third strength is that the PA-EPI process promotes capacity building. By engaging with government and nongovernmental stakeholders from across sectors, the PA-EPI process promotes network building around the issue of PA. Further, the evidence document is a valuable resource for policy-makers and nongovernmental PA stakeholders.
This study has some limitations. The workshop component was attended by a small sample of stakeholders (n = 7, representing the Education, Sport, Community and Health sectors). Attendance at the workshop may have been affected by scheduling conflicts, and the legacy of the COVID-19 pandemic or infection rates at that time may have affected the willingness of stakeholders to participate in an in-person workshop. The small sample creates the possibility that a particular viewpoint is overrepresented in the output of this exercise. The challenge of potential selection bias has been previously reported by Yamaguchi and colleagues [46], who used the same eight-step process in completing the Food-EPI in Japan. Researchers need to consider, in the early stages of the process, how to ensure that the stakeholders involved in the later stages represent a variety of perspectives with differing domains of expertise. A second limitation is that the nongovernment stakeholders involved in the prioritization exercise (n = 13) may have been presented with too many implementation recommendations. Further, the implementation recommendations were not evenly distributed across the domains, with many recommendations pertaining to the Education domain, which in turn led to a focus on one part of the life course. The number of recommendations presented may have biased the results of the prioritization exercise to favour actions which target children and younger demographics. While the number of recommendations provided to nongovernment stakeholders was reduced as part of the workshop, this reduction process should be made more rigorous to avoid such concerns. Researchers should consider methods for limiting the number of recommendations presented for prioritization, both in total and per domain. A third limitation is the availability of information on the best practice exemplars used for comparison in the evidence document. Early studies utilizing the Food-EPI tool noted that policies put forward as BPEs were often not evaluated for real-world impact and hence were not ideal 'gold standards' [47]. A benefit of conducting further assessments utilizing the PA-EPI is that it will provide concrete examples of good practice for review and replication by other countries to address implementation gaps.
Recommendations for future studies
A study of the relative contributions of the GPSs and policy subdomains is needed to develop a weighting system for the PA-EPI. The weighting system would assign a relative importance to each of the GPSs for creating healthy PA environments and allow the calculation of a single PA-EPI implementation score at step six of the process. Such a score would facilitate cross-comparison of national PA-EPI implementation ratings and advance the use of the PA-EPI as a PA policy benchmarking tool. Though the ratings provided by the expert panel in this study suggest that there is substantial scope to improve the implementation status of PA policy in Ireland, future studies can confirm whether the Republic of Ireland is a pioneer on this issue. The benchmarking feature of the PA-EPI tool addresses a noted gap in the PA policy research literature [48]. Scoping reviews have demonstrated that PA policy research is overwhelmingly conducted in a few high-income countries [26,49], indicating that the field of PA policy research needs to diversify. Further, inactivity is increasing in developing countries as the dynamics that drive inactivity in developed countries emerge or are adopted [50]. Therefore, testing the PA-EPI process in low- and middle-income countries should be a priority for future research.
Conclusion
This study is the first to undertake a process of PA policy assessment using the PA-EPI tool. The study had two aims: (i) to assess PA policy implementation in the Republic of Ireland and (ii) to prioritize implementation actions for the future. Regarding the former, the extent of implementation was assessed for each of the 45 indicators of the PA-EPI, and the results of these assessments suggest that PA policy in the domains of Transport, Urban Design and Healthcare has a low level of implementation in Ireland. By contrast, the domains of Mass Media and Monitoring and Intelligence were perceived by nongovernment PA stakeholders to be better implemented in Ireland. Regarding the latter, priority actions were suggested by prioritization workshop attendees, and a short list of recommendations, targeting different domains of the PA-EPI, is highlighted in this article. This study contributes to the understanding of why public policy may fail to achieve the environment necessary for sustained improvements in population PA. It also provides a roadmap for improved policy implementation in Ireland. The utilization of nongovernment stakeholders has the potential to increase civil society's input to the PA policy agenda.
|
2023-06-26T13:51:05.052Z
|
2023-06-26T00:00:00.000
|
{
"year": 2023,
"sha1": "25c0c42ea835292df36faacdac73f1db48f163de",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "25c0c42ea835292df36faacdac73f1db48f163de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
1017057
|
pes2o/s2orc
|
v3-fos-license
|
High Expression of Protein Tyrosine Kinase 7 Significantly Associates with Invasiveness and Poor Prognosis in Intrahepatic Cholangiocarcinoma
Background The incidence, prevalence, and mortality of intrahepatic cholangiocarcinoma (ICC) are increasing worldwide. Protein tyrosine kinase-7 (PTK7) is upregulated in many common human cancers. However, its expression in ICC has not been studied. The present study aimed to explore the underlying mechanism of PTK7 in ICC. Materials and Methods The role of PTK7 was studied in vitro by suppressing PTK7 expression in ICC cell lines. The in vivo effect of PTK7 was evaluated using a nude mouse model inoculated with a human ICC cell line. We also examined the role of PTK7 in human ICC samples. Results Cells with high PTK7 expression exhibited higher proliferation, DNA synthesis, invasion, and migration abilities than did cells with low PTK7 expression. The knockdown of PTK7 with small interfering RNA (siRNA) in high PTK7 expressing cells resulted in impairment of invasion, migration, and DNA synthesis through the regulation of several cell-cycle-related proteins. It also induced cell apoptosis and decreased phospho-RhoA expression. In a xenograft nude mouse model, PTK7 siRNA resulted in a reduction of the tumor size, compared with scrambled siRNA injection. PTK7 expression was higher in human ICC than in the normal bile duct. Patients with low expression of PTK7 had a longer disease-free survival and overall survival than those with high expression. Conclusions PTK7 expression plays an important role in the invasiveness of ICC cells and leads to a poor prognosis in ICC patients. Thus, PTK7 can be used as a prognostic indicator, and the inhibition of PTK7 expression could be a new therapeutic target for ICC.
Introduction
Intrahepatic Cholangiocarcinoma (ICC) may arise through the malignant transformation of cholangiocytes in any part of the biliary tree. Biliary epithelial cells undergo genetic and epigenetic alterations in various regulatory genes, which accumulate and lead to the activation of oncogenes and the dysregulation of tumor suppressor genes, generating irreversible changes in the physiology of the cholangiocytes [1].
The high mortality and poor outcome of this disease are attributed to the lack of available tools for its early diagnosis and treatment. Surgery represents the only curative treatment for ICC; however, surgery is only feasible at an early stage and is characterized by a high rate of recurrence [2]. Recent therapeutic options include brachytherapy and photodynamic therapy, although their effects have not yet been established.
Protein tyrosine kinase-7 (PTK7) is a relatively new and less-studied member of the receptor tyrosine kinase superfamily. It was originally identified as a gene expressed in a colon cancer-derived cell line, but it is not expressed in human adult colon tissues [3]. PTK7 expression is upregulated in many common human cancers, including colon cancer, lung cancer, gastric cancer, and acute myeloid leukemia [3][4][5][6][7][8].
Recently, PTK7 was identified as a novel regulator of non-canonical Wnt or planar cell polarity (PCP) signaling [9]. PCP signaling pathways control cellular polarity, cell mobility, and signaling, resulting in a modification of the cytoskeleton [10].
Previously, we found that PTK7 was associated with a poor prognosis in patients with intrahepatic cholangiocarcinoma using a cDNA-mediated annealing, selection, extension and ligation chip study (unpublished data).
The aim of this study was to explore the role of PTK7 in ICC. To our knowledge, this is the first insight into the role of PTK7 in ICC and the underlying mechanism of its involvement in ICC both in vitro and in vivo.
Ethics Statement
The paraffin-embedded tissues surgically resected from the patients were used in a retrospective study after their use for diagnosis. The tissue samples were released from the diagnostic archive and did not identify the patients. The need for written informed consent was waived by the Institutional Review Board of Seoul National University Hospital. The present study was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975, after approval by the Institutional Review Board of Seoul National University Hospital (H-1011-046-339).
The animal experiment was approved by the Seoul National University Hospital Institutional Animal Care and Use Committee (SNUH-IACUC No: 13-0051). All surgery was performed under anesthesia, and all efforts were made to minimize suffering.
Cell lines and cell culture
The cholangiocarcinoma cell lines, JCK, SCK, Choi-CK, and Cho-CK were kindly provided by the Division of Gastroenterology and Hepatology of Chonbuk National University Hospital (Jeonju, Republic of Korea) [11]. HuCCT1 and OZ cell lines were purchased from the Japanese Collection of Research Bioresources Cell Bank. The HepG2 cell line was used as a positive control for PTK7 expression. The HuCCT1, OZ, and HepG2 cells were maintained in RPMI 1640 medium, William's E medium, and Eagle's minimal essential medium, respectively. The JCK, SCK, Choi-CK, and Cho-CK cell lines were all maintained in Dulbecco's Modified Eagle Medium. Each medium was supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin.
Small interfering RNA (siRNA) and transfection
Transfection of three PTK7-specific siRNA and scrambled negative siRNA (Integrated DNA Technologies, Iowa, USA) at a final concentration of 60 nM was performed using G-fectin transfection reagent (Genolution Pharmaceuticals, Seoul, Korea) ( Table 1). Cells were maintained under normal culture conditions and were harvested for analysis at 72 hours after transfection.
Cell proliferation and DNA synthesis ability analysis
Cells were seeded into a 96-well plate for 18 hours. Next, cell viability was evaluated using an MTT cell proliferation kit (Roche, Mannheim, Germany) and DNA synthesis was detected using a bromodeoxyuridine (BrdU) colorimetric ELISA kit (Roche, Mannheim, Germany).
Western Blotting
Protein samples were boiled, loaded onto sodium dodecyl sulfate gels and electrotransferred onto polyvinylidene difluoride membranes (Millipore Corporation, MA, USA) [12]. The membranes were incubated with appropriate primary and HRP-conjugated secondary antibodies (Zymed, CA, USA) for the required times.
Reverse-Transcription Polymerase Chain Reaction
Reverse-Transcription Polymerase Chain Reaction (RT-PCR) of PTK7 and GAPDH was performed as described [13]. The PCR primer and probe sets are listed in Table 1.
Annexin V staining and FACS analysis
After incubation with or without siRNA-PTK7 for 72 hours, an apoptosis kit (Medical&Biological Laboratories Co., Nagoya, Japan) was used to assay for siRNA-induced apoptosis. A volume of 5 µL of Annexin V-FITC and 2.5 µL of propidium iodide were added before analysis using a BD FACSCalibur flow cytometer (BD Biosciences, MA, USA).
Invasion assay
The invasiveness of cells was assayed using transwell membranes coated with 500 ng/mL of Matrigel (BD Biosciences, San Jose, CA, USA) [14]. Penetrated cells were counted in 5 microscopic fields by using a digital camera with a 400× objective lens (Olympus, Tokyo, Japan).
Wound healing assay
Cells were seeded into 96-well plates. After 2-4 days, each culture plate reached a confluent cell monolayer. A small area was disrupted by scratching a line through the cell layer with a 200-µL pipette tip. Photographs were taken with a digital camera at 0, 12, 24, 36, and 48 hours after scratching.
Xenograft nude mouse model
Ten male BALB/C nude mice, 7 weeks old and 20-25 g in weight, were inoculated subcutaneously with HuCCT1 cells (1×10⁷) into the dorsal area. After 5 weeks, when the tumor volume reached 180 mm³ on average, the nude mice were randomly divided into 2 groups (five mice in each group). PTK7 siRNA (40 µmol/L) dissolved in 200 µL of AteloGene (KOKEN, Tokyo, Japan) was administered directly into the tumour every 3 days, for a total of three injections. A scrambled siRNA was used as the control. Tumors were removed 10 days after the final siRNA treatment and were fixed in formalin and embedded in paraffin blocks for further study. Tumor volume was assessed with the formula:
Selection of patients and tissue specimens
Samples of ICCs were collected from 194 patients who had undergone surgical resection at Seoul National University Hospital in Seoul, from 1992 to 2010. We divided the patients into a test set (all 78 cases from 1992-2001) and a validation set (the 116 cases from 2002-2010). The hematoxylin and eosin (H&E) stained pathological slides and clinicopathological medical records of all the cases were reviewed. Follow-up periods ranged from 1 to 196 months (median follow-up duration: 30.0 months). The patients' age at the time of diagnosis ranged from 37 to 80 years (median age: 61.5 years). Tumor size ranged from 0.3 to 26.0 cm (mean tumor size ± SD: 5.53 ± 0.25 cm). Disease-free survival (DFS) was defined as the time to local or distant progression. Overall survival (OS) was defined as the time to ICC-related death. All 194 patients had no evidence of postoperative residual malignancy. Fifty-one of the patients examined had received adjuvant chemotherapy. With regard to the underlying liver disease, 15 patients had chronic hepatitis, 12 of whom had hepatitis B virus infection and 3 had hepatitis C virus infection; 3 patients had Clonorchis sinensis infection; and 3 patients had hepatolithiasis. Tumor differentiation was categorized based on the grading system described by the World Health Organization classification [15]. To use as controls, normal bile duct tissues were collected from patients with hepatolithiasis who had undergone surgical resection.
Construction of tissue microarray
Two representative tumor areas for each case were marked on the H&E-stained sections, and core tissue specimens (2 mm in diameter) were collected from individual paraffin-embedded tissues and rearranged in new tissue array blocks by using a trephine apparatus (SuperBioChips Laboratories, Seoul, Korea).
Immunohistochemical staining and evaluation of clinical samples
The β-catenin staining was considered positive if both a loss of membrane staining and an aberrant expression of cytoplasmic and/or nuclear staining were detected. PTK7, Ki67, and TUNEL expression was evaluated as defined in a previous study [16] using Aperio ImageScope (Aperio Technologies, CA, USA).
The positivity percentage of PTK7 was calculated using the average of positive intensities divided by the total number of stained pixels. All cases were scored using a histological scoring (HSCORE) method. Specimens with an HSCORE > 60 were regarded as PTK7 positive, whereas those with an HSCORE ≤ 60 were regarded as PTK7 negative [17].
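The HSCORE formula itself is not reproduced above; a commonly used definition weights each staining-intensity level (0-3) by the percentage of cells at that level, giving a score between 0 and 300. A minimal sketch under that assumption, applying the 60-point cut-off quoted above (the specimen percentages below are hypothetical):

```python
def hscore(percent_by_intensity):
    """percent_by_intensity: {intensity level 0-3: percentage of cells at that level}.
    Returns a weighted score in the range 0-300."""
    return sum(level * pct for level, pct in percent_by_intensity.items())

def ptk7_status(score, cutoff=60):
    return "PTK7 positive" if score > cutoff else "PTK7 negative"

# Hypothetical specimen: 50% unstained, 20% weak, 20% moderate, 10% strong staining
specimen = {0: 50, 1: 20, 2: 20, 3: 10}
score = hscore(specimen)                 # 0*50 + 1*20 + 2*20 + 3*10 = 90
print(score, ptk7_status(score))         # 90 PTK7 positive
```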
Statistical analysis
In vitro data and clinical results were compared using Student's t-test. The significance of in vivo data was assessed by the Mann-Whitney test. DFS and OS were calculated by the Kaplan-Meier method and compared with the log-rank test. The Cox proportional-hazard regression model was used to explore the effects of the clinicopathologic variables and PTK7 expression on survival. The results were considered statistically significant when the P values were ≤ 0.05. All tests were performed using SPSS 17.0 software (SPSS, Chicago, IL, USA).
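The analyses above were run in SPSS; purely as an illustration, the same Kaplan-Meier/log-rank/Cox workflow could be reproduced in Python with the lifelines package. The data frame, column names and values below are hypothetical and not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up in months, death event flag, covariates
df = pd.DataFrame({
    "months":   [12, 30, 45, 8, 60, 24, 18, 50],
    "death":    [1, 0, 1, 1, 0, 1, 0, 1],
    "ptk7_pos": [1, 0, 1, 1, 0, 1, 1, 0],
    "tumor_cm": [6.0, 3.2, 8.1, 5.5, 2.8, 7.0, 4.4, 6.5],
})

# Kaplan-Meier estimate per group and log-rank comparison
pos, neg = df[df.ptk7_pos == 1], df[df.ptk7_pos == 0]
kmf = KaplanMeierFitter().fit(pos["months"], pos["death"], label="PTK7 positive")
print(kmf.median_survival_time_)
print(logrank_test(pos["months"], neg["months"],
                   event_observed_A=pos["death"],
                   event_observed_B=neg["death"]).p_value)

# Multivariate Cox proportional-hazards model; exp(coef) gives hazard ratios
cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```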
Different expression of PTK7 in six cholangiocarcinoma cell lines
Firstly, the six human cholangiocarcinoma cell lines (HuCCT1, SCK, JCK, Cho-CK, Choi-CK, and OZ) were tested for PTK7 expression ( Figure 1A). We excluded the Choi-CK cell line because it was a hilar-type cholangiocarcinoma cell line. During cell culture, the SCK and Cho-CK cell lines gradually changed their original morphologies, so these two cell lines were also excluded from further experiments.
Proliferation, DNA synthesis, invasion, and migration abilities are higher in HuCCT1 and JCK cells than in OZ cells
Considering that HuCCT1 and JCK cells show higher expression levels of PTK7 than OZ cells, we hypothesized that their different behavior reflected their different PTK7 expression levels. We found that the HuCCT1 and JCK cells proliferated faster than OZ cells ( Figure 1B, P < 0.01). The DNA synthesis rate was also higher in HuCCT1 and JCK cells ( Figure 1C, P < 0.01). Additionally, the invasion and migration abilities of HuCCT1 and JCK cells were stronger than those of OZ cells ( Figure 1D and 1E, P < 0.01).
PTK7-specific siRNA successfully knocks down the PTK7 expression in HuCCT1 and JCK cells
Following transfection with the PTK7-specific siRNAs, the expression of PTK7 mRNA (Figure 2A and 2C) and protein ( Figure 2B and 2D) was successfully suppressed in both cell lines.
On the contrary, PTK7 siRNA did not change CTF1 or CTF2 expression levels in HuCCT1 cells ( Figure 2B).
PTK7-specific siRNA treatment decreases the invasion and migration abilities and impairs DNA synthesis ability in HuCCT1 and JCK cells
Compared with the scrambled siRNA-treated group, the knockdown of PTK7 decreased cell mobility into the wound ( Figure 3A). The cell population that migrated through the Matrigel-coated transwell was also lower than that in the scrambled siRNA group ( Figure 3C). The same results were seen in JCK cells ( Figure 3B and 3D). Since the HuCCT1 and JCK cells presented the same characteristics, only the HuCCT1 cells were used for further mechanism studies. We examined several proteins related to the cell cycle process. Cell-cycle-related proteins such as cyclin A2 and cyclin E were not affected. However, Cdk2, Cdk4, Cdk6, and cyclin D1 levels were slightly decreased, whereas p16, p21, and p27 levels were increased by PTK7 silencing ( Figure 4A).
PTK7 silencing induces cell apoptosis in HuCCT1 cells
Cell apoptosis was induced by PTK7-specific siRNA transfection ( Figure 4B). In addition, BAX and the tumour suppressor genes p53 and RB were increased, accompanied by a decrease of BCL-2. Moreover, the apoptotic cascade was activated by the PTK7-specific siRNA, with an increase in the levels of cleaved caspase-3 and caspase-9. However, caspase-8 and Fas-associated death domain (FADD) were not affected ( Figure 4C).
Effect of PTK7 silencing on the planar cell polarity signaling pathway in HuCCT1 cells
We considered that the PTK7-dependent abilities of invasion and migration are associated with the PCP pathway, which activates phospho-RhoA and JNK and leads to cytoskeleton reorganization of the cell membrane. When cells were transfected with PTK7-specific siRNA, JNK phosphorylation increased together with a decrease in the phospho-RhoA level ( Figure 5A).
Besides the non-canonical Wnt (PCP) pathway, there is also the canonical Wnt/β-catenin pathway. The data showed that PTK7 does not influence the canonical Wnt/β-catenin pathway ( Figure 5B, lower panel), which was further confirmed by the immunohistochemical staining of β-catenin, showing no nuclear translocation ( Figure 5B, upper panel).
PTK7 silencing reduced tumor formation in xenograft nude mouse model
To examine the possible activity of PTK7-specific siRNA on tumorigenesis in vivo, a xenograft nude mouse model was used. Mean tumor volumes in PTK7-specific siRNA-treated mice were reduced in comparison with those of the control mice ( Figure 6A, left panel). Silencing of PTK7 dramatically suppressed tumor formation in the xenografts of the nude mice ( Figure 6A, right panel, P < 0.01). Figure 6B shows that PTK7 was successfully silenced by the PTK7-specific siRNA. Tumor sections of the xenografts were analyzed by hematoxylin and eosin staining followed by TUNEL and Ki67 staining ( Figure 6C). The group treated with the PTK7 siRNA tended to have more TUNEL-positive and fewer Ki67-positive cells than the scrambled siRNA group ( Figure 6D).
PTK7 was more strongly expressed in human ICC than in normal bile duct
According to the results of TMA-based immunohistochemistry, PTK7 was mainly expressed in the cytoplasm. The positive rates of PTK7 in the cytoplasm were 75.9% (88/116) and 6.8% (3/44) in the ICC and normal bile duct, respectively ( Figure 7A, P < 0.01).
PTK7 protein expression for predicting patient outcome
Kaplan-Meier univariate survival analysis revealed that PTK7 overexpression was associated with a poor disease-free survival (DFS) ( Figure 7B, P = 0.008) and overall survival (OS) ( Figure 7C, P = 0.046). Multivariate Cox proportional hazards regression analysis revealed that patients with a high PTK7 expression had a 2.3-fold greater risk of disease recurrence and a 1.8-fold greater risk of disease-related death (P = 0.015 and 0.036, respectively, Table 2). In this model, tumor size and angiolymphatic invasion were also identified as potential predictors of DFS and OS. Similar results were confirmed in the validation set ( Figure 7D and 7E; Table 2). PTK7 expression showed no significant association with the clinicopathological variables (Table S1).
Discussion
Our present study found that a high PTK7 expression contributed to the proliferation, invasion, and migration abilities of ICC cells, through the PCP signaling pathway. PTK7 was highly expressed in the tissue samples of human ICC but not in normal bile duct samples.
Cellular proliferation can be suppressed either by interruption of the cell cycle or by cell apoptosis. Firstly, we investigated the cell-cycle-related proteins. A previous study showed that silencing of PTK7 can inhibit cell proliferation and induce apoptosis in colon cancer cells [18]. In this study, we first demonstrated that silencing of PTK7 slightly decreased Cdk2, Cdk4, Cdk6, and cyclin D1 and increased p16, p21, and p27 expression.
Two distinct but convergent pathways, the extrinsic and the intrinsic, can initiate apoptosis. Our results showed that silencing of PTK7 did not have an effect on FADD and cleaved caspase-8, suggesting no effect on the extrinsic apoptotic pathway. In contrast, pro-apoptotic BAX was increased by PTK7 silencing, followed by a decrease of anti-apoptotic BCL-2. The apoptotic cascade was also activated by PTK7-specific siRNA, with an increase of cleaved caspase-3 and caspase-9. These results demonstrated that PTK7 silencing leads to apoptosis in HuCCT1 cells via the intrinsic mitochondrial pathway. Our data showed that PTK7-specific siRNA increases p21 levels. In addition to mediating growth arrest, p21, which was discovered as a senescent cell-derived inhibitor, can mediate cellular senescence. Thus, p21 here serves not only as a cell proliferation inhibitor but also as an apoptosis initiator. In addition, the tumor suppressor genes p53 and RB were also increased by the knockdown of PTK7. This is the first time that cell-cycle-related proteins and tumor suppressor genes have been studied in a PTK7-dependent manner in ICC cell lines. We also found that PTK7-specific siRNA significantly decreased the invasion and migration abilities of HuCCT1 cells. The majority of deaths from carcinoma are caused by secondary growths that result from tumor invasion and metastasis. Recently, PTK7 was identified as a novel regulator of the non-canonical Wnt or PCP signaling pathway [9]. Since embryonic processes, which are pivotally related to PCP signaling, share many similarities with cancer development, it is worthwhile to investigate its role in ICC further.
The non-canonical PCP signaling is activated by ligands such as Wnt5a or Wnt11. Signaling is transduced by the Frizzled receptor and the adaptor protein Dishevelled and activates the RhoA and Rac GTPases and their respective targets, Rho-associated kinase and JNK. The functional assays of Peradziryi et al. [19] showed that PTK7 activates the non-canonical Wnt signaling and inhibits the canonical Wnt signaling. We found that β-catenin was localized in the cell membrane, regardless of whether a PTK7-specific siRNA was present, which implies that PTK7 is not involved in the canonical Wnt signaling, thereby confirming Peradziryi's finding [19].
There has been debate about the role of JNK in the Wnt/PCP signaling pathway. Some researchers have proposed that JNK serves as a downstream event of RhoA and is involved in the cytoskeleton rearrangement of Wnt/PCP [20,21]. However, recent studies have suggested that JNK activation in Wnt/PCP has pro-apoptotic action rather than changing the cytoskeleton structure [22]. In the present study, phospho-JNK expression was increased by PTK7 silencing. In addition to the result that PTK7-specific siRNA can induce cell apoptosis, we hypothesized that the action may be partially related to JNK activation, since the role of JNK in apoptosis is both cell-type- and stimulus-dependent. In addition, the role of JNK in apoptosis depends on the activity of other cellular signaling pathways [23].
RhoA was initially considered to be involved in the regulation of the actin cytoskeleton [24]. The RhoA/ROCK pathway regulates numerous endothelial cellular functions such as migration and adhesion [25]. We found that PTK7 silencing impaired the migration and invasion abilities with a downregulation of activated phospho-RhoA, which is in agreement with the results from other studies.
Previously, Na et al. reported that a soluble 100-kDa fragment of PTK7 inhibits the tube formation, migration, and invasion of endothelial cells and angiogenesis [13]. However, the shedding of PTK7 is cell-type-dependent and has not been observed in cholangiocytes so far. The PTK7 fragments would diffuse out into the extracellular space without a significant concentration in cancer tissues. In contrast, PTK7-CTF2 is able to be effectively concentrated in the nucleus, and thus activate signaling pathways that promote tumorigenesis and metastasis. In this study, we found that PTK7-specific siRNA did not affect the intracellular cleavage of PTK7-CTF2, but the full length of PTK7 was knocked down. As a result, the migration and invasion abilities of the ICC cells were inhibited, providing evidence that the intact PTK7 molecule is oncogenic in the HuCCT1 cell line. The animal experiment confirmed the role of PTK7 in ICC tumorigenesis.
Finally, we assessed the PTK7 expression in surgically resected ICC specimens. As expected, PTK7 was highly expressed in ICC but not in normal bile duct tissue. The overexpression of PTK7 was associated with poor DFS and poor OS. This is the first report of the functional role of PTK7 in ICC. Our results show that high PTK7 expression may play an important role in ICC cell invasion and lead to a poor prognosis. Thus, PTK7 can be used as a prognostic indicator and the inhibition of PTK7 expression could be a new therapeutic target for ICC.
|
2016-05-12T22:15:10.714Z
|
2014-02-28T00:00:00.000
|
{
"year": 2014,
"sha1": "aea34e72055ef00939d861f725f2b40efff51180",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0090247",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aea34e72055ef00939d861f725f2b40efff51180",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
4571826
|
pes2o/s2orc
|
v3-fos-license
|
Contralateral breast dose from partial breast brachytherapy
The purpose of this study was to determine the dose to the contralateral breast during accelerated partial breast irradiation (APBI) and to compare it to external beam‐published values. Thermoluminescent dosimeter (TLD) packets were used to measure the dose to the most medial aspect of the contralateral breast during APBI simulation, daily quality assurance (QA), and treatment. All patients in this study were treated with a single‐entry, multicatheter device for 10 fractions to a total dose of 34 Gy. A mark was placed on the patient's skin on the medial aspect of the opposite breast. Three TLD packets were taped to this mark during the pretreatment simulation. Simulations consisted of an AP and Lateral scout and a limited axial scan encompassing the lumpectomy cavity (miniscan), if rotation was a concern. After the simulation the TLD packets were removed and the patients were moved to the high‐dose‐rate (HDR) vault where three new TLD packets were taped onto the patients at the skin mark. Treatment was administered with a Nucletron HDR afterloader using Iridium‐192 as the treatment source. Post‐treatment, TLDs were read (along with the simulation and QA TLD and a set of standards exposed to a known dose of 6 MV photons). Measurements indicate an average total dose to the contralateral breast of 70 cGy for outer quadrant implants and 181 cGy for inner quadrant implants. Compared to external beam breast tangents, these results point to less dose being delivered to the contralateral breast when using APBI. PACS number: 87.55.D‐
I. INTRODUCTION
In 2011 over 230,000 women in the United States were newly diagnosed with invasive breast cancer. (1) Many of these women will undergo lumpectomy, followed by radiation therapy. While the necessity of radiotherapy has been debated in the past, multiple studies have demonstrated a significant reduction of in-breast tumor recurrence with adjuvant whole-breast radiation. (2,3) For those patients who opt for radiation therapy, a typical treatment regimen involves daily treatment five days per week for five to six weeks. For patients who have to travel some distance for treatment, or for those with full-time commitments, this can be an onerous task. One possible solution to this hardship is the use of accelerated partial breast irradiation (APBI). APBI involves the treatment of the at-risk tissue surrounding the lumpectomy cavity, rather than the whole breast, using a 10-fraction, twice-daily treatment protocol. (4) This can be delivered either with external-beam radiation therapy or high-dose-rate (HDR) brachytherapy. This allows the patient to complete her course of treatment in a single week, as opposed to the five or six weeks required by whole-breast external beam therapy. In 2002, the FDA approved the use of the MammoSite single-entry balloon (Hologic, Bedford, MA), which greatly increased the interest in brachytherapy-based APBI. The original MammoSite protocol used a single catheter with a single central dwell point to treat a 1 cm rind of tissue surrounding the lumpectomy site. (5) A phase III clinical trial was initiated to study the efficacy of APBI as compared to conventional whole-breast irradiation. (6) As brachytherapy APBI has evolved, more types of single-entry implant devices have been introduced, utilizing multiple catheters. These new devices use the treatment planning computer to optimize dwell times and positions to deliver the best dose coverage to the target volume while minimizing the dose to adjacent critical structures, such as the skin and the chest wall. One of the concerns over this form of treatment has been the dose to the contralateral breast, both from the treatment and from the associated CT scans needed for the twice-daily quality assurance (QA) of the device placement. Radiation treatments for breast cancer have been shown to increase the incidence of a second breast cancer in the contralateral breast. (7) In this study, the dose to the contralateral breast for patients implanted with the Strut Adjustable Volume Implant (SAVI) device (Cianna Medical, Aliso Viejo, CA) was investigated. This measured dose was then compared to published values obtained for whole-breast external beam contralateral breast dose. The relative contributions from the treatment delivery and the CT implant verification were measured and discussed. Measurements in this study were achieved using thermoluminescent dosimetry packets placed on the patient, as opposed to phantom measurements. This was done to retain the variability of patient size and geometry, which was an important part of this study. This study will provide clinicians with the ability to determine the contralateral breast dose for their given brachytherapy APBI treatment and verification protocol.
II. MATERIALS AND METHODS
In this study, the dose to the contralateral breast from APBI brachytherapy for 12 randomly selected patients treated at the University of Texas MD Anderson Cancer Center was measured. All patients were treated to a total dose of 34 Gy in 10 fractions over five treatment days (BID), using a SAVI single-entry multicatheter applicator. Additionally, all patients were enrolled in the institution's clinical protocol studying the acute and late toxicity of APBI, which had been approved by the institutional review board. The patient flow for our APBI patients was first to visit our department for a cavity evaluation, then to proceed to patient simulation and planning, followed by the HDR treatments, which include pretreatment verification scans. For the cavity evaluation and all subsequent CT scans, the patient was placed on a Philips Brilliance Big Bore CT scanner (Philips Healthcare, Andover, MA) for an AP and lateral scout view (120 kV, 50 mA). This was followed by a limited axial scan (miniscan) (120 kV, 250 mAs) encompassing the lumpectomy cavity.
A planning CT in the radiation oncology department was performed within 48 hrs of device placement by the surgeon in the operating room. The patient was placed supine on the CT couch and a limited CT scan was obtained through the device. The miniscan was evaluated by the radiation oncologist to confirm conformance of the struts of the SAVI to the edges of the lumpectomy cavity. For our criteria, the ratio of the volume of seroma or air adjacent to the cavity (which could push target tissue out of the treatment volume) to the target volume to be treated (the 1 cm rind surrounding the cavity) must have been less than 10%, which is in conformance with the NSABP protocol. (6) Once device conformance was confirmed, a custom-formed cradle was manufactured to reproduce daily patient positioning. AP and lateral scout views, followed by a full planning CT scan, were acquired, including the whole breast with 2 cm margins both inferiorly and superiorly, using 1.5 mm slices. The scout views were reviewed to determine the maximum strut width in both the AP and lateral view.
These widths were compared with subsequent pretreatment verification scouts to ensure the SAVI was expanded to the same extent with each treatment (within 2 mm). Patients were then given skin marks corresponding to SAVI strut location. These marks allowed for a daily check of applicator rotation. The CT images were transferred to a Nucletron Oncentra (Elekta Corp., Stockholm, Sweden) planning system. The patient was planned using methodology outlined in the literature. (8,9,10) The patient was now ready to begin treatment.
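As a rough sketch of the two numerical checks described above, namely the 10% conformance criterion and the 2 mm expansion tolerance; the function names and example values are illustrative assumptions, not part of the clinical protocol.

```python
def conformance_ok(air_seroma_volume_cc, target_volume_cc, max_fraction=0.10):
    """NSABP-style conformance check: air/seroma adjacent to the cavity must be
    less than 10% of the target volume (the 1 cm rind around the cavity)."""
    return air_seroma_volume_cc / target_volume_cc < max_fraction

def expansion_ok(planning_width_mm, verification_width_mm, tolerance_mm=2.0):
    """Daily QA check: strut width on the verification scout must match the
    planning scout to within 2 mm."""
    return abs(planning_width_mm - verification_width_mm) <= tolerance_mm

# Hypothetical values
print(conformance_ok(air_seroma_volume_cc=3.5, target_volume_cc=60.0))   # True
print(expansion_ok(planning_width_mm=42.0, verification_width_mm=43.5))  # True
```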
Prior to each treatment, the patient was brought into the CT simulation suite to perform device positioning quality assurance. A typical treatment verification would commence with the patient being placed on the CT simulator couch in their custom cradle and aligned with skin laser marks. Subsequently, AP and lateral scout views were obtained. These scans were reviewed by the radiation oncologist and the physicist to check for rotation, correct expansion of the SAVI, and correct placement. If there were any concerns, a miniscan would be obtained through the device to confirm the device was in the same position and conformed to the lumpectomy cavity as originally planned. Once CT QA was completed, the patient was moved from the simulator to the HDR vault. The treatment device used was a Nucletron V3 HDR afterloader using Iridium-192.
Contralateral breast dose measurements for this study were made with packets of TLD-100 (Quantaflux, Dayton, OH) containing approximately 30 mg of LiF powder and measuring approximately 1 cm by 1 cm by 0.1 cm. Patients in this study had the contralateral breast dose measured for 2 or 3 of the 10 fractions treated, with the CT QA verification and the radiation HDR treatment measured separately. The TLD were placed at a measurement point on the patient's skin at the most medial aspect of the contralateral breast tissue (3 o'clock right breast, 9 o'clock left breast) as determined by the radiation oncologist. An ink mark was placed at this point to ensure continuity of subsequent measurements. Three TLD packets were taped to the patient's skin for each measurement clustered around this point. For CT QA TLD measurements, the packets would be placed on the patient for the AP and lateral scout scans and exposed during these scans. If it was decided to include a miniscan of the lumpectomy cavity at that time, the TLD packets were left in place and the miniscan was performed. As a result, the dose to the contralateral breast from the orthogonal scouts was included in any miniscan CT dose that was measured. After the QA was finished, the exposed TLD packets were set aside and the patient was escorted to the HDR treatment vault. The patient was positioned on the treatment table and three new TLD packets were taped to the measurement point. After treatment these packets would be removed from the patient and set aside for reading.
The last step of the measurement process was to expose three TLD to a known dose to be used as standards in the reading process. This was done on the same day as the patient TLD exposure. For this study, 100 cGy from a TG-51-calibrated Varian 6 MV X-ray accelerator (Varian Medical Systems Inc., Palo Alto, CA) was used as the standard. The TLD were set on a large plastic water block, covered with 1.5 cm of bolus at an SSD of 100 cm (to the top of the bolus), with a 10 × 10 cm field size, and exposed to 100 MU.
The TLD were read by the Department of Radiation Physics at the University of Texas M.D. Anderson Cancer Center. TLD batch characteristics (e.g., linearity, fading) were measured with a Cobalt-60 source that was traceable to the National Institute of Standards and Technology. The TLD used have an uncertainty of 5% in the range of 150 to 300 cGy. (11) This uncertainty may increase with lower doses. The output of the reading process was corrected for energy response using energy correction factors. For the Ir-192-exposed TLD, there is reasonably good agreement in the literature. (5,6,7) An energy correction factor of 0.962 was used based upon the work of Davis et al. (12) Energy correction factors for CT energies, however, are more difficult to come by. A value of 0.952 was taken from Davis using a mean energy of 56 keV.
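As a rough illustration of how a raw TLD reading could be converted to dose against the 6 MV standards and the energy correction factors quoted above; the reading values, the simple linear-response assumption, and the multiplicative application of the correction factor are assumptions for the sketch (the published procedure also applies batch linearity and fading corrections).

```python
def tld_dose_cGy(reading, standard_reading, standard_dose_cGy=100.0,
                 energy_correction=1.0):
    """Scale a raw TLD reading against standards exposed to a known 6 MV dose,
    then apply a relative energy correction factor for the measured beam."""
    return (reading / standard_reading) * standard_dose_cGy * energy_correction

standard_reading = 250.0   # hypothetical mean reading of the 100 cGy, 6 MV standards
hdr_reading = 26.0         # hypothetical TLD reading from one HDR treatment fraction
scan_reading = 0.65        # hypothetical TLD reading from one CT miniscan

print(tld_dose_cGy(hdr_reading, standard_reading, energy_correction=0.962))   # Ir-192
print(tld_dose_cGy(scan_reading, standard_reading, energy_correction=0.952))  # ~56 keV CT
```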
III. RESULTS
During the course of a patient's treatment, she typically underwent an average of four axial CT scans. The first scan occurred at the cavity evaluation where the radiation oncologist determined which size SAVI device and direction to implant. The second axial scan was an initial miniscan on planning day to determine extent of nonconformance and suitability of device placement to proceed with treatment planning. The third axial scan was the actual planning scan. For this study, the longer planning scan was assumed to contribute the same dose as a miniscan. An additional miniscan was usually performed during the course of treatment to verify correct positioning. Table 1 shows the average dose to the contralateral breast as measured by TLD from the orthogonal scout views, miniscan, and HDR treatment per fraction, along with the standard deviation (SD) and range. Five of the patients in this study had TLD used to measure one or more of their miniscans and ten patients had TLD used to measure one or more of their scout scans (without miniscan).
Using the above data, a total estimated dose to the contralateral breast was calculated. For this study, the average patient was assumed to have been treated for 10 fractions and to have received nine scout scans (dose from one of the scout scans is included in a miniscan fraction) and four miniscans. This treatment regimen results in a dose of 116 ± 73 cGy ((10 × 10.41) + (4 × 2.58) + (9 × 0.2) = 116 cGy) to the most medial point of the contralateral breast for the entire course of treatment, based on TLD measurements. Using the data in this table, it would be easy to estimate the contralateral breast dose for institutions using different APBI treatment regimens (i.e., a different number of scouts or miniscans). Table 2 looks at total dose as a function of implant site. Five of the implants were recorded as inner quadrant, five as outer quadrant, and two were recorded as central implants. As would be expected, the inner quadrant implants result in a higher dose to the contralateral breast than the outer quadrant implants. The high standard deviations in Table 2 may be attributed to differences in the exact implant site, patient geometry, the size of the SAVI implanted, and the depth of the implant.
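A small sketch of the bookkeeping described above, using the per-event averages quoted in the text (10.41 cGy per HDR fraction, 2.58 cGy per miniscan, 0.2 cGy per scout acquisition); the function and default counts are illustrative only, and other centres could substitute their own imaging schedule.

```python
def contralateral_dose_cGy(n_fractions=10, n_miniscans=4, n_scouts=9,
                           dose_per_fraction=10.41, dose_per_miniscan=2.58,
                           dose_per_scout=0.2):
    """Estimate the total contralateral-breast dose for a course of APBI from
    the average per-event doses measured in this study."""
    return (n_fractions * dose_per_fraction
            + n_miniscans * dose_per_miniscan
            + n_scouts * dose_per_scout)

total = contralateral_dose_cGy()
imaging_only = contralateral_dose_cGy(n_fractions=0)
print(f"total ~ {total:.0f} cGy, imaging ~ {imaging_only:.1f} cGy "
      f"({100 * imaging_only / total:.0f}% of total)")
```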
IV. DISCUSSION
The data from this study indicate that APBI brachytherapy with the SAVI device results in low doses to the contralateral breast. The average dose of 116 cGy is a worst-case result, given that the measurements were taken at the most medial point on the contralateral breast. An important outcome of this study is to compare the measured contralateral breast dose from SAVI brachytherapy treatment to measured contralateral breast doses from external beam therapy reported in the literature. On review, there is a large range of reported doses to the contralateral breast for a wide range of clinical whole-breast radiotherapy techniques. Many studies report only the median dose to the contralateral breast, while other studies published data throughout the contralateral breast. As this research seeks to determine the worst-case dose to contralateral breast tissue, data were pulled from the studies listed in Table 3 using the high end of the reported range for inner quadrant measurements. Reported doses in Table 3 were for the entire course of treatment. While a wide range of doses are reported, it should be noted that the lowest dose in Table 3 is substantially above the study average of 116 cGy. Even in the case of inner quadrant implants, the study data still appear favorable compared to all but a few of the results listed in Table 3. If the median dose to the contralateral breast were the determinant dose to be considered, it is likely that the APBI brachytherapy advantage would be more substantial, given the point-source nature of this form of treatment.
An additional concern with APBI brachytherapy has been the need for twice-daily CT imaging for QA purposes and the resulting dose to the contralateral breast. Table 1 shows that minimal dose is delivered to the contralateral breast from these scans. An average number of scout images (9) per course of treatment would contribute approximately 3 cGy to the overall average of 116 cGy total dose. An average number of miniscans (4) would contribute approximately 10 cGy to the total dose. Thus, for the average patient, the twice-daily CT scans contribute about 10% of the total dose to the patient's contralateral breast. It is worth noting that CT doses from external beam planning are not considered in the doses reported in Table 3. A typical external beam breast patient may undergo two to three CT planning scans, depending upon treatment technique (primary planning, boost, breath hold).
Future research in this area could include evaluating a median dose to the contralateral breast from APBI brachytherapy and looking at the contralateral breast dose from other single entry brachytherapy devices. Additionally, measuring the dose to the contralateral breast with evaluation of more variables such as SAVI size, implant distance from midline, and implant depth, could prove useful in guiding clinicians with their treatment decisions. Table 3. External beam dose to contralateral breast (most medial).
V. CONCLUSIONS
The data from this study indicate that APBI brachytherapy with the SAVI interstitial applicator results in a lower dose to the contralateral breast compared to external beam whole-breast irradiation. Additionally, the twice-daily CT scans performed for quality assurance account for approximately 10% of the total contralateral breast dose. Both of these findings support the use of single-entry multicatheter device-based partial breast irradiation as an alternative to external beam whole-breast irradiation for patients who qualify.
Physical activity during the SARS‐CoV‐2 pandemic is linked to better mood and emotion
Abstract The SARS‐CoV‐2 pandemic may negatively impact mood and emotion. Physical activity may protect against mood disturbance and promote positive affect. This study asked if physical activity before, during, or the change in physical activity with the pandemic, impacted affect and mood during the pandemic. US adult residents (18–74 years; N = 338) were surveyed from 29 April to 3 June 2020. Physical activity before and during the pandemic was assessed with the Physical Activity Rating survey. The Positive and Negative Affect Schedule measured affect and the Profile of Moods Questionnaire assessed mood. Comparisons between physically inactive and active participants by Analysis of Covariance found greater vigour in participants classed as physically active before the pandemic. Positive affect, vigour and esteem‐related affect were greater in participants physically active during the pandemic. Multiple linear regression revealed relationships between the change in physical activity and mood. Change in physical activity positively associated with positive affect (b = 1.06), esteem‐related affect (b = 0.33) and vigour (b = 0.53), and negatively associated with negative affect (b = −0.47), total mood disturbance (b = −2.60), tension (b = −0.31), anger (b = −0.24), fatigue (b = −0.54), depression (b = −0.50) and confusion (b = −0.23). These data demonstrate that physical activity during the pandemic, and increased physical activity relative to before the pandemic, related to better mood.
| INTRODUCTION
The global pandemic caused by SARS-CoV-2 is a source of ongoing psychological stress. This stress may result from anxieties related to the virus itself, from social isolation, professional development uncertainty, or from employment and other financial concerns (Babore et al., 2020). Each of these stressors may interact and contribute towards depressed mood and emotions. Other public health emergencies have been documented to negatively affect mental health (Kamara et al., 2017; Neria & Shultz, 2012). The SARS-CoV-2 pandemic appears similar, as initial surveys indicate increased symptoms of anxiety, depression, fatigue, mood disturbance and stress among individuals worldwide due to the pandemic (Qiu et al., 2020; Salari et al., 2020). Critically, psychological stress is a risk factor for mental illnesses, including depressive, bipolar and psychotic disorders (Kessler et al., 2008). Long-lasting psychological stress is also thought to foment disease by inducing a state of chronic inflammation, a known risk factor for many health conditions, such as cardiovascular and metabolic diseases (Dantzer et al., 1999; Mooy et al., 2000; Rohleder, 2014). There is therefore a clear need to identify means to mitigate the negative effects of psychological stress associated with the SARS-CoV-2 pandemic, including those on mood and emotion. Physical activity is movement of the body produced by skeletal muscles that results in energy expenditure (Caspersen et al., 1985).
Higher rates of physical activity are associated with reduced odds of developing depressive symptoms in epidemiologic surveys (Camacho et al., 1991;Farmer et al., 1988). Ecological momentary assessment studies have demonstrated that higher physical activity levels are associated with subsequent greater positive affect (Dunton et al., 2014;Schultchen et al., 2019;Wichers et al., 2012) as well as lower negative affect (Dunton et al., 2014;Schultchen et al., 2019). Cardiorespiratory fitness, the ability of the circulatory and respiratory systems to supply oxygen during sustained physical activity, is negatively associated with psychological distress, depression and anxiety (Babyak et al., 2000;Grasdalsmoen et al., 2020;Kandola et al., 2018). Both greater levels of cardiorespiratory fitness and higher rates of physical activity are associated with lower physiological responses to acute psychological stress, including moderated blood pressure, cortisol, heart rate and heart rate variability.
As physical activity may attenuate the negative impacts of psychological stress, such as depressed mood, we sought to test the hypothesis that adults who reported greater levels of physical activity before and during the SARS-CoV-2 pandemic would have a greater positive affect and lower mood disturbance. We were also interested in whether a change in physical activity levels was related to mood and emotions, as the impact of change in exercise volume on psychological parameters has rarely been considered outside of the athlete-context (Kageta et al., 2016;Pierce, 2002). We hypothesized that participants who increased their physical activity during the pandemic relative to before the pandemic would have a greater positive affect and lower mood disturbance, compared to participants who maintained and decreased physical activity levels. We also hypothesized that those who maintained their physical activity level during the pandemic would have a greater positive affect and lower mood disturbance than those who decreased their physical activity levels.
| Research design and participants
This was an observational, survey-based study that sought to understand if physical activity levels relate to mood and affect in a pandemic. Adults between the ages of 18 and 75 years were recruited for this study (N = 354). Additional inclusion criteria included access to an electronic device to complete the questionnaires, currently living in the United States, and able to read and respond in English. Responses from participants that did not consent (n = 1), were not in the targeted age range (n = 4), were a duplicate response (n = 1), were out of the recruitment period (n = 3), incorrectly responded to or skipped validity questions (n = 6), or did not respond to either physical activity questionnaire (n = 2) were not included in the dataset (included n = 338). All included participants consented to the study by submitting a 'yes' response to the consent page of the electronic survey. The survey was expected to take 5-10 min to complete. The study was approved by the institutional review board at the University of Houston.
| Recruitment and screening
Participants were recruited using social media posts, email distribution lists, previous research participants databases and word of mouth. Participants were invited to take part in a research study about physical activity and psychological stress; specific hypotheses were not mentioned. Interested participants were directed to an anonymized online electronic survey form (Microsoft Forms). After providing consent and confirming eligibility, the participant was directed to the survey questions. Responses to the survey were anonymous, and participants accessed the survey via an anonymized link. Responses to the survey were collected over a 6-week period from 29 April 2020 to 3 June 2020.
| Surveys and questionnaires
The survey included two validation questions, where the participants were asked to select a specific number from the list. Participants who did not select the correct number (n = 6) were not included in the dataset and analyses to exclude responses from participants not carefully reading the survey. Responses included general demographic information, SARS-CoV-2 pandemic-specific impacts (i.e., change in income, change in employment, close family or friend with known or suspected infection), physical activity and exercise practised before and during the SARS-CoV-2 pandemic, and feelings and emotions felt both in the moment (Profile of Moods Questionnaire short form) and the 7 days immediately prior (Positive and Negative Affect Schedule) to taking the survey.
Physical activity was measured using the Physical Activity Rating survey (PA-R), which asks participants to identify their overall level of physical activity on a scale from 0 ('avoid walking and exertion; e.g., always use elevator, drive when possible instead of walking') to 10 (Kolkhorst & Dolgener, 1994). Regression models built from responses to this questionnaire have been shown to be a valid means of estimating cardiorespiratory fitness (George et al., 1997). Participants were first asked to select the sentence that best described their overall level of physical activity in the three months prior to the SARS-CoV-2 pandemic ('before'; December 2019 to early March 2020). Participants then selected the physical activity description they felt best described their overall level of physical activity during the pandemic ('during'; mid-March 2020 to the date the survey was completed). Responses to the PA-R were used to categorize participants as physically inactive (selected 0-5, corresponding to less than 60 min per week of vigorous activity) or physically active (selected 6-10, corresponding to at least 1 h per week of vigorous activity). Mood states were assessed with the Profile of Moods Questionnaire short form (POMS), yielding subscale scores for tension, anger, fatigue, depression, confusion, esteem-related affect and vigour. Two items (ashamed, embarrassed) for esteem-related affect were reverse scored. Total mood disturbance (TMD) was calculated by subtracting totals for positive subscales (esteem-related affect, vigour) from the totals for negative subscales (tension, anger, fatigue, depression and confusion). Some participants did not answer all survey questions; these participants were included in the reporting of completed POMS subscales but not in total mood disturbance (n = 20).
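As a minimal illustration of the scoring rules just described, the sketch below restates the PA-R categorization, the TMD computation, and the change-in-activity score in Python. The study's analyses were performed in R, so this fragment (with hypothetical function names) is only a restatement of the scoring logic, not the authors' code.

```python
# Scoring logic described in the Methods, restated for illustration.
# (Study analyses were done in R; names here are hypothetical.)

def pa_category(pa_r_rating: int) -> str:
    """PA-R 0-5 -> physically inactive; 6-10 -> physically active."""
    return "physically active" if pa_r_rating >= 6 else "physically inactive"

def total_mood_disturbance(tension, anger, fatigue, depression, confusion,
                           esteem_related_affect, vigour):
    """TMD = sum of negative subscales minus sum of positive subscales."""
    negative = tension + anger + fatigue + depression + confusion
    positive = esteem_related_affect + vigour
    return negative - positive

def pa_change(before: int, during: int) -> int:
    """Change in physical activity: 'during' minus 'before' PA-R rating."""
    return during - before
```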
The Positive and Negative Affect Schedule (PANAS) was used to measure positive and negative affect over the week immediately prior to taking the survey (Watson et al., 1988). The Cronbach's alpha for the present study was calculated for positive (0.732) and negative (0.646) subscales. Participants were asked to rate on a five-point scale (1, 'Very Slightly or Not at all' to 5, 'Extremely') the extent to which they experienced each of 10 positive feelings and emotions and 10 negative feelings and emotions over the prior week. Items were scored as indicated by the survey to calculate a positive affect score (possible range 10-50, higher scores represent higher levels of positive affect) and a negative affect score (possible range 10-50, lower scores represent lower levels of negative affect). Some participants did not answer all survey questions; these participants were included where possible (only positive (n = 5) or negative affect score (n = 4)).
| Statistical analyses
Data were screened by visual inspection of histograms and Q-Q plots for outliers and normality. Primary dependent variables included scores on the seven dimensions of mood, TMD, positive affect, and negative affect. To identify potential covariates for inclusion in the models, the relationships between the dependent variables and categorical demographic variables and SARS-CoV-2 pandemic-specific impacts (gender, age, race, ethnicity, income, children at home, infection, change in income) were assessed by Analysis of Variance (ANOVA). To assess reliability, Cronbach's alpha was calculated for all subscales of POMS and PANAS.
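For completeness, the sketch below shows the standard Cronbach's alpha computation applied to a respondents-by-items score matrix. It is a generic illustration of the reliability statistic mentioned above, not the code used in the study (which was done in R), and the example data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondent totals
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Example: alpha for 10 hypothetical 1-5 rated items from 30 respondents.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(30, 10))
print(round(cronbach_alpha(scores), 3))
```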
Independent variables included participant physical activity categorization before (Aim 1), during (Aim 2) and the change in physical activity level from before to during the pandemic (Aim 3).
Differences between physically inactive and physically active participants on the dependent variables were assessed by Analysis of Covariance, which included the covariates gender and age. Tukey post hoc adjustments were used to control overall Type I error. Linear models were used to determine the effect of change in physical activity on dependent variables for Aim 3 and included gender and age as covariates. Statistical significance was determined a priori at p < 0.05. All statistical analyses were conducted with R (version 4.0.2).
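The study's models were fitted in R (version 4.0.2); the following Python/statsmodels fragment is only a hedged sketch of the same model structure, an ANCOVA-style comparison of activity groups and a linear model for change in physical activity, each adjusted for gender and age. The data file and column names (`vigour`, `positive_affect`, `pa_group`, `pa_change`, `gender`, `age`) are hypothetical.

```python
# Hedged sketch of the model structure described above (the study used R).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("survey_responses.csv")  # hypothetical tidy data file

# Aims 1-2: ANCOVA comparing physically inactive vs. active groups,
# adjusted for gender and age.
ancova = smf.ols("vigour ~ C(pa_group) + C(gender) + age", data=df).fit()
print(anova_lm(ancova, typ=2))

# Aim 3: linear model of change in physical activity on a mood/affect outcome,
# adjusted for gender and age.
change_model = smf.ols("positive_affect ~ pa_change + C(gender) + age", data=df).fit()
print(change_model.params["pa_change"])  # slope analogous to the reported b values
```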
| Participant characteristics
Data from 338 participants were included in these analyses. Table 1 summarizes participant characteristics. At the time of completing the survey, 117 (34.6%) had experienced a decline in income since the beginning of the pandemic, and 184 (54.4%) reported a change from working outside of the home to working from home since the beginning of the pandemic. At the time of completing the survey, 99 (29.3%) reported a confirmed or suspected case of SARS-CoV-2 infection in a close friend or a family member. Only age and gender were found to be significantly related to moods and affect (all p < 0.05); outcome variables were adjusted for age and gender.
When examining any change in physical activity level (the difference between the During and Before PA-R responses), a plurality of participants (31.8%) did not report a change in physical activity level during the pandemic relative to before; equal numbers (n = 80, 23.7%) reported small increases in physical activity level as reported small decreases in physical activity (within two levels). Overall, similar numbers of participants reported maintaining their physical activity level (n = 107) as reported an increase (n = 111) or a decrease (n = 119).
| Impact of physical activity before the pandemic
We investigated whether mood and emotions reported during the SARS-CoV-2 pandemic differed between individuals categorized as physically inactive or physically active in the three months before the pandemic. Vigour was greater in physically active participants relative to physically inactive participants (7.05 ± 0.32 vs. 5.65 ± 0.30; p = 0.001) (Table 2). No other measured mood or emotion differed between participants categorized as physically active or physically inactive before the pandemic (all p > 0.05; Table 2).
| Impact of change in physical activity during the pandemic
Multiple regression models were used to assess associations between change in physical activity and mood and emotions during the pandemic. Gender and age were included as covariates in each model.
[Table note: positive and negative affect were derived from the Positive and Negative Affect Schedule (high positive affect and low negative affect indicate better mood); Phys Inactive (rating 0-5) and Phys Active (rating 6-10) were derived from the Physical Activity Rating survey; TMD (total mood disturbance), TEN (tension), ANG (anger), FAT (fatigue), DEP (depression), ERA (esteem-related affect), VIG (vigour) and CON (confusion) are Profile of Moods Questionnaire short form (POMS) scores; bold indicates p < 0.05.]
Physical activity during the pandemic, and increases in physical activity relative to before the pandemic, appeared particularly important for mood and emotions during the pandemic. Those who decreased their physical activity during the pandemic reported worse mood and emotions compared to those who maintained or increased their physical activity. Collectively, our data imply that, in addition to regular physical activity having many desirable physical health impacts, meeting physical activity recommendations and maintaining physical activity during periods of disruption and potentially heightened psychological stress may be of particular importance for mental health.
| Physical activity before versus during the pandemic
The time at which physical activity was reported impacted the observed relationships of physical activity with mood and emotions.
Although we hypothesized that physical activity before the pandemic would be related to several mood states and emotions, being physically active before the pandemic was related to a single subscale, vigour. That is, people who were physically active before the pandemic reported higher levels of vigour during the pandemic. Our hypothesis was based on studies linking previous exercise to current mood and emotion. For example, stationary bicycling has been reported to lead to better emotional recovery after a stressor (Bernstein & McNally, 2017). There are numerous studies reporting a positive effect of exercise on vigour, but typically change in vigour is reported as a result of current exercise training or following a single exercise session (Dishman et al., 2010; Hoffman & Hoffman, 2008; Puetz et al., 2008). The data here indicate that past physical activity may also yield increased feelings of vigour. In contrast to the lack of relationships between physical activity before the pandemic and mood and emotion, physical activity during the pandemic was associated with several mood and emotion subscales.
Specifically, physically active individuals reported greater positive affect, esteem-related affect and vigour, compared to the physically inactive. Total mood disturbance was also lower in the physically active, reflecting lower negative mood attributes relative to positive mood attributes. Negative moods and emotions (negative affect, tension, anger, fatigue, depression, confusion) were not related to physical activity. This suggests that physical activity is more closely related to positive moods and emotions than negative moods and emotions. The greater number of relationships between mood and emotion with physical activity during the pandemic (i.e., positive affect, esteem-related affect, vigour, total mood disturbance) compared to physical activity before the pandemic (i.e., only vigour) was somewhat surprising. For example, young adults who regularly exercise reported better mood states, as measured by the POMS, at rest and during exercise compared to those who do not regularly exercise (Hallgren et al., 2010). This previous study led us to hypothesize that pre-pandemic physical activity would protect against pandemic-related mood impacts; this hypothesis was only partially supported. Results differing from our hypothesis may be due to inaccuracies in reporting physical activity from an earlier time period (3 months before the pandemic began), where recall may be more accurate for more recent physical activity (during the pandemic).
Other studies have also failed to find a relationship between self-reported regular physical activity and psychological responses to an acute stressor (Mücke et al., 2018). Our results could also mean that physical activity has the most benefit for improved mood and emotion during mentally and emotionally challenging periods. That is, the proximal mood and emotional benefits of exercise may not be observed in baseline conditions but become apparent in stressful situations or periods of mood disturbance. In support of this, an exercise intervention among caregivers of chronically ill family members found that aerobic exercise training resulted in significantly reduced perceived stress (Puterman et al., 2018). Further, in a systematic review of 35 exercise interventions in people with depression, the participants in the exercise groups had a moderate decrease in depression (Cooney et al., 2013). Evidence from acute exercise studies also supports this assertion. For example, a single session of exercise prior to a laboratory stressor was related to better emotional recovery (Bernstein & McNally, 2017) and enhanced emotional resilience (Bernstein & McNally, 2018). Collectively, this underscores the priority people should place on maintaining or increasing their physical activity during periods of mental stress.
[Table 3: Effects of change in physical activity on mood and emotions, adjusted for age and gender. Change in physical activity reflects the difference in self-reported physical activity (PA-R) during versus before the pandemic; abbreviations as defined above; bold indicates p < 0.05.]
| Change in physical activity
When assessing the associations between the change in participants' physical activity level from before to during the pandemic and mood, our findings can be compared with two recent reports (Brand et al., 2020; Chang et al., 2020). Similar to the current results, both report that maintaining physical activity level during the pandemic is associated with better mood. Specifically, in a cross-sectional study that included survey respondents from 18 countries, those who decreased the number of days per week they exercised had a worse mood state than those who maintained or increased the number of days per week that they participated in exercise (Brand et al., 2020). The relationship between maintenance of physical activity frequency and better mood state was also reported in a study that only included participants from Taiwan (Chang et al., 2020).
| Study strengths
Although this study was conducted during the SARS-CoV-2 pandemic, it may be applicable to other periods of disruption and uncertainty. One of the strengths of this study was that, rather than manufacturing a stressful situation in a controlled laboratory setting or enrolling participants as they encountered stress and uncertainty in their lives, all participants experienced the same overarching stressful event: the SARS-CoV-2 global pandemic. In support of this, Roth and Holmes (1987) randomized young adults who reported negative life events to exercise training, relaxation training, or no treatment. After 5 weeks, the participants in the exercise training group reported reduced depressive symptoms but the participants in the other two groups did not (Roth & Holmes, 1987).
Further, a 1-year intervention that increased moderate to vigorous physical activity among women also reported reductions in depression symptoms and perceived stress (Mendoza-Vasconez et al., 2019).
Several other studies on physical activity and exercise during the SARS-CoV-2 pandemic were recently published (Brand et al., 2020; Chang et al., 2020; Constandt et al., 2020; Ingram et al., 2020; Wood et al., 2021). However, the present study contributes in two distinct ways. One is that the physical activity survey instrument used here does not focus on the frequency (days per week) of physical activity, but rather collects information about total weekly physical activity. It is possible for someone to change the total amount of weekly physical activity without changing the frequency of physical activity within the week. Importantly, ACSM and WHO exercise and physical activity recommendations relate not only to the frequency but also to the total exercise and physical activity per week (American College of Sports Medicine; WHO). The other is that, unlike multinational surveys such as that of Brand et al. (2020), all respondents here were residents of the United States. This implies that the participants in the current study were facing the same national response to the SARS-CoV-2 pandemic. Although there were regional differences in the infection rate, the economic and broad personal challenges (i.e., food and personal item shortages) were thus more similar across our participants. Our study is also strengthened by the relatively short data collection time span. This was by design, in an attempt to collect data when there was little variation in the national pandemic response. Since the data collection period, our knowledge of SARS-CoV-2 and its treatment has changed markedly. However, the death and economic tolls have also climbed as the pandemic continues. We did not attempt to differentiate between sources of disruption arising from the pandemic, and so the results, although resulting from a narrow collection window, are likely still relevant.
| Study limitations
A limitation to this study is the study design. This is a cross-sectional study, and as such we cannot determine the direction of the relationship. For example, we found that those who were physically active during the pandemic, or those who increased physical activity during the pandemic, reported lower fatigue and higher vigour. This may indicate that being physically active prevented fatigue or preserved vigour, but it is just as possible that those who had low fatigue and high vigour were more likely to be physically active. The latter possibility is supported by literature finding that psychological distress/heightened psychological stress decreases physical activity (Olive et al., 2016; Stults-Kolehmainen & Sinha, 2014). However, intervention studies manipulating physical activity during psychological stress also support the possibility that exercise during a stressful period, such as a pandemic, can lead to positive mood and emotion (Puterman et al., 2018). A second limitation of this study is the low diversity in the sample population. Our sample may not be representative. Although income and age were broadly distributed, our sample largely reported being woman/trans woman and white.
Further, it is important to note that we did not directly measure stress in the current study, and it is possible that not all participants were experiencing heightened stress at the moment of taking the survey. However, we posit that wide-spread 'stay at home' orders and health-specific uncertainties in place during the time of the survey (May 2020) at a minimum were disruptive to pre-pandemic ways of life. Nonetheless, we are unable to explore potential relationships between physical activity, mood and emotion, and stress.
We also acknowledge limitations related to our reliance on self-reported physical activity. Self-reported physical activity generally only shows moderate associations with direct and indirect physical activity measures (Durante & Ainsworth, 1996); more objective measures (e.g., accelerometers) were not possible in the current study. Other physical activity questionnaires, such as the International Physical Activity Questionnaire, were also considered.
We chose the PA-R in the current study as it includes descriptions of both incidental physical activity (using stairs, walking vs. driving), house and yard work, as well as exercise-based physical activity. The PA-R has also typically been used to ascertain physical activity in the prior six months (George et al., 1997;Jackson et al., 1990;Kramer et al., 2020); data in the current study fell within this window. However, we acknowledge that there may be differences in the accuracy of physical activity recall between that reported before and during the pandemic, as accuracy may be greater when less time has elapsed (Durante & Ainsworth, 1996). Additionally, it may be that other variables not measured in the current study, such as level of education attainment, urbanicity, alcohol and drug use, and ongoing inflammatory disease, influence physical activity and/or mood. We also did not survey the reasons for change in physical activity in those participants reporting increased or decreased physical activity during relative to before. A recent study reported a small yet significant negative predictive effect of the experience of daily hassles (stressors) on physical activity levels post-lockdown in New Zealand (Hargreaves et al., 2021). It is intriguing to wonder if a similar result would have been found here, especially given bidirectional relationships between the experience of daily stressors and physical activity (Stults-Kolehmainen & Sinha, 2014). Finally, our results do not preclude the potential effects of physical activity before and during the pandemic on other aspects of mental health besides mood and affect that were not measured here.
| CONCLUSIONS
In conclusion, physical activity was associated with better mood and emotion during the SARS-CoV-2 global pandemic, a period of mental stress and uncertainty. The positive effects of physical activity were particularly apparent when surveying activity levels during the pandemic, rather than in the months leading up to the pandemic. Increasing or maintaining the same level of physical activity during the pandemic was important for positive mood and emotion.
Role of the pleckstrin homology domain of PLCγ1 in its interaction with the insulin receptor
A thiol-reactive membrane-associated protein (TRAP) binds covalently to the cytoplasmic domain of the human insulin receptor (IR) β-subunit when cells are treated with the homobifunctional cross-linker reagent 1,6-bismaleimidohexane. Here, TRAP was found to be phospholipase C γ1 (PLCγ1) by mass spectrometry analysis. PLCγ1 associated with the IR both in cultured cell lines and in a primary culture of rat hepatocytes. Insulin increased PLCγ1 tyrosine phosphorylation at Tyr-783 and its colocalization with the IR in punctated structures enriched in cortical actin at the dorsal plasma membrane. This association was found to be independent of PLCγ1 Src homology 2 domains, and instead required the pleckstrin homology (PH)–EF-hand domain. Expression of the PH–EF construct blocked endogenous PLCγ1 binding to the IR and inhibited insulin-dependent phosphorylation of mitogen-activated protein kinase (MAPK), but not AKT. Silencing PLCγ1 expression using small interfering RNA markedly reduced insulin-dependent MAPK regulation in HepG2 cells. Conversely, reconstitution of PLCγ1 in PLCγ1 −/− fibroblasts improved MAPK activation by insulin. Our results show that PLCγ1 is a thiol-reactive protein whose association with the IR could contribute to the activation of MAPK signaling by insulin.
Introduction
The pleiotropic actions of insulin are initiated by binding of the hormone to the extracellular domain of the insulin receptor (IR) and activation of its intrinsic tyrosine kinase activity. Insulin signal transduction requires IR autophosphorylation and phosphorylation of a number of intracellular molecules, including insulin receptor substrate 1 (IRS-1) and Shc proteins (Saltiel and Pessin, 2002). Many of these molecules contain modular domains (e.g., Src homology 2 [SH2] domain and/or phosphotyrosine-binding domains) that allow interaction with the tyrosine-phosphorylated IR. The phosphotyrosine-binding domain of IRS-1 has been shown to bind to the NPXpY motif of the IR after insulin stimulation, which leads to the recruitment of various cytosolic signaling intermediates to the cell surface (Virkamaki et al., 1999). The SH2-containing protein tyrosine phosphatase (PTP) 2 binds to the COOH-terminal phosphotyrosines of the activated IR (Rocchi et al., 1996), whereas Grb10 isoforms play a negative role in insulin signaling by binding with the tyrosine kinase loop of the activated IR via the BPS (between the pleckstrin homology [PH] domain and the SH2 domain) region (He et al., 1998; Kasus-Jacobi et al., 2000). Thus, the level of IR autophosphorylation may serve a crucial function in controlling both the phosphorylation of endogenous substrates and the interaction between the IR β-subunit and a number of proteins that regulate receptor-based signals.
The cytoplasmic domain of the IR β-subunit contains reactive cysteine thiol(s) that can modulate the receptor catalytic activity (Li et al., 1991; Bernier et al., 1995; Schmid et al., 1998). The importance of the IR cytoplasmic cysteines for the association between this receptor and intracellular effectors has been investigated in intact cells using 1,6-bismaleimidohexane (BMH), an irreversible thiol-specific homobifunctional cross-linking reagent (Garant et al., 2000). This approach has led to the identification of a complex between the human IR and a thiol-reactive membrane-associated protein (TRAP). The IR-TRAP complex migrates as an ~250 kD protein on SDS-PAGE under reducing conditions and does not contain the receptor α-subunit as assessed by immunoblot analysis. In the same report, point-mutation analyses have shown that cysteine 981 of the cytoplasmic domain of the human IR β-subunit is the nucleophilic thiol responsible for the covalent binding to TRAP after BMH-induced cross-linking (Garant et al., 2000).
To further our understanding of the biological importance of TRAP in insulin signaling, we purified the IR-TRAP complex and identified TRAP as PLCγ1 using matrix-assisted laser desorption/ionization (MALDI) analysis.
Here, our coimmunoprecipitation assays demonstrated constitutive and insulin-inducible association of PLCγ1 with the IR in a number of cultured cell lines and a primary culture of rat hepatocytes, which reflects the potential for physiological significance. Structurally, the catalytic region of PLCγ1 contains an insert with two SH2 domains and an SH3 domain. It has been proposed that the two SH2 domains are essential for association of PLCγ1 with activated growth factor receptor tyrosine kinases (Middlemas et al., 1994; Ji et al., 1999), whereas the SH3 domain directs PLCγ1 to bind to the cytoskeleton (Park et al., 1999). Whether these and other motifs play an important function in the recruitment of PLCγ1 to the IR remains unknown.
The dynamic association between PLCγ1 and the IR must depend on specific domains within both proteins. In an attempt to identify some of these motifs, we have expressed mutant forms of PLCγ1 and analyzed the pattern of IR-PLCγ1 association in intact cells. Now, we report on the identification of a domain of PLCγ1 containing the PH and EF-hand (PH-EF) that is required for interaction with the IR. Overexpression of the PH-EF fragment or reduction of PLCγ1 expression using small interfering RNA (siRNA) abrogates MAPK regulation by insulin, strengthening the notion that PLCγ1 plays an important role in insulin signaling (Kayali et al., 1998; Eichhorn et al., 2002).
Results
Insulin promotes formation of the IR-TRAP complex
CHO cells expressing the human IR were incubated with insulin and then subjected to a cross-linking reaction with BMH before cell lysis and Western blotting with an antibody against the IR β-subunit (Fig. 1 A, top). The IR-TRAP complex was detected in lysates from unstimulated cells upon BMH addition. Insulin increased the recruitment of TRAP to the IR with a concomitant reduction in the free IR β-subunit (Fig. 1 A, lane 4 vs. lane 3), but not in the IR α-subunit (Fig. 1 A, bottom). A second protein band was detected just below the IR-TRAP complex (Fig. 1 A, lane 4); however, it contained a much smaller amount of the conjugated IR β-subunit. Thus, TRAP can interact with the IR both in a constitutive and insulin-inducible manner. Of significance, association between the activated IR and TRAP was also observed in NIH3T3-IR cells and the human HepG2 cell line after incubation with BMH (unpublished data). The formation of the IR-TRAP complex was then assessed in anti-IR immunoprecipitates. Metabolically labeled CHO-IR cells were left untreated or incubated with insulin to induce IR autophosphorylation, followed by a cross-linking reaction with BMH. Analysis of IR immunoprecipitates from BMH-treated cells demonstrated insulin's ability to increase the extent of IR-TRAP covalent association with a concomitant decrease in the amount of free IR β-subunit (Fig. 1 B).
To ascertain whether the length of the cross-linker spacer arm dictates the extent of IR-TRAP covalent association, insulin-stimulated CHO-IR cells were incubated either with bismaleimidoethane (BMOE), bismaleimidobutane, or BMH, which are three related thiol-specific homobifunctional cross-linking reagents whose maleimido groups are separated by flexible spacer arms of 8.0, 10.9, and 16.1 Å, respectively. Insulin promoted IR-TRAP complex formation irrespective of the cross-linker used (unpublished data), indicating that the nucleophilic thiols (on TRAP and the IR β-subunit) may be separated by at least 8 Å.
Characterization of TRAP
The silver-stained gel of the anti-IR immunoprecipitates resolved four major bands that corresponded to the TRAP/β-subunit complex, IR proreceptor (αβ-dimer), and α- and β-subunits, respectively, with apparent molecular masses ranging between ~100 kD (β-subunit) and ~275 kD (TRAP/β-subunit) (Fig. 2). The IR β-subunit and TRAP/β-subunit protein bands were subjected to in-gel digestion with trypsin, followed by peptide mass fingerprinting and MALDI analysis of the eluted peptides to provide tentative identification of each protein species. 17 and 15 peptide masses covering 17 and 9% of the IR β-subunit, respectively, were found in both protein bands (estimated z values of 2.16 and 2.38, respectively), whereas 12 peptide masses within the TRAP/β-subunit band matched the 155-kD PLCγ1 (estimated z value of 2.39), corresponding to 10% of the molecule. These peptides covered various regions of PLCγ1. Analysis of recombinant GST-tagged PLCγ1 SH2/SH3 domain fusion protein by MALDI returned 18 peptide masses (estimated z value of 1.82), many of which were strong matches with those found in the TRAP/β-subunit protein band. Subsequent immunoblot analyses revealed the presence of PLCγ1 in the IR-TRAP complex (see below).
[Figure 1 legend: (A) serum-starved CHO-IR cells stimulated with 100 nM insulin for 5 min at 37°C, cross-linked with BMH, and immunoblotted with anti-IR β-subunit or anti-IR α-subunit antibodies; (B) metabolically labeled CHO-IR cells stimulated with insulin, cross-linked, and anti-IR immunoprecipitates resolved by SDS-PAGE and detected by autoradiography; asterisk, TRAP/IR β-subunit complex.]
The cross-linking of PLCγ1 with the IR upon cell treatment with BMH indicates that both proteins must contain reactive cysteines. Therefore, the ability of PLCγ1 to react with maleimidobutyrylbiocytin was investigated in HEK293 cells transfected with vector alone or HA-tagged PLCγ1. In this thiol-specific biotinylation assay (Bernier et al., 1995), recombinant as well as endogenous PLCγ1 were readily modified (unpublished data), supporting the notion that PLCγ1 contains reactive thiol group(s).
Immunodetection of the PLCγ1-IR complex
The association of PLCγ1 with the IR was evaluated in CHO-IR cells that were left untreated or exposed to a saturating concentration of insulin (100 nM) for 3-30 min. Immunoblotting the anti-IR immunoprecipitates with anti-PLCγ1 antibody showed a time-dependent increase in PLCγ1 association with the IR in response to insulin that persisted throughout the 30 min of the experiment (Fig. 3 A). The interaction is stimulated by insulin in a dose-dependent manner, with detectable levels at 5 nM insulin (Fig. 3 B). Similar results were obtained by probing anti-PLCγ1 immunoprecipitates with anti-IR antibody (unpublished data). When immunoprecipitation was performed with a control IgG, no cosedimentation of the IR with PLCγ1 was detectable (unpublished data).
PLCγ1 is a member of the phosphoinositide (PI)-specific PLC family whose phosphorylation by many activated nonreceptor and receptor tyrosine kinases results in its subsequent activation (Rhee, 2001). To test the prediction that PLCγ1 tyrosine phosphorylation could occur with insulin, CHO-IR cells were left untreated or treated with insulin for 15 min in the absence or presence of orthovanadate, a PTP inhibitor. The extent of PLCγ1 phosphorylation at Tyr-783 was then determined in total cell lysates by Western blot analysis using a commercially available phosphospecific antibody. In the absence of vanadate, the levels of tyrosine-phosphorylated PLCγ1 were barely detectable under basal conditions and after insulin stimulation (unpublished data). However, PLCγ1 tyrosine phosphorylation was increased in a dose-dependent manner after the addition of insulin to vanadate-pretreated CHO-IR cells (Fig. 3 C), peaking within 10-30 min (unpublished data). Thus, insulin was found to induce tyrosine phosphorylation of PLCγ1, and this effect was clearly sensitive to PTP inhibition.
To further evaluate the role of insulin in mediating tyrosine phosphorylation and association of PLCγ1 with the activated IR, anti-PLCγ1 immunoprecipitates from vanadate-treated CHO-IR cells were probed with anti-IR. Insulin stimulation led to a significant increase (6.4 ± 1.6-fold; n = 6) in IR cosedimentation with PLCγ1 and in phosphoPLCγ1 levels (Fig. 3 D).
Physiological significance of the IR-PLCγ1 association
Next, we investigated the role of insulin in the recruitment of PLCγ1 to the endogenous IR in insulin-responsive HepG2 cells. These cells were pretreated with orthovanadate and then left untreated or exposed to 100 nM insulin for 15 min. Fig. 4 A shows the results of a typical experiment analyzing PLCγ1 immunoprecipitates that were blotted for the IR β-subunit. In agreement with our previous results with CHO-IR cells from this report, a constitutive and insulin-inducible cosedimentation of the IR with PLCγ1 was observed, suggesting that insulin could promote the recruitment of PLCγ1 to the IR in a number of cell types. Higher tyrosine phosphorylation of PLCγ1 was also noted in response to insulin when PLCγ1 was immunoprecipitated and then visualized with either anti-phosphoPLCγ1 (pTyr-783) or phosphotyrosine (clone RC20) antibody (Fig. 4 A). Next, we determined that endogenous PLCγ1 interacted with the IR in a primary culture of rat hepatocytes (Fig. 4 B). These results strongly support a physiological role for the PLCγ1 association with the IR in insulin signaling.
PLCγ1 colocalizes with the IR at the plasma membrane
Immunofluorescence microscopy was used to test whether the subcellular localization of the IR, PLCγ1, and tyrosine-phosphorylated PLCγ1 is affected after stimulation of CHO-IR cells with insulin. The IR was primarily found at the plasma membrane when cells were left untreated or incubated with insulin for 10 min (Fig. 5, left panels). The distribution of PLCγ1 throughout the cytosolic space was not affected by the addition of insulin (Fig. 5, right panels). In contrast, a strong tyrosine-phosphorylated PLCγ1 signal was found at the plasma membrane of insulin-stimulated cells (Fig. 5, middle panels). Confocal sectioning showed that the ventral side of the cells (the point of attachment to the substratum) was largely devoid of IR and PLCγ1 (unpublished data), whereas the apical side was decorated with both the IR and tyrosine-phosphorylated PLCγ1 in the form of small clusters surrounding the cell membrane that are likely to be derived from the cortical cytoskeleton (Fig. 5, bottom panels).
Role of PI 3-kinase in mediating PLCγ1 recruitment to the IR
Recently, it has been shown that the generation of PI 3,4,5-trisphosphate by PI 3-kinase may serve to target PLCγ1 to the plasma membrane via its PH domain (Falasca et al., 1998). Therefore, we sought to examine the potential role of the PI 3-kinase pathway in the modulation of PLCγ1 binding to the IR. To address this issue, CHO-IR cells were pretreated with wortmannin, a pharmacological inhibitor of PI 3-kinase, followed by the addition of insulin. Blocking insulin-dependent phosphorylation of AKT on Ser-473 with wortmannin failed to inhibit PLCγ1 association with the IR (Fig. 6 A). Moreover, anti-phosphoPLCγ1 (pTyr-783) immunoprecipitates did not display a reduction in BMH-induced IR-PLCγ1 cross-linking after pretreatment of cells with wortmannin (Fig. 6 B), suggesting that PLCγ1 recruitment to the ligand-activated IR is independent of the PI 3-kinase pathway.
[Figure 4 legend (partial): anti-PLCγ1 PH immunoprecipitates blotted with the indicated antibodies; (B) primary rat hepatocytes treated with or without 100 nM insulin for 10 min, lysates immunoprecipitated with anti-PLCγ1 antibody or a control mAb and blotted with anti-IR α-subunit antibody.]
[Figure 5 legend: cellular localization of the IR and tyrosine-phosphorylated PLCγ1 in untreated or insulin-stimulated CHO-IR cells by confocal immunofluorescence of the IR β-subunit, pPLCγ1, total PLCγ1, and F-actin; arrows, plasma membrane localization of tyrosine-phosphorylated PLCγ1; arrowheads, punctate signal coalescence at the plasma membrane.]
SH2 domain-independent association of PLCγ1 with the IR
In addition to its catalytic subdomains, PLCγ1 has a region that contains two adjacent SH2 domains and an SH3 domain (Rhee, 2001). It has been proposed that the two SH2 domains are prerequisite for the association of PLCγ1 with activated receptors for PDGF and EGF. The R586K and R694K mutations within the rat PLCγ1 SH2 domains (N−C−) block the ability of PLCγ1 to associate with activated PDGF receptors and to become tyrosine phosphorylated (Ji et al., 1999). To test the importance of SH2 domains in mediating PLCγ1 association with the IR, HEK293 cells were transiently cotransfected with the IR or EGF receptor along with either the HA-tagged PLCγ1 wild-type or N−C− double SH2 domain mutant. After stimulation with insulin or EGF, total cell lysates were prepared and analyzed by immunoblotting. Both the wild-type and mutant PLCγ1 proteins were expressed at comparable levels (Fig. 7 A, middle panels). As anticipated, the mutant PLCγ1 protein was not tyrosine phosphorylated upon the addition of insulin or EGF, despite marked autophosphorylation of these receptors (Fig. 7 A). However, the expressed N−C− PLCγ1 mutant was coprecipitated with the IR, but not with the liganded EGF receptor (Fig. 7 B).
To further test the selectivity of PLCγ1 interaction with these receptors, we transfected CHO cells stably expressing both the IR and EGF receptors (CHO-EI) with wild-type PLCγ1 or the N−C− mutant. Stimulation of CHO-EI cells in response to insulin or EGF resulted in the cosedimentation of wild-type PLCγ1 with activated IR or EGF receptors (Fig. 7 C). In contrast, the N−C− PLCγ1 mutant was recruited to the liganded IR, but not to EGF receptors (Fig. 7 C). Together, these data show the SH2 domain-independent association of PLCγ1 with the IR.
A number of IR-interacting proteins, including Gab-1 and IRS, contain a PH domain that allows their membrane association. To assess the importance of this domain in the recruitment of PLCγ1 to the IR in intact cells, various experiments were performed using the HA-tagged PH-EF domain (aa 1-301) of rat PLCγ1. This construct was readily detected as a 40-kD protein upon transient transfection in HEK293 cells and upon immunoprecipitation using anti-HA or an antibody against the PLCγ1 PH domain (Fig. 8 A). Expression of the HA-tagged PH-EF construct led to a 60 ± 11% decrease (P < 0.01; n = 4) in the ability of insulin to stimulate recruitment of cellular PLCγ1 to the activated IR (Fig. 8 B, top left). To determine if a PLCγ1 mutant lacking the PH-EF motif could also interfere with this interaction, an NH2-terminal truncation of 301 amino acids was performed to generate the ΔPH-EF PLCγ1 mutant. HEK293 cells expressing HA-tagged ΔPH-EF displayed no reduction in the binding of endogenous PLCγ1 with the IR (Fig. 8 B, top right), but markedly abrogated the PLCγ1-EGF receptor interaction (Fig. 8 C, middle right). Importantly, expression of the PH-EF construct did not block PLCγ1 association with the activated EGF receptor in HEK293 cells (Fig. 8 C, middle left) or HepG2 cells (unpublished data). Ligand-mediated phosphorylation of the EGF receptors was normal in all conditions tested (Fig. 8 C, top panels). These results are consistent with the PH-EF domain being required for PLCγ1 interaction with the IR. Overexpression of PH-EF had no effect on the stimulation of IR and IRS tyrosine phosphorylation in response to insulin (Fig. 8 D, top), and it did not inhibit insulin stimulation of AKT phosphorylation. However, the levels of p42/44 MAPK (ERK) phosphorylation elicited by insulin were reduced by ectopic expression of the PH-EF construct (Fig. 8 D, third panel).
To further test the requirement of PLCγ1 for insulin signaling, we used PLCγ1−/− fibroblasts reconstituted with the IR alone or together with wild-type PLCγ1. After serum withdrawal, cells were stimulated in the absence or presence of insulin, and the phosphorylation of endogenous ERK and AKT was then measured in total cell lysates using phosphospecific antibodies. Insulin-stimulated ERK phosphorylation was activated to a greater extent in cells reconstituted with wild-type PLCγ1, whereas there was only an ~20% increase in AKT phosphorylation levels by insulin (Fig. 9 A). Lastly, the role of PLCγ1 in insulin action was determined using siRNA methodology. HepG2 cells transfected with a control siRNA duplex had no reduction in PLCγ1 expression (Fig. 9 B, second panel). However, with a PLCγ1-specific siRNA duplex targeting the 2979-2999 region of the human PLCγ1 mRNA-coding sequence, the expression of PLCγ1 dropped to ~30% of the levels of siRNA controls. Exposure of these cells to insulin activated the phosphorylation of IRSs and AKT to levels equivalent to those in insulin-stimulated cells transfected with control siRNA (Fig. 9 B). More significantly, incubation with PLCγ1 siRNA attenuated ERK phosphorylation elicited by insulin (Fig. 9 B, fifth panel). These results demonstrate the efficiency of the siRNA template and indicate the pathway of insulin signaling to which PLCγ1 may relate.
[Figure 8 legend: (A) immunoprecipitation of the HA-tagged PH-EF construct with anti-HA or anti-PLCγ1 PH antibodies (asterisk, endogenous PLCγ1); (B, C) HEK293 cells transfected with pcDNA, PH-EF, or ΔPH-EF and stimulated with insulin or EGF, with cosedimentation of endogenous PLCγ1 in anti-IR or anti-EGF receptor immunoprecipitates detected by immunoblotting; (D) lysates of pcDNA- or PH-EF-transfected, insulin-stimulated cells blotted with the indicated antibodies; representative of at least three experiments.]
Discussion
We have identified and characterized a signaling complex between the IR and PLCγ1 in a number of cultured cell lines and in a primary culture of rat hepatocytes. The results originate from our initial efforts aimed at identifying a thiol-reactive protein that covalently associates with the IR upon cell treatment with the cross-linking agent BMH. The IR-associated protein was found to be PLCγ1 by mass spectrometry analysis, and this was independently confirmed by reciprocal immunoprecipitation experiments. Insulin increases the binding of PLCγ1 to the activated IR in an SH2 domain-independent manner. Using various PLCγ1 constructs, we found that the NH2-terminal region of PLCγ1 encompassing the PH and EF-hand domain is necessary for binding the IR. Additional experiments demonstrated that PLCγ1 and its interaction with the IR play an important role in ERK activation in response to insulin.
Increase in PLCγ1-mediated PI(4,5)-bisphosphate hydrolysis has been reported in anti-IR immunoprecipitates from insulin-stimulated 3T3-L1 adipocytes (Eichhorn et al., 2001). However, whether the binding of PLCγ1 to the IR was direct or through an accessory protein remains unclear. It should be noted that c-Cbl tyrosine phosphorylation by insulin requires the adaptor protein APS, which coordinates interaction between c-Cbl and the activated IR (Liu et al., 2002). Our data show the direct interaction between PLCγ1 and the IR using cross-linking methodology in intact cells. A significant conformational change of the cytoplasmic region of the receptor β-subunit occurs as the result of IR autophosphorylation (Baron et al., 1992; Lee et al., 1997). Hence, the mechanism by which PLCγ1 is recruited to the IR in response to insulin may involve a change in conformational flexibility at the interface between the two proteins, which brings the pair of reactive thiols (Cys 981 of the IR [Garant et al., 2000] and that of PLCγ1) into close proximity. The inter-thiol distance could be as much as 8 Å, as the BMH analogue (BMOE) was efficient at promoting the formation of a covalent IR-PLCγ1 complex.
Our results show insulin-stimulated phosphorylation of a positive regulatory residue (Tyr-783) on PLCγ1 both in CHO-IR and HepG2 cells, as well as in HEK293 cells and PLCγ1−/− fibroblasts transiently expressing wild-type PLCγ1. A commercially available phosphoPLCγ1 antibody (pTyr-783) was used, and the results were confirmed with anti-phosphotyrosine. By contrast, no PLCγ1 tyrosine phosphorylation was detected upon addition of insulin in 3T3-L1 adipocytes (Eichhorn et al., 2001). It has been suggested that kinases of the Src family have the ability to phosphorylate and activate PLCγ1 (Nakanishi et al., 1993). Src-related kinases are abundant in caveolin-rich raft preparations of adipocytes (Mastick and Saltiel, 1997; Müller et al., 2001) and CHO-IR cells (unpublished data), and are believed to play a role during insulin signaling (Sun et al., 1996). Because the IR appears to be incapable of directly phosphorylating PLCγ1 (Nishibe et al., 1990), it is possible that upon insulin stimulation, PLCγ1 is repositioned for phosphorylation by raft-associated Src-family kinases. PLCγ1 contains several tyrosine residues that are targets of receptor and nonreceptor tyrosine kinases and whose phosphorylation may contribute to positive or negative regulation of PLCγ1 (Kim et al., 1991; Plattner et al., 2003). However, a subset of these phosphotyrosine moieties may function as a docking site for SH2 domain-containing proteins during signal transduction (Pei et al., 1997) rather than participating directly in the regulation of PLCγ1.
PLCγ1 accumulates preferentially at cortical actin structures in EGF-stimulated A431 cells (Diakonova et al., 1995), where it binds to actin-binding proteins via its SH3 domain (Park et al., 1999). Furthermore, interaction between the COOH-terminal SH2 domain of PLCγ1 and the actin cytoskeleton has been demonstrated in an in vitro binding assay (Pei et al., 1996). Our data show that upon insulin stimulation, the IR and tyrosine-phosphorylated PLCγ1 colocalize with the actin clusters that ring the plasma membrane. These results are consistent with the important role played by PLCγ1 in cytoskeletal reorganization and membrane ruffling after cell activation (Yu et al., 1998). Similarly, PI 3-kinase is linked to cytoskeletal reorganization (Vanhaesebroeck et al., 2001) and is required for full activation of PLCγ1 in some models (Rhee, 2001). Inhibition of PI 3-kinase activity by wortmannin has provided an opportunity to assess the mechanism of PLCγ1 binding to the membrane-associated IR in response to insulin. We found that the insulin-stimulated formation of PI(3,4,5)-trisphosphate does not act as a targeting signal for PLCγ1 interaction with the IR.
A principal conclusion of this report is that SH2 domains have little role, if any, in promoting PLCγ1 recruitment to the IR. In contrast, disabling both SH2 domains was found to prevent the N−C− PLCγ1 mutant from associating with ligand-activated receptors for PDGF (Ji et al., 1999) and EGF (this paper). In this regard, Grb14 has been proposed to interact with the IR in an SH2-independent manner, with the BPS domain being the main interacting region (Kasus-Jacobi et al., 2000). It is noteworthy that the binding of the N−C− PLCγ1 mutant to the IR occurs even though the mutant is not phosphorylated at Tyr-783 in response to insulin, indicating that efficient PLCγ1 association with the IR may not require this phosphorylation event. We established that the NH2-terminal region of PLCγ1 encompassing the PH-EF domain is able to bind to the IR, as is the full-length protein, thereby selectively blocking recruitment of endogenous PLCγ1 to the activated IR, but not EGF receptors. Importantly, our data show that a truncated PLCγ1 mutant lacking the PH-EF region fails to bind to the IR, which is consistent with the notion that the PH-EF-hand domain is necessary for PLCγ1 association with the IR. Mutations in the PH domain of PLCγ1 did not affect recruitment of PLCγ1 to the EGF receptor (Matsuda et al., 2001). It is now believed that PH domains can interact specifically with a subset of signaling molecules rather than exerting promiscuous effects. For example, the IRS-1 PH domain has recently been shown to bind to a protein ligand referred to as PHIP (Farhang-Fallah et al., 2000), and interaction of F-actin with proteins that contain PH domains directs them to sites of cytoskeletal rearrangement at the plasma membrane (Yao et al., 1999). On the other hand, the β-adrenergic receptor kinase PH domain must bind to heterotrimeric G-protein βγ subunits and to PI(4,5)-bisphosphate to promote effective membrane targeting (Pitcher et al., 1995). The importance that EF-hand alone has in modulating IR-PLCγ1 association will be the subject of future investigations.
Our findings suggest that PH-EF overexpression may exert selective effects in insulin action through alteration in PLCγ1 signaling. Expression of PH-EF has been found to inhibit endogenous PLCγ1 association with the IR with concomitant reduction in ERK (but not AKT) phosphorylation in response to insulin. Similarly, increase in ERK phosphorylation by insulin was markedly reduced after blocking PLCγ1 expression in HepG2 cells using siRNA methodology. Additionally, reconstitution of PLCγ1 in PLCγ1−/− fibroblasts significantly elevates the ability of insulin to promote ERK activation. PLCγ1 has been implicated in the regulation of MAPK activation in some systems (Zhang et al., 2000; Jacob et al., 2002). Together, our results support the hypothesis that PLCγ1 association with the IR is necessary for ERK regulation in response to insulin. This may be of physiological significance, as the unique structure of PLCγ1 with its PH, SH2, and SH3 domains may allow scaffolding of effector proteins harboring phosphotyrosine residues or proline-rich domains near the activated IR. The SH3 domain of PLCγ1 has been shown to be involved in SOS-mediated Ras activation (Kim et al., 2000) and to interact with c-Cbl (Tvorogov and Carpenter, 2002). The finding that the activated hybrid receptor encompassing the tyrosine kinase domain of the IR requires PLCγ1 for efficient calcium mobilization is potentially important (Telting et al., 1999). On the other hand, a PLCγ1 mutant lacking the lipase activity can induce DNA synthesis (Smith et al., 1994), indicating that the products of PLCγ1 activation and its associated mobilization of intracellular calcium may not be required for all aspects of PLCγ1 signaling. In view of the fact that PLCγ1 can fulfill functions that are not necessarily dependent on its enzymatic activity, this raises the possibility of a unique activation mechanism whereby PLCγ1 acts as an adaptor protein. To what extent the findings reported here relate to the role of PLCγ1 in insulin action remains to be elucidated.
Materials
The anti-human IR mAbs for immunoprecipitation (clones 29B4 and CII 25.3) were purchased from Calbiochem. The anti-IR β-subunit antibody as well as HRP-linked phosphotyrosine (clone RC20) antibody for Western blot were purchased from Transduction Laboratories. The phospho-p42/44 MAPK and phospho-AKT antibodies were purchased from Cell Signaling Technology. The anti-phospho-PLCγ1(Tyr-783) antibody for immunoprecipitation and immunofluorescence experiments (sc-12943R), PLCγ1 SH2/SH3 domain fusion protein (residues 530-850), and anti-IR α-subunit antibody were purchased from Santa Cruz Biotechnology, Inc. The anti-phospho-PLCγ1(Tyr-783) antibody for Western blot was purchased from Biosource International. The anti-PLCγ1 PH mAb (generated against a 19-aa sequence within the PH domain) was purchased from CHEMICON International, and a mixture of anti-PLCγ1 mAbs (05-163) was obtained from Upstate Biotechnology. The HA epitope antibodies were purchased from Covance. Alexa Fluor® secondary antibodies, Alexa Fluor® 568-conjugated phalloidin, and TO-PRO®-3 were purchased from Molecular Probes, Inc. FuGENE™ 6 and LipofectAMINE™ 2000 were purchased from Roche and Invitrogen, respectively. Recombinant human insulin and EGF were purchased from Calbiochem and Upstate Biotechnology, respectively. BMH, BMOE, and bismaleimidobutane were purchased from Pierce Chemical Co. Wortmannin, sodium orthovanadate, and DMSO were purchased from Sigma-Aldrich. The commercial sources for electrophoresis reagents, culture media, sera, films, HRP-linked secondary antibodies, and the ECL detection system for immunoblot detection have been described previously (Garant et al., 2000).
Plasmids and mutagenesis
The pRK5 vector containing cDNA for the HA-tagged rat PLCγ1 wild-type and the PLCγ1 SH2 domain double mutant (N−C−) were obtained from Graham Carpenter (Vanderbilt University, Nashville, TN). The plasmid encoding the human EGF receptor (pXER) was provided by Alexander Sorkin (University Colorado Health Science Center, Denver, CO). The HA-tagged PH-EF domain of rat PLCγ1 (1-301) was amplified from the pRK5/HA-PLCγ1 plasmid using PCR-based site-directed mutagenesis with primers to introduce a HindIII site between EF-hand and catalytic domain "X" of PLCγ1. A 2,961-bp HindIII-HindIII fragment was excised, and the linearized pRK5/HA-tagged PH-EF plasmid was then self-ligated. An HA-tagged truncated PLCγ1 mutant lacking the PH-EF domain (ΔPH-EF) was created using PCR-based site-directed mutagenesis with primers to introduce EcoRI sites both at the junction between HA epitope and PH domain and between EF-hand and catalytic domain "X". A 903-bp EcoRI-EcoRI fragment was excised and the linearized pRK5/HA-tagged ΔPH-EF plasmid was then self-ligated. The constructs were verified by DNA sequence analysis.
Cell culture and metabolic labeling
CHO cells stably expressing wild-type human IR or both the IR and EGF receptors (CHO-EI cells) have been described previously (Kole et al., 1996). HEK293 and liver-derived HepG2 cells were purchased from American Type Culture Collection (Manassas, VA), and PLCγ1−/− mouse embryonic fibroblasts were gifts from Dr. G. Carpenter (Ji et al., 1998). All CHO cell lines were expanded and maintained in Ham's F12 supplemented with 10% FBS, 100 U/ml penicillin, and 100 μg/ml streptomycin, whereas HepG2 and HEK293 cells were maintained in DME and McCoy's 5A medium containing 10% FBS and antibiotics. Cells were incubated in a humidified atmosphere of 5% CO2 at 37°C.
For metabolic labeling experiments, confluent monolayers of CHO-IR were incubated for 16 h with 60 μCi/ml Trans35S-label (ICN Biomedicals) in methionine- and cysteine-free RPMI 1640 medium containing 3% FCS. After a series of PBS washes, cells were serum starved for 3 h and were then subjected to treatments as described below.
Isolation and culture of rat hepatocytes
Hepatocytes were isolated from 5-mo-old male Fischer 344 rats by the collagenase perfusion method of Seglen (Ikeyama et al., 2002). The isolated cells were seeded onto Biocoat Collagen I cellware (BD Discovery Labware) in William's E medium supplemented with 5% FBS, 2 mM L-glutamine, 100 U/ml penicillin, and 100 μg/ml streptomycin for 2 h in 5% CO2 at 37°C to allow attachment to the dishes. The medium was then replaced with serum-free William's E medium plus the above supplements, and cells were cultured for an additional 16 h before treatment. This procedure results in <5% contamination with nonhepatocyte cells.
Transient transfection assays
HEK293 cells were cultured for 24 h until 60-80% confluence was reached. Transient transfection was performed according to the manufacturer's protocol for the use of FuGENE™ 6. In brief, empty expression vector (pcDNA3.1) and expression plasmids encoding HA-tagged PLCγ1 [wild type or N−C−] together with recombinant human IR or human EGF receptor were mixed with the transfection reagent and directly added into the culture plates at a ratio of 1.5 μg of each plasmid per 60-mm dish. Both CHO-EI cells and PLCγ1−/− mouse embryonic fibroblasts were transfected using LipofectAMINE™ 2000 according to the manufacturer's protocol. 24 h after transfection, cells were serum starved for 18 h and then subjected to a 30-min treatment with 200 μM orthovanadate followed by stimulation with 100 nM insulin or 20 nM EGF for 5-10 min at 37°C. Transfection efficiency was monitored using a plasmid DNA encoding eGFP.
siRNA preparation and cell transfection
The siRNA sequence targeting human PLCγ1 (GenBank/EMBL/DDBJ accession no. NM_002660) was from position 2979-2999 relative to the start codon. This PLCγ1 sequence was reversed and used as unspecific siRNA control. 21-nt RNAs were purchased from Dharmacon in deprotected and desalted form, and the formation of siRNA duplex (annealing) was performed according to the manufacturer (Dharmacon). Subconfluent HepG2 cells were transiently transfected with siRNAs using Oligofectamine™ according to the manufacturer's protocol (Life Technologies). In brief, 100 μl Opti-MEM® I medium and 10 μl Oligofectamine™ per 60-mm dish were preincubated for 5 min at RT. During the time for this incubation, 100 μl Opti-MEM® I medium was mixed with 20 μl of 20 μM siRNA. The two mixtures were combined and incubated for 20 min at RT for complex formation. The entire mixture was then added to the cells in one dish resulting in a final concentration of 100 nM for the siRNAs. Cells were usually assayed 48-72 h after transfection. Specific silencing was confirmed by at least three independent experiments.
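As a rough check on the dilution described above, the stated volumes are consistent with the reported 100 nM final concentration only if each 60-mm dish holds about 4 ml of medium; the short sketch below works through that arithmetic. The 4 ml figure is an assumption used purely for illustration, since the final medium volume is not stated in the protocol.

# Sketch: check that 20 ul of 20 uM siRNA yields ~100 nM per dish.
stock_conc_um = 20.0     # siRNA stock concentration in uM (stated above)
stock_vol_ul = 20.0      # stock volume used in ul (stated above)
final_vol_ul = 4000.0    # ASSUMED total medium volume per 60-mm dish, in ul

amount_pmol = stock_conc_um * stock_vol_ul          # 400 pmol of siRNA
final_conc_nm = amount_pmol / final_vol_ul * 1000   # pmol/ul equals uM; x1000 gives nM
print(f"final siRNA concentration ~ {final_conc_nm:.0f} nM")  # prints ~100 nM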
IR-TRAP cross-linking in intact cells
Serum-starved cells were washed twice in PBS, and were then incubated in Krebs Ringer phosphate buffer for 5 min at 37°C. 100 nM insulin was added for 5 min and cells were then transferred to thermoregulated aluminum cooling plates set at 6°C. The cross-linking reaction was initiated by the addition of 100 μM BMH or vehicle (DMSO) and quenched 10 min later with 4 mM L-cysteine. In some instances, cross-linking was performed in the presence of 100 μM BMOE or BMB. For wortmannin treatment, 100 nM wortmannin was added to the cells 30 min before insulin stimulation.
Immunoprecipitation and immunoblotting
Cells were lysed in immune precipitation buffer (20 mM Tris-HCl, pH 7.5, 137 mM NaCl, 1 mM orthovanadate, 100 mM NaF, 0.1% SDS, 0.5% deoxycholate, 1% Triton X-100, 0.02% sodium azide, 0.25 mM Pefabloc-SC [Boehringer], 1 mM benzamidine, 8 μg/ml aprotinin, and 2 μg/ml leupeptin) for 20 min on ice, and then centrifuged at 10,000 × g for 20 min at 4°C to sediment insoluble materials. The clarified lysates were incubated with the indicated antibodies for 16 h at 4°C with rocking. Then, protein A/G-agarose (Oncogene Research Products) beads were added and the incubation was continued at 4°C for 2 h. The beads were pelleted by centrifugation and washed twice in the same buffer and twice in 50 mM Hepes, pH 7.4, and 0.1% Triton X-100 before solubilization in Laemmli sample buffer supplemented with 5% 2-mercaptoethanol. In some experiments, cells were lysed directly in Laemmli sample buffer containing 5% 2-mercaptoethanol and 1 mM orthovanadate. After heating at 70°C for 10 min, proteins were separated by SDS-PAGE and were electrotransferred onto polyvinylidene difluoride membranes. Detection of individual proteins was performed by immunoblotting with specific primary antibodies and visualized by ECL. Signals were quantitated by densitometry coupled with the ImageQuant software (Molecular Dynamics). Where indicated, membranes from 35S-labeling experiments were dried and autoradiography was performed.
Purification of the IR-TRAP complex
10 × 150-mm dishes of CHO-IR cells were incubated with 100 nM insulin for 5 min and were then subjected to cross-linking reaction with BMH as shown above. After immunoprecipitation of the cell lysates with anti-IR antibodies prebound to protein G-agarose, the immune pellets were washed extensively and then incubated with 1 ml 1.5× Laemmli sample buffer without 2-mercaptoethanol for 60 min at RT. The eluted proteins were then concentrated down to 50 μl using an Ultrafree® centrifugal filter (molecular weight cut-off of 100 kD, Millipore). The concentrated material was incubated with 2-mercaptoethanol (7.5% final concentration) for 10 min at 70°C, and was then resolved by SDS-PAGE.
TRAP identification by MALDI mass spectrometry
Colloidal blue-stained bands were cut out of the gels for in-gel digestion as follows. The gel pieces were equilibrated for 20 min in 200 μl 25 mM ammonium bicarbonate, 50% acetonitrile. The supernatant was decanted and the same procedure was repeated until full decoloration of the gel. The gel pieces were dried, rehydrated for digestion with 5 μg/ml porcine trypsin (Roche) in 25 mM ammonium bicarbonate, and incubated at 37°C overnight. The reaction was stopped by adding 1 vol of 50% acetonitrile, 0.5% trifluoroacetic acid. The peptides were extracted from the gel matrix by sonication for 0.5-1 h. Peptide mass fingerprinting was performed using a mass spectrometer (Voyager-DE STR; PerkinElmer) operating in delayed reflector mode at an accelerating voltage of 20 kV. The peptide samples were cocrystallized with matrix on a gold-coated sample plate using 1 μl matrix (α-cyano-4-hydroxy-trans-cinnamic acid) and 1 μl sample. After internal calibration with protein standards (renin, angiotensin, and adrenocorticotropic hormone), the monoisotope peptide masses were assigned and then used in database searches with ProFound (http://prowl.rockefeller.edu/profound_bin/webProFound.exe). Cysteines were modified by acrylamide, and methionine was considered to be oxidized. One missed cleavage was allowed.
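The ProFound search described above rests on peptide mass fingerprinting: observed monoisotopic masses are matched, within a tolerance, against the masses of an in-silico tryptic digest of each candidate protein. The sketch below illustrates only that matching step; it is not the ProFound scoring algorithm, and the peak list, digest masses, and tolerance are placeholder values.

# Minimal sketch of peptide-mass-fingerprint matching (illustrative values only).
def count_matches(observed_masses, digest_masses, tol_ppm=50.0):
    # Count observed peaks lying within tol_ppm of any theoretical tryptic peptide mass.
    matched = 0
    for m_obs in observed_masses:
        tol_da = m_obs * tol_ppm / 1e6
        if any(abs(m_obs - m_theo) <= tol_da for m_theo in digest_masses):
            matched += 1
    return matched

peaks = [842.51, 1045.56, 1475.78, 2211.10]          # example calibrated peak list (Da)
digest = [842.5099, 1045.5642, 1475.7853, 1638.86]   # example in-silico peptide masses (Da)
print(count_matches(peaks, digest))                  # 3 of the 4 peaks match here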
Review on the comparison of effectiveness between denosumab and bisphosphonates in post-menopausal osteoporosis
Objectives Osteoporosis is a rapidly rising cause of concern for elderly patients. Various classes of drugs are available in the market. Bisphosphonates are considered as a first-line therapy for the prevention and treatment. Denosumab is an antiresorptive agent which is a RANK ligand inhibitor. There is a scarcity of comparison between these two classes of drugs. The aim of this study is to compare the efficacy of bisphosphonates and denosumab in various parameters. Methods Literature search was done for randomized controlled trials (RCTs) comparing bisphosphonates with denosumab. RCTs with a treatment period of at least one year with a baseline bone mineral density (BMD) and bone turnover markers (BTM) and follow-up values at one year were included in the study. All included studies were also analysed for complications. The study has also been registered in the PROSPERO International prospective register of systematic reviews. Results A total of five RCTs were identified providing data on 3751 participants. In all five studies, the BMD changes at both hip and spine were statistically significant in favour of denosumab. The result was similar in three studies that studied BMD changes at the wrist. Denosumab also produced significant reduction in BTM as early as one month, but at one year there was no difference compared to the bisphosphonates. There were no statistically significant differences in the complication rates. Conclusions Though both bisphosphonates and denosumab were effective with similar side effects, the latter was statistically superior in increasing the BMD and reducing the BTM.
Introduction
Postmenopausal osteoporosis is a disease with features of reduction in the mass of bone and microscopic changes in the architecture that result in impaired strength of the bone [1]. After menopause, osteoclastic activity exceeds osteoblastic activity. This results in increased bone resorption which leads to an overall reduction of bone mass. This in turn increases skeletal fragility and risk of developing fractures [2]. Therefore the objective of treatment is to increase bone mass by altering the balance of bone remodelling. Most currently available drugs used to treat osteoporosis, such as calcitonin, raloxifene and bisphosphonates, act as inhibitors of bone resorption.
The two main properties of bisphosphonates resulting in their efficacy are the ability to strongly bind to bone mineral and the inhibition of mature osteoclasts [3]. Once the bisphosphonate is strongly attached to bone, this results in selective uptake by the bone mineral. After this, the bisphosphonates act at the sites of bone resorption by entering and inhibiting the mature osteoclastic cells.
Receptor activator of nuclear factor kappa-B ligand (RANKL), a cytokine secreted by bone marrow stromal cells, osteoblasts and T cells, is essential to induce osteoclast differentiation [4]. In post-menopausal osteoporosis with estrogen deprivation there is raised expression and production of RANKL, resulting in increased osteoclast activation and increased bone resorption. Reducing the number of osteoclasts by decreasing differentiation of precursor cells is one of the treatment modalities of hyper-resorptive bone diseases. Denosumab is one such fully human monoclonal antibody that can bind and inhibit RANKL.
There are numerous studies on the efficacy of bisphosphonates and other medications available for osteoporosis including denosumab. But there are very few randomised controlled trials (RCT) directly comparing bisphosphonates and denosumab. The aim of this systematic review was to identify studies that simultaneously compared bisphosphonates and denosumab and to analyse the efficacy in various parameters.
Materials and methods
Search Strategy: A search was done in several databases such as PubMed Central, Cochrane CENTRAL and MEDLINE. The search was restricted to articles in English language. The search terms used were osteoporosis, postmenopausal, denosumab, bisphosphonates, bone mineral density and C-telopeptide. A filter for RCTs was also used. The Cochrane handbook of systematic reviews of interventions was consulted to identify any discrepancies and biases in randomization, allocation concealment, blinding and missing data in the included RCTs [5].
Inclusion criteria: All RCTs directly comparing bisphosphonates with denosumab in post-menopausal osteoporosis were included. Only fully published reports with initial and final bone mineral density (BMD) and bone turnover markers (BTM) were included. CONSORT check list was used to critically appraise the included studies and all the studies fulfilled the criteria.
Statistical analysis: Data extracted included study design, selection criteria, population demographics, type of intervention, initial and final BMD, initial and final BTM as well as complications if any. Results of all the included studies were described in a table format. Key outcomes were percentage changes in BMD, BTM and complications.
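Since the key outcome is the percentage change in BMD between baseline and the one-year follow-up, that quantity is derived from the extracted baseline and follow-up values for each arm; the snippet below simply shows the calculation. The numbers are illustrative and are not data from any of the included RCTs.

# Sketch: percentage change in BMD from baseline to one-year follow-up (illustrative values only).
def percent_change(baseline, follow_up):
    return (follow_up - baseline) / baseline * 100.0

baseline_bmd = 0.820    # example lumbar-spine BMD, g/cm^2
one_year_bmd = 0.862    # example value at one year, g/cm^2
print(f"BMD change: {percent_change(baseline_bmd, one_year_bmd):+.1f}%")   # about +5.1%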
Results
A total of six RCTs were initially identified. In one study, the participants had used denosumab for a long period and then stopped before restarting the therapy [6]. This RCT was excluded from the current study, leaving five RCTs with a total of 3751 participants. The characteristics of the included studies are summarized in Table 1. Three studies compared denosumab with alendronate [7-9] and one study each compared denosumab vs. risedronate [10] and denosumab vs. ibandronate [11]. All studies were checked to identify any discrepancies and biases in randomization, allocation concealment and blinding based on the CONSORT checklist. No possible bias was found.

In one included RCT, subjects received variable doses of denosumab, viz. 6, 14 or 30 mg subcutaneously (s/c) every three months or 14, 60, 100 or 210 mg s/c every six months [7]. In all the other studies, subjects received denosumab in a dose of 60 mg s/c every six months.
Bone mineral density
Baseline BMD in each of the study was noted for both the groups of subjects. All the five included studies recorded BMD changes at the lumbar spine and hip. In addition to this, four of the studies recorded BMD changes at the femoral neck and three studies at the distal radius. All the five studies reported improvement in BMD at the lumbar spine and hip after treatment in both groups but the improvement was statistically significant in favour of denosumab. Four studies reported statistically significant improvement in BMD at the femoral neck in favour of denosumab. Three studies also reported statistically significant improvement in BMD at the distal radius, again in favour of denosumab. The results are shown in Table 2.
Bone turnover markers
Four of the included trials reported baseline values of the BTM (C-telopeptide), and the percentage reduction at one month and six months after initiation of treatment [8-11]. All four trials found denosumab statistically superior to bisphosphonates at one month, and three studies showed a similar superiority at six months. The results are shown in Table 3. However, one trial [9] reported that at six months of treatment there was no difference between the two groups, and another trial [8] reported that at 12 months of treatment there was no difference between the two groups (p = 0.52).
Complications
The various reported complications include arthralgia, upper respiratory tract infections, nasopharyngitis, clinical fractures and osteoporotic fractures. However there was no statistical difference between the two groups. The results are summarized in Table 4.
One study [7], reported the incidence of dyspepsia (denosumab: 10.5% vs. alendronate: 26.1%) and nausea (denosumab: 11.1% vs. alendronate: 21.7%). Though these gastrointestinal side effects were more in the alendronate group, they were not statistically significant. One study [8] described pyelonephritis (denosumab: 0.2% vs. alendronate: 0%) and another study [11] described the incidence of urinary tract infection (denosumab: 3.4% vs. ibandronate: 4.6%). Again, there was no statistical significance between the groups. Overall fracture rates and occurrence of osteoporotic fractures have been described in four studies with no statistical significance between the groups. One study found no complications in fracture healing in both the groups [9].
There was also no statistically significant difference in the dropout rates due to adverse events in three of the included studies [7-9].
Discussion
RANK receptors are present on osteoclasts and their precursor cells. Denosumab prevents the interaction of RANKL with these receptors. This results in blocking the formation, functional ability, and survival of osteoclastic cells [4]. On the other hand, bisphosphonates bind to the calcium hydroxyapatite present in bone and reduce bone resorption by affecting the function and survival of osteoclasts. But they do not affect the formation of osteoclasts [3]. For the diagnosis of osteoporosis, analysis of bone mineral density using dual-energy X-ray absorptiometry (DXA) is the gold standard [12]. All the included studies in this review showed increase in BMD at lumbar spine, hip, femoral neck and distal radius in favour of denosumab as shown in Table 2.
BMD is a commonly used marker to assess efficacy of treatment of osteoporosis. However it is not useful to repeat the BMD within an interval of 2 years because the effect of treatment is relatively small compared to the precision of the test. There is also no precise and consistent relationship between a given increase in BMD and a specific decrease in fracture risk with osteoporosis therapy. BTMs are a non-invasive way of assessing the efficacy of the treatment. Biochemical analysis can be used to monitor bone metabolism. Enzymes and proteins are released during bone formation and bone resorption results in release of products of degradation. Analysing these biochemical markers can result in a specific and sensitive assessment of the rate of bone formation and bone resorption. These are C-terminal telopeptide of type 1 collagen (CTX) for bone resorption, and procollagen type 1 N propeptide (P1NP) for bone formation [13]. These markers usually fall by around 40% within 3 months of commencing bisphosphonate therapy. This is also usually followed by a reduction in the levels of bone formation markers during the next 6-12 months [14]. If the BTM levels do not reduce after antiresorptive therapy, it could be a result of the patient not complying with therapy, failure of absorption or an undetected cause of secondary osteoporosis. Denosumab has been shown to produce a very rapid fall and suppression of resorption markers, with a slower fall in formation markers [15]. Our study confirmed this finding with all the included studies showing a very rapid fall in resorption markers.
Denosumab has been shown to be associated with a significant reduction in the risk of vertebral, hip, and nonvertebral fractures compared to placebos in postmenopausal women with osteoporosis [16]. All the included studies in this review found no statistical significance between the groups. However, the limitation of this study was that none of the included studies were powered to compare fracture rates between the groups. Previous studies in postmenopausal women have reported a greater incidence in serious adverse events of infection for denosumab compared with placebo [6,17]. This current study did not observe any statistical significance between the groups.
In conclusion, increasing BMD by decreasing bone resorption through the inhibition of RANKL is an alternative approach to the treatment of osteoporosis. Denosumab is a human monoclonal antibody that can achieve this result. Use of denosumab results in a significant increase in BMD and reduction in the BTMs compared to various bisphosphonates. There were also no statistically significant differences in complications.
Conflicts of interest
We, the authors of this study, declare that there are no financial conflicts of interest or other interests that may influence the manuscript. We have not received any funding for the work undertaken.
Interleukin-15 and Tumor Necrosis Factor-α in Iraqi Patients with Alopecia Areata
Background Alopecia areata (AA) is a common form of noncicatricial hair loss of unknown cause, affecting 0.1-0.2% of the general population. Most evidence supports the hypothesis that it is a disease of the hair follicle of autoimmune nature mediated by T-cells, with an important cytokine role. Objective of the Study. The objective of this study is to study the association and changes in serum levels of interleukin-15 (IL-15) and tumor necrosis factor-α (TNF-α) in patients with AA in relation to the type, activity, and duration of the disease. Patients and Methods. Thirty-eight patients with AA and 22 individuals without the disease as controls were enrolled in this case-control study conducted in the Department of Dermatology in the Al-Kindy Teaching Hospital and Baghdad Medical City, Iraq, during a period from the 1st of April 2021 to the 1st of December 2021. Serum concentrations of IL-15 and TNF-α were assessed using the enzyme-linked immunosorbent assay. Results The mean serum concentration values for IL-15 and TNF-α were significantly higher in patients with AA than in controls (2.35 versus 0.35 pg/mL and 50.11 versus 20.92 pg/mL, respectively). IL-15 and TNF-α showed no statistically significant differences in level in terms of the type, duration, and activity of the disease, but TNF-α was significantly higher in those with the totalis type than in other types. Conclusion Both IL-15 and TNF-α are markers for alopecia areata. The level of these biomarkers was not affected by duration or disease activity, but it was affected by the type of disease, as the concentrations of IL-15 and TNF-α were higher in patients with alopecia totalis than in other types of alopecia.
Introduction
Alopecia areata (AA) is a common inflammatory noncicatricial type of hair loss with an unpredictable course and a wide spectrum of clinical manifestations. While males and females have the same chance of being affected, some data have revealed that men have the chance of an earlier diagnosis in comparison to females who present in adolescence with associated nail involvement and autoimmune diseases. The prevalence of the disease ranges from 0.1% to 0.2% worldwide. The exact aetiopathogenesis is unknown, but there are multiple hypotheses, including genetic, environmental, and autoimmune pathogenesis [1,2].
The autoimmune pathogenesis is either through destruction of the hair follicle by inflammatory cells, in particular cytotoxic T-cells, through the production of gamma interferon, which activates interleukins 2, 7, 15, and 21, and these cytokines signal through the Janus kinase/signal transducer and activator of transcription (JAK/STAT) pathway, or through loss of the hair follicle's immune privilege, leading to its destruction by the immune system [3].
Interleukin (IL)-15 is an inflammatory cytokine that has multiple effects on different cell types. Both innate and acquired immune systems can be affected by IL-15, explaining its participation in inflammation and the immune response to infection [4]. IL-15 acts through the JAK-1 and JAK-3 pathways [5]. In AA, both IL-15 and its IL-15Rβ receptor subunit levels are elevated in the affected hair follicles [6]. Tumor necrosis factor alpha (TNF-α) is an inflammatory cytokine that is involved in AA pathogenesis and several autoimmune inflammatory disorders like psoriasis and systemic lupus erythematosus, for which the level of this cytokine is found to be relevant to the disease severity and activity, while in AA, only a few studies have measured the level of TNF-α, and the results were controversial [7].
Patients and Methods
This case-control study was performed in the Dermatology Unit in the Al-Kindy Teaching Hospital and the Dermatology Center of Baghdad Medical City, Iraq, from the 1st of April 2021 to the 1st of December 2021, in line with the guidelines of the Helsinki Declaration and the items of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement.
The study involved thirty-eight (38) patients with alopecia areata of any age, gender, and type who attended the outpatient clinics. The control group consisted of twenty-two (22) generally healthy people.
Exclusion criteria: (1) Patients with any type of alopecia areata already on treatment.
(2) Any patient with associated inflammatory, infectious, malignant, and autoimmune diseases affecting the skin or other systems of the body in addition to alopecia areata. A directed interview was done to get a complete history from the patients including age, gender, time of onset for AA, previous history of the same condition and the time of it, medical history, and history of AA or autoimmune disease in the patient's family.
The scalp examination was done to evaluate the sites and number of patches of AA, to detect the presence of exclamation mark hairs, and for the pull test. The body's hairy areas were examined meticulously for alopecia patches. Nail involvement was detected through careful nail examination.
For the disease duration, three groups of patients were included: (I) six months or less, (II) >six months but <1 year, and (III) one year or more [8].
Five milliliters of venous blood was collected in a gel tube by venipuncture from both patients and the healthy persons under aseptic technique. The samples were centrifuged, and the serum was aliquoted and stored in Eppendorf tubes at −20°C for subsequent assays of IL-15 and TNF-α.
The concentrations of both IL-15 and TNF-α were assayed using an ELISA kit from MyBioSource® (USA) according to the leaflet instructions provided.
According to the standard curves (one for IL-15 and another for TNF-α) that were generated by plotting the concentration of each standard against its optical density, the serum concentrations of both IL-15 and TNF-α (for both patients and controls) were determined.
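Converting a sample's optical density to a concentration from the standard curve is an interpolation step; a minimal way to sketch it is shown below, assuming monotonically increasing standards. The standard concentrations and optical densities here are placeholders rather than kit values, and many kits instead fit a four-parameter logistic curve to the standards.

# Sketch: read sample concentrations off an ELISA standard curve by linear interpolation.
import numpy as np

std_conc = np.array([0.0, 1.5, 3.0, 6.0, 12.0, 25.0])    # standard concentrations, pg/mL (placeholders)
std_od = np.array([0.05, 0.12, 0.22, 0.41, 0.78, 1.45])  # corresponding optical densities (placeholders)

def od_to_conc(od):
    # np.interp expects the x-grid (here the ODs) to be increasing, which holds for these standards.
    return np.interp(od, std_od, std_conc)

sample_od = np.array([0.30, 0.95])
print(od_to_conc(sample_od))    # estimated sample concentrations in pg/mL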
Ethical Considerations and Official Approvals.
Patient agreement was obtained before data collection, and information was anonymized by replacing the name with a code and saved on a secured laptop to be used for research purposes only.
Administrative approvals were granted from the scientific committee of the Al-Kindy College of Medicine, University of Baghdad, on 30th January 2022 with the approval code: 190.
Statistical Analysis.
The data analysis was done by using the Statistical Package for the Social Sciences (SPSS) version 26. Mean, standard deviation, and ranges were used for data presentation. Categorical data were presented as frequencies and percentages, and the chi-squared test was used for comparisons between the data. If the frequency was <5, Fisher's exact test was used. For comparisons between continuous variables, the independent t-test and analysis of variance (ANOVA) (two-tailed) were used.
The P value was considered significant if it was <0.05.
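For illustration only, the two kinds of comparison described above can be reproduced with scipy; the values below are invented for the example and do not correspond to the study data, which were analysed in SPSS.

# Sketch: independent t-test and chi-squared test on illustrative data.
import numpy as np
from scipy import stats

cases = np.array([2.1, 3.4, 0.9, 5.2, 1.8, 2.7])           # e.g., serum IL-15 in cases, pg/mL (made up)
controls = np.array([0.30, 0.40, 0.35, 0.50, 0.30, 0.45])  # e.g., serum IL-15 in controls, pg/mL (made up)
t_stat, p_ttest = stats.ttest_ind(cases, controls)

gender_table = np.array([[20, 18],    # made-up 2x2 counts, e.g., gender by group
                         [12, 10]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(gender_table)

print(f"t-test P = {p_ttest:.3f}, chi-squared P = {p_chi2:.3f}")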
Results
The total number of participants was 60: 38 patients with AA were considered as the case group, and 22 individuals without AA were considered as the control group. The patients' ages were between 3 and 64 years for all participants, with a mean and standard deviation (SD) of 24.9 ± 13.9 years. The largest age group among the cases was <20 years (47.4%), while for the controls it was 20-39 years (63.6%).
Regarding gender and age comparison between the case and the control group, a nonsignificant difference was observed (Supplementary Table 1). The distribution of the case group by clinical characteristics is shown in Table 1. Namely, 39.5% of patients had the disease for <6 months, family history was positive in 21.1%, and a previous history was recorded in 28.9%.
Other body involvement was noticed in 7.9% of patients, exclamation mark hairs were present in 63.2%, the pull test was positive in 71.1%, and nail changes were detected in 23.7%.
A comparison of the biomarkers between the two groups, as shown in Table 2, revealed that IL-15 and TNF-α serum concentrations were significantly higher in the case group than in the control group (2.35 ± 6.0 versus 0.35 ± 0.07 pg/mL, P = 0.048; and 50.11 ± 60.77 versus 20.92 ± 28.0 pg/mL, P = 0.014, respectively).
In the case group, the IL-15 level fell within 0.3-10.7 pg/mL, while the TNF-α level was between 19 and 379.8 pg/mL. For the controls, these levels fell between 0.3-0.6 and 3.9-138 pg/mL for IL-15 and TNF-α, respectively.
For the comparison between biomarkers and disease characteristics in the case group, as shown in Table 3, no statistically significant difference in the mean IL-15 level (P ≥ 0.05) was found regarding any of the characteristics. On the other hand, the mean level of TNF-α in totalis-type AA was significantly higher than in other types (147.3 pg/mL, P = 0.021), while there were nonsignificant differences in the mean TNF-α level (P ≥ 0.05) regarding all other characteristics of the case group.
Discussion
Alopecia areata (AA) is considered a common cause of reversible hair loss. The exact etiology is unknown, although many hypotheses suggest an association between the lymphocytic infiltration of the hair follicle and the disruption of the hair cycle due to a combination of multiple factors, including cytotoxic T-cell activity, cytokine release, and apoptosis [9].
Many cytokines participate significantly in diseases of the skin, particularly autoimmune skin diseases. For AA, the role of these cytokines is not well established, although there is an association between AA and changes in the levels of different cytokines [10].
Concurrently, many cytokines such as interleukin (IL)-7, IL-15, tumor necrosis factor-α (TNF-α), and interferon-γ (IFN-γ) are overexpressed in AA patients [11]. It is well known that lymphocyte development is enhanced by the action of interleukin-15 (IL-15), which has a role in certain diseases of autoimmune etiology such as rheumatoid arthritis and multiple sclerosis. IL-15 induces the synthesis of certain cytokines that participate in autoimmunity, for example TNF-α and IL-1β, by enhancing the maintenance of CD8 memory T-cells through the inhibition of self-tolerance [12]. Keratinocytes of the epidermis can synthesize TNF-α, which is an effective proliferation inhibitor. Additionally, a study showed that TNF-α causes vacuolation in the hair follicle, inactivation of follicular melanocytes, and keratinization abnormalities in both the hair follicle bulb and the inner root sheath [9].
In this current study, 60 participants were enrolled, comprising 38 patients who had AA (case group) and 22 healthy participants (control group).
The levels of both IL-15 and TNF-α were significantly higher in the case group than in the control group (P < 0.05).
These results agreed with those of Ragab et al. [15], who concluded that the level of IL-15 in AA was significantly higher in cases than in controls (P < 0.001).
In this study, there were no statistically significant differences in the mean levels of IL-15 (P ≥ 0.05) for any of the characteristics of the case group. These results agreed with those reported in Aşkın et al.'s study in 2021, which concluded that no statistical difference was found between males and females (P < 0.178) nor for duration or severity of the disease (P > 0.05) regarding IL-15 serum levels in patient and control groups [14].
In the same way, Ragab et al., in their study in 2020, reported a similar finding, where the relation of IL-15 to patients' age and gender was assessed. They observed that there was no association between serum IL-15 and patients' gender. Even though autoimmune diseases are generally related to the gender of patients, no gender predominance is seen in AA (P = 0.9). Likewise, it has been shown that there was no association between IL-15 level and patients' age (P = 0.14), recurrence of disease, or history of AA in the same family of the patient (P > 0.05) [13].
Different findings were reported in a study by Salem et al. in 2019, in which, among the case groups, they observed that the serum level of IL-15 was higher in those with the totalis type in comparison to patients with one or two patches, and significantly higher in those with both scalp and body involvement compared to those with either scalp or body involvement (P < 0.05), while IL-15 levels between case groups were not related to age, gender, recurrence, or family history (P > 0.05) [15].
The differences reported above may be related to different sample sizes, the presence of other autoimmune conditions, and the duration and stage of the disease, which all affect the level of IL-15.
IL-15 trans-presentation can be blocked by targeting the cytokine receptor subunit IL-2/IL-15Rβ, and this is achieved through the use of Hu-Mik-β1 monoclonal antibody which is used now in clinical trials for treating those with autoimmune diseases [18,19].
Regarding the mean TNF-α level in the present study, those with the totalis type had a significantly higher mean TNF-α level than those with other types (P = 0.021). No statistically significant differences in the mean TNF-α levels were found when comparing all other characteristics (P ≥ 0.05).
These results agree with those of the Omar et al. study, in which there was no statistically significant difference in the serum levels of TNF-α in adults and children with AA (P = 0.857). Moreover, there was no significant correlation between serum levels and severity (P = 0.115). Patients with alopecia totalis/universalis had a higher serum concentration than those with alopecia of the patchy type, but without any significant correlation (P = 0.39). Regarding the activity of the disease, there were no statistically significant differences in the serum cytokine level in patients with active disease (56, 77.8%) compared with those with inactive disease (16, 22.2%) (P = 0.097) [16].
Moreover, in the comparison between duration and severity of disease, the results agreed with those of Kasumagic-Halilovic et al., who reported that differences in TNF-α levels between patients with respect to the duration and severity of the disease were insignificant, even in the comparison between those with localized and extensive disease (P = 0.2272). Furthermore, those with a long disease duration had a high concentration of TNF-α, but without a significant association (P > 0.05) [7]. Other studies reached different conclusions. Atwa et al.'s study showed a significant correlation between TNF-α level and severity of disease (r = 0.247, P = 0.031). No statistical significance was found in the mean TNF-α concentration in the comparison between those with different clinical types and those who had AA with or without atopy (P > 0.05) [17].
The differences reported among the above studies may be related to their different study designs as well as to disease severity, duration, comorbid conditions, and the presence of other autoimmune diseases. It had been hypothesized that similar TNF-α levels in those with different types of AA give a clue toward the lack of immune reactions, while decreased TNF-α levels in the mild type may indicate a tendency for immunodeficiency in those with severe disease types [20]. Other studies found that serum levels of the type-I TNF-α receptor were raised in those with the disease in comparison to nonaffected individuals. These findings suggest that T-cell and keratinocyte activation are characteristic immune mechanisms in AA [21]. The participation of TNF-α in AA is through the elicitation of the catagen phase, as well as the breakdown of the keratinization of the hair follicle [22].
Recent studies found a protective role for TNF-α in AA through the inhibition of plasmacytoid dendritic cell synthesis and IFN-α production and via prevention of the presentation of human leukocyte antigen.
This finding explains why TNF-α inhibitors may fail to cure patients with AA and may even trigger new cases in genetically predisposed individuals [22].
In conclusion, both IL-15 and TNF-α are markers for alopecia areata. The levels of these biomarkers were not affected by duration or disease activity, but they were affected by the type of disease, as the concentrations of IL-15 and TNF-α were higher in patients with the totalis type than in other types of alopecia.
It is recommended that a further study measure the concentrations of both TNF-α and IL-15 in patients with AA at a basal level and again after the patients receive treatment.
Data Availability
The data and materials related to the present work are included within this article.
Ethical Approval
This study was approved on 30th January 2022 with the approval code: 190.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Practice and Analysis of Space Service in Higher Vocational College Library—A Case Study
Taking the library of Shanghai Civil Aviation College as an example, this paper analyzes its space service practice and finds that there are some problems in the space service of the higher vocational college library: insufficient new space area, backward space service facilities, insufficient understanding of space service by librarians, and an insufficient space service evaluation mechanism. Based on this, this paper puts forward countermeasures for library space service in higher vocational colleges and provides a reference for other higher vocational colleges to develop space service.
Introduction
In the 2019 Chinese government work report, it was first mentioned that it is necessary to implement enrollment expansion in higher vocational colleges, which shows that the state attaches importance to higher vocational education.
As an important part of general higher education, higher vocational colleges are designed to cultivate applied talents with certain theoretical knowledge and strong practical ability. The library of a higher vocational college is the cultural base serving teaching and scientific research, and it is the second classroom for students. Its development orientation and the talent training objectives complement each other. However, with the development of the times, the contradiction between the traditional library and the times has become more and more obvious; Jianzhong ranked "Space re-engineering" third in his "Ten Hot Topics of Re-Discussion on Library Development" [1]. Professor Ke Ping pointed out that "space and resource are two hot spots in future library design" at the "Symposium on Library Space Reconstruction and Functional Re-engineering" held in December 2018. Thus, "Library Space Service" has been a hot topic in the field of library research under the transformation of libraries in the new century.
In the new era, the core of the talent training target of higher vocational education in China is "technical skill talents".
Practice of Space Service in the Library of Shanghai Civil Aviation College
There is no consistent definition of "space service". Xiao Long of Peking University believes that, differing from the traditional library featured by collecting books and by services built around the collection, the newly added space is only for service, such as creative space, learning space, communication space and leisure space, aiming to provide readers with cultural places for study, research and communication. This can be considered as "space service" [3].
Based on Xiao Long's definition of space service, this paper analyzes the practice of space service in the libraries of our two campuses from three aspects: space type, construction of space service facilities, and space service reconstruction.
Space Type
Based on space function, the space of our library can be divided into the following types.
Traditional Library Space
The space design of traditional libraries focuses on traditional service carriers (such as books, periodicals, newspapers, etc.), and the storage, lending and reading services take up a lot of space. The library of a higher vocational college is a cultural institution for higher vocational students, which requires the library to provide physical space for reading and borrowing. The office area covers the basic functional departments of the library. It is a physical space for librarians to classify, process, sort and store the collection resources, and to receive and serve readers' consultations. In terms of reading space, our library has a periodical reading room.
Construction of Space Service Facilities
The space service facilities of a university library provide a guarantee for readers to make convenient use of library resources and help readers to better experience and make use of library resources from multiple perspectives.
Self-Service Equipment
The self-service of a university library is a user self-service mode and a humanized service mode with a "reader oriented" service concept. It has a positive meaning for improving service efficiency and service quality and for saving personnel costs. In terms of self-service equipment, the types of space service facilities in our library include e-book readers, self-service newspapers and periodicals, touchscreen reading systems, self-printing and copying machines, self-service retrieval equipment and inquiry systems, self-service cameras, and self-service coffee machines. Our self-service newspapers and periodicals cover more than 30 kinds of electronic newspapers and 9000 kinds of electronic periodicals, and the e-book reader stores more than 6000 books. Readers can choose to read online as well as install the mobile client to scan the periodical or book QR code for offline reading. The Pudong campus library is equipped with a self-service printing copier.

Readers can choose printing and copying functions as required. The libraries of the two campuses are equipped with self-service retrieval machines, and readers can search books and periodicals by inputting content such as title, responsible person, subject words, etc. In addition, in the electronic reading rooms of the two campuses, readers can also browse and download online digital resources by self-service.
Space Service Reconstruction
The libraries of our two campuses have undergone space service reconstruction.

The types of space reconstruction include idle space and functionally reorganized space. For example, the Pudong campus library integrates the service contents to create a comprehensive service desk on the first floor. In order to improve the space utilization rate, a calligraphy corner, chess corner, exhibition area and table tennis room were added in the spare space on the first floor. In order to provide readers with a warm and comfortable reading environment, artificial flowers were placed in the self-study area and the periodical reading area.

Problems of Space Service in Higher Vocational Colleges

2) The construction of space service facilities is backward. Due to the shortage of funds and improper management methods, the self-service equipment in the library of higher vocational colleges is relatively deficient in types and quantities. Currently, the two campuses respectively provide 8 retrieval machines in the library lending room and periodical room, which can basically meet the needs of readers' retrieval. However, with the expansion of our school's enrollment scale year by year, the two campuses should increase and update the retrieval equipment in a timely manner according to the changes in the number and demand of readers.
There is a lack of clear self-service guidance instructions and usage instructions for the existing self-service equipment.
2) Actively build new space and highlight the characteristics of civil aviation.
Our college trains many talents for civil aviation every year. The quality of talents is directly related to the future of civil aviation. As a talent breeding base, the library should do its best to build civil aviation characteristic service space, such as civil aviation history exhibition hall, civil aviation maker space, civil aviation exchange space, etc.
Conclusion
Taking the library of Shanghai Civil Aviation College as an example, this paper analyzes its space service practice and finds that there are still some problems in the space service of the higher vocational college library. In the information age, in order to provide users with satisfactory space services, the following aspects should be noted. First, the library should strengthen the cultivation of the overall quality of librarians and improve their space service ability; second, the library should actively build new space and highlight the characteristics of civil aviation; third, the library should strengthen the construction of space service facilities and promote the development of space services based on information technology; fourth, the library should strengthen cooperation with internal and external institutions to expand new space services; fifth, the library should respect the needs of users and formulate an evaluation mechanism. Since only one higher vocational college is selected as the research object in this paper, the characteristics and existing problems of space service in higher vocational colleges are not summarized comprehensively, so more samples need to be selected for further research in the future.
Semistable abelian varieties over Z[1/6] and Z[1/10]
Continuing on from recent results of Brumer-Kramer and of Schoof, we show that there exist non-zero semistable Abelian varieties over Z[1/N], with N squarefree, if and only if N is not in the set {1,2,3,5,6,7,10,13}. Our results are contingent on the GRH discriminant bounds of Odlyzko.
Introduction.
In 1985, Fontaine [3] proved a conjecture of Shafarevich to the effect that there do not exist any (non-zero) Abelian varieties over Z (equivalently, Abelian varieties A/Q with good reduction everywhere). Fontaine's approach was via finite group schemes over local fields. In particular, he proved the following theorem:

Theorem 1.1 (Fontaine) Let G_ℓ be a finite flat group scheme over Z_ℓ killed by ℓ. Let L = Q_ℓ(G_ℓ) := Q_ℓ(G_ℓ(Q̄_ℓ)). Then
$$v(\mathcal{D}_{L/\mathbb{Q}_\ell}) < 1 + \frac{1}{\ell - 1},$$
where v is the valuation on L such that v(ℓ) = 1, and $\mathcal{D}_{L/\mathbb{Q}_\ell}$ is the different of L/Q_ℓ.
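Since the global field Q(G) considered below is unramified away from ℓ, the local bound above translates into a bound on its root discriminant; the following display (our paraphrase of this consequence, not a statement quoted from [3]) evaluates that bound at the first few primes, and it is these small values that are played off against the Odlyzko discriminant bounds.

\[
|d_{\mathbb{Q}(G)}|^{1/[\mathbb{Q}(G):\mathbb{Q}]} \;<\; \ell^{\,1+\frac{1}{\ell-1}} \;=\;
\begin{cases}
2^{2} = 4 & \ell = 2,\\
3^{3/2} \approx 5.20 & \ell = 3,\\
5^{5/4} \approx 7.48 & \ell = 5,\\
7^{7/6} \approx 9.68 & \ell = 7.
\end{cases}
\]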
If G_ℓ is the restriction of some finite flat group scheme G/Z killed by ℓ, then Q(G) is a fortiori unramified at primes outside ℓ. In this context, the result of Fontaine is striking since it implies that the field Q(G) has particularly small root discriminant. If A/Q has good reduction everywhere, then it has a smooth proper Néron model A/Z, and G := A[ℓ]/Z is a finite flat group scheme. Using the discriminant bounds of Odlyzko [7], Fontaine showed that for certain small primes ℓ and for every n, either A/Z or some isogenous variety has a rational ℓ^n-torsion point. Reducing A modulo p for some prime p of good reduction (in this case, any prime), one finds Abelian varieties (of fixed dimension d) over F_p with at least ℓ^n rational points. One knows, however, that isogenous Abelian varieties over F_p have an equal and thus bounded number of points. This contradiction proves Fontaine's Theorem.
If one considers Abelian varieties A/Q such that A has good reduction outside a single prime p, one can no longer expect non-existence results. Indeed, there exist Abelian varieties with good reduction everywhere except at p. One such class of examples are the Jacobians of the modular curves X_0(p^n), which have positive genus for every p and sufficiently large n. A natural subclass of Abelian varieties, however, are the semistable ones. By considering the modular Abelian varieties J_0(N), with N squarefree, one finds non-zero semistable Abelian varieties unramified outside N for all N ∉ {1, 2, 3, 5, 6, 7, 10, 13}. A reasonable conjecture to make is that there are no semistable Abelian varieties over Z[1/N] for N in this set. Fontaine's Theorem is the case N = 1. Recently Brumer and Kramer [1] prove this result for N ∈ {2, 3, 5, 7}, and (by quite different methods) Schoof [9] for N ∈ {2, 3, 5, 7, 13}. In this paper, we treat the remaining cases N ∈ {6, 10}. Since we shall exploit results from both Brumer-Kramer [1] and Schoof [9], we briefly recall the main ideas now.
Schoof's approach is similar in spirit to Fontaine's. Instead of working with finite flat group schemes over Z, one considers finite flat group schemes over Z[ 1 p ], where p is prime. In order to avoid group schemes arising from non-semistable Abelian varieties, one uses the following fact due to Grothendieck ([4], Exposé IX, Proposition 3.5): Theorem 1.2 (Grothendieck) Let A be an Abelian variety with semistable reduction at p. Then the action of inertia at p on the ℓ n -division points of A is rank two unipotent; i.e., as an endomorphism, for σ ∈ I p , (σ − 1) 2 A[ℓ n ] = 0.
In particular, I_p acts through its maximal pro-ℓ quotient, which is procyclic.
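One convenient way to picture Grothendieck's condition (the block form below is our illustration, not a display from the paper): with respect to a basis adapted to the inertia invariants, each σ ∈ I_p acts by a matrix of the shape

\[
\sigma \;=\; \begin{pmatrix} \mathrm{Id} & N_\sigma \\ 0 & \mathrm{Id} \end{pmatrix},
\qquad\text{so that}\qquad
(\sigma - 1)^2 \;=\; \begin{pmatrix} 0 & N_\sigma \\ 0 & 0 \end{pmatrix}^{2} \;=\; 0 .
\]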
Thus one may restrict attention to finite flat group schemes G/Z[1/p] of ℓ-power order such that inertia at p acts through its maximal pro-ℓ quotient. The key step of Schoof's approach is to show that any such group scheme admits a filtration by the group schemes Z/ℓZ and µ_ℓ. Using this filtration, along with various extension results (in the spirit of Mazur [6], in particular Proposition 2.1 pg. 49 and Proposition 4.1 pg. 58) for group schemes over Z[1/p], one shows as in Fontaine that for each n, some variety isogenous to A has rational torsion points of order ℓ^n.
The approach of Brumer and Kramer is quite different. Although, as in Schoof and Fontaine, they use discriminant bounds to control Q(A[ℓ]) for particular ℓ, they seek a contradiction not to any local bounds but to a theorem of Faltings. Namely, they construct infinitely many pairwise non-isomorphic but isogenous varieties, contradicting the finiteness of this set (as follows from Faltings [2], Satz 6, pg. 363). The essential difference in the two approaches, however, is that Brumer and Kramer use the explicit description of the Tate module T_ℓ of A at a prime p of semistable reduction. Such a description is once more due to Grothendieck [4]. Both of these approaches fail (at least naïvely) to work when N = 6 or 10. Using Schoof's approach, one runs into a problem (when N = 6, for example) because µ_5 admits many non-isomorphic finite flat group scheme extensions by Z/5Z over Z[1/6], whereas no non-trivial extensions exist over either Z[1/2] or Z[1/3]. Using Brumer and Kramer's approach, one difficulty that arises is that the field Q(A[5]) fails to have a unique prime above the bad primes 2 or 3, as fortuitously happens in the cases they consider. We do, however, use a key theorem from Brumer and Kramer's paper, and so in the next section we recall some of their definitions and results.
Notation.
Let p ∈ Z be a prime number. Let D_p = Gal(Q̄_p/Q_p) denote the local Galois group at p. For a Galois extension of global fields L/Q, we denote a decomposition group at p by D_p(L/Q). This is well defined up to conjugation, or equivalently, up to an embedding Q̄ ↪ Q̄_p, which we shall fix when necessary. In the same spirit, let I_p = Gal(Q̄_p/Q_p^unr), and let I_p(L/Q) be an inertia group at p as a subgroup of D_p(L/Q) and of Gal(L/Q). One notes that I_p is normal in D_p. Let M be a D_p module, M a D_p module killed by ℓ for some ℓ ≠ p, and M a Gal(Q̄/Q) module, also killed by ℓ. A "finite" group scheme G/R will always mean a group scheme G finite and flat over Spec R.
Preliminaries.
In this section we introduce some notation and results from the paper of Brumer and Kramer [1].
Let A/Q be an Abelian variety of dimension d > 0 with semistable reduction at p. Let ℓ be a prime different from p, and consider the Tate module T_ℓ(A/Q_p). Let M_1(p) = T_ℓ(A/Q_p)^{I_p}, and let M_2(p) be the subspace of T_ℓ(A/Q_p) orthogonal to M_1(p)(Â) under the Weil pairing. Since A is semistable, there exist inclusions M_2(p) ⊆ M_1(p) ⊆ T_ℓ(A/Q_p). Let A^0_{F_p} be the connected component of the special fibre of A at p. It is an extension of an Abelian variety of dimension a_p by a torus of dimension t_p = d − a_p. One has dim(M_2(p)) = t_p and dim(M_1(p)) = t_p + 2a_p = d + a_p.
Brumer and Kramer use this theorem to construct infinitely many non-isomorphic varieties isogenous to A. This contradicts Faltings' Theorem. Although we shall also use Faltings' Theorem, our final contradiction will come from showing that A (or some isogenous variety) has too many points over some finite field, contradicting Weil's Riemann hypothesis, much as in the approach of Schoof [9].
Results.
Our main results are the following: Theorem 2.2 Let A/Q be an Abelian variety with semistable reduction, and good reduction outside 2 and 3. Assuming the GRH discriminant bounds of Odlyzko, A has dimension 0. Theorem 2.3 Let A/Q be an Abelian variety with semistable reduction, and good reduction outside 2 and 5. Assuming the GRH discriminant bounds of Odlyzko, A has dimension 0.
The use of the GRH is impossible to avoid with our approach. The proof of Theorem 2.3 is very similar to the proof of Theorem 2.2, although some additional complications arise. Thus we restrict ourselves first to the case N = 6, and then later explain how our proof can be adapted to work for N = 10. One main ingredient is the following result, proved in section 3: Theorem 2.4 Let G/Z[1/6] be a finite group scheme of 5-power order such that inertia at 2 and 3 acts through a procyclic 5-group. Then G has a filtration by the group schemes Z/5Z and µ_5. Moreover, if G is killed by 5, then Q(G) ⊆ K, where K = Q(ζ_5, 2^{1/5}, 3^{1/5}). In particular, if A/Q is a semistable Abelian variety with good reduction outside 2 and 3, and A/Z is its Néron model, then for each n the finite group scheme A[5^n]/Z[1/6] has a filtration by the group schemes Z/5Z and µ_5. Moreover, Q(A[5]) ⊆ K. This result (and its proof) is of the same flavour as results in Schoof [9]. One such result from that paper which we use is the following (a special case of Theorem 3.3 and the proof of Corollary 3.4 in loc. cit.): Theorem 2.5 Let G/Z[1/p] be a finite group scheme of 5-power order such that inertia at p acts through a procyclic 5-group. Then G has a filtration by the group schemes Z/5Z and µ_5. Moreover, the extension group Ext^1(µ_5, Z/5Z) of group schemes over Z[1/p] is trivial, and there exists an exact sequence of group schemes 0 → M → G → C → 0, where M is a diagonalizable group scheme over Z[1/p], and C is a constant group scheme.
In sections 2.3, 2.4 and 2.5 we shall assume there exists a non-zero semistable Abelian variety A/Z[1/6], and derive a contradiction using Theorem 2.4.
Construction of Galois Submodules.
The proof of Brumer and Kramer relies on the fact that for Abelian varieties with semistable reduction at one prime p ∈ {2, 3, 5, 7}, there exists an ℓ such that there is a unique prime above p in Q(A[ℓ]). In this case, the D_p modules M_1(p) and M_2(p) are automatically Gal(Q̄/Q) modules, and so one has a source of Gal(Q̄/Q) modules with which to apply Theorem 2.1. This approach fails in our case (at least if ℓ = 5) since Theorem 2.4 allows the possibility that Q(A[5]) could be as big as K := Q(2^{1/5}, 3^{1/5}, ζ_5), and 2 and 3 split into 5 distinct primes in O_K. On the other hand, something fortuitous does happen, and that is that the inertia subgroups I_p(K/Q) for p = 2, 3 are normal subgroups of Gal(K/Q), when a priori they are only normal subgroups of D_p(K/Q). Using this fact we may construct global Galois modules from the local D_p modules M_1(p) as follows. Proof. By Galois, it suffices to show that M is fixed by H. Any sum or multiple of elements fixed by H is clearly fixed by H. Thus it remains to show that any Galois conjugate P^g with g ∈ G and P ∈ M is also fixed by H. For this we observe that H is normal in G. Throughout, we write M_1(p) also for the Gal(Q̄/Q) module generated by M_1(p), considered as a subgroup of A[ℓ] after choosing some embedding Q̄ ↪ Q̄_p (this definition depends upon the embedding, but this ambiguity does not cause any problems). From Lemma 2.1, M_1(p) is fixed by I_p(K/Q). We now apply Theorem 2.1 with κ = M_1(2). Let A′ = A/κ. Then, since by construction M_2(2) ⊆ M_1(2) ⊆ κ, the quantity appearing in Theorem 2.1 equals 2d − dim κ ≥ 0. In particular, A cannot be isomorphic to A′ unless κ = A[5]. Thus by Faltings' Theorem, after a finite number of isogenies we may assume that A[5] prolongs to a finite group scheme over Z[1/3]. From Theorem 2.5, we infer that there exists an exact sequence of group schemes 0 → µ_5^m → A[5] → (Z/5Z)^n → 0. Lemma 2.2 In the sequence above, m = n = d, and A has ordinary reduction at 5.
Proof. The Néron model of A′ = A/µ_5^m contains the group scheme (Z/5Z)^n. Specializing to the fibre over F_5, we find that (Z/5Z)^n injects into the 5-torsion of the special fibre. The p-rank of the p-torsion subgroup of an Abelian variety in characteristic p is at most the dimension d, with equality only if A is ordinary at p. Thus n ≤ d. Applying the same argument to the dual Abelian variety, we find that m ≤ d, and thus n = m = d, and A has ordinary reduction at 5.
Thus we may assume that, for any A with ord_5(Φ_Â(2)) maximal (or, by a similar argument, ord_5(Φ_Â(3)) maximal), there exists an exact sequence of Gal(Q̄/Q) modules 0 → µ_5^d → A[5] → (Z/5Z)^d → 0. We now divide our proof by contradiction into two cases. In the first case we assume that A has mixed reduction at 2 or at 3. In the second case we assume that A has purely toric reduction at both 2 and 3.
A has Mixed Reduction at 2 or 3.
Let ord_5(Φ_Â(2)) be maximal. Then from Lemma 2.2 there is an exact sequence as above. If A has mixed reduction at 2 then a_2 > 0, and M_1(2) has dimension t_2 + 2a_2 = d + a_2 > d. In particular, κ := M_1(2) ∩ µ_5^d is non-trivial and defines a diagonalizable Gal(Q̄/Q) submodule of A[5]. We now apply Theorem 2.1. Let A′ = A/κ. We find that, since κ ⊆ M_1(2), the last two terms cancel, and ord_5(Φ_Â′(2)) is also maximal. Hence we may repeat this process, thereby constructing morphisms A → A^(n) with larger and larger kernels κ_n, where κ_n has a filtration by µ_5's. Lemma 2.3 Any extension of diagonalizable group schemes of 5-power order over Z[1/6] is diagonalizable.
Proof. By taking Cartier duals, it suffices to prove the dual statement for constant group schemes: any extension of 5-power order constant group schemes over Z[1/6] is constant. Any extension of Z/5Z by Z/5Z over Z[1/6] is defined over a 5-extension of Q, unramified outside 6. From class field theory, since Z is a principal ideal domain, such extensions are classified by (Z/6Z)^*. Since this group has order coprime to 5, this proves the claim.
For all n, there exist exact sequences of the above form. The variety Â/M^∨ contains the arbitrarily large constant group scheme κ_n^∨. This contradicts the uniform boundedness of the number of points locally for all varieties isogenous to Â.
If A does not have purely toric reduction at 3, a similar argument applies.
A has Purely Toric Reduction at 2 and 3.
Under this assumption, for p = 2 or 3, M_2(p) = M_1(p), and so we write both as M(p). Again we assume that ord_5(Φ_Â(2)) is maximal. In particular, we may assume that M(2) = A[5], that A[5] is defined over Q(ζ_5, 3^{1/5}), and that we have an exact sequence as above. Proof. Fix an embedding Q̄ ↪ Q̄_2 such that the image of 3^{1/5} lands in Q_2. First we show that M(2) ∩ µ_5^d = {0}. If not, then M(2) would not surject onto (Z/5Z)^d, and the elements of M(2) could not possibly generate A[5] as a Gal(K/Q) module. Thus, by dimension considerations, as an F_5 vector space, M(2) is a complement to µ_5^d in A[5]; fix generators P_1, . . . , P_d of M(2) for our chosen embedding of Q̄ into Q̄_2. Since M(2) is a D_2 module, the P_i are permuted by elements of D_2. Thus we may write the Galois action in block-matrix form, where Id_d is the identity matrix and χ is the cyclotomic character.
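The matrix itself appears to have been lost in extraction. Given the exact sequence 0 → µ_5^d → A[5] → (Z/5Z)^d → 0 and a basis adapted to µ_5^d and the points P_1, . . . , P_d, one would expect the action of g ∈ Gal(K/Q) to take the block form

\[
g \;\longmapsto\; \begin{pmatrix} \chi(g)\,\mathrm{Id}_d & B(g) \\ 0 & \mathrm{Id}_d \end{pmatrix},
\]

with χ the mod-5 cyclotomic character and B(g) a d × d matrix recording how g translates the P_i by elements of µ_5^d; this is offered only as a plausible reading of the surrounding text, not as the paper's own display.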
Let us now consider the situation locally at 3. The decomposition group at 3 is the entire Galois group G, and the inertia group I_3 acts faithfully on M(2) ⊂ A[5]; thus we are done.
We now apply Theorem 2.1.
On the other hand, we see from the exact sequence for A[5] that (Z/5Z)^d ⊂ A′[5]. From Theorem 2.4 and Lemma 2.2 we infer that there exists an exact sequence of the same shape for A′[5]. Replace A by A′. In particular, A[5^2] is unramified at 3. Thus by Theorem 2.5 there exists a filtration 0 → M → A[5^2] → C → 0, where M is a diagonalizable group scheme, and C is a constant group scheme. Let q ∈ Z be a prime of good reduction. We observe that the varieties A/M and Â/C^∨ contain constant subgroup schemes of order #C and #M respectively. It follows from Weil's Riemann Hypothesis that Abelian varieties of dimension d over F_q have at most (1 + √q)^{2d} points. Thus, choosing q = 7, say, since 5 > 1 + √7, we have a contradiction. This completes the proof of Theorem 2.2 up to Theorem 2.4, which we prove now.
Group Schemes over Z[1/6].
First, some preliminary remarks on group schemes. Here we follow Schoof [9]. Let (ℓ, N) = 1. Let C be the category of finite group schemes G over Z[1/N] satisfying the following properties: 2. The action of σ ∈ I_p on G(Q̄_p) is either trivial or cyclic of order ℓ.
For example, Z/ℓZ and µ_ℓ are objects of C. As remarked in [9], this category is closed under direct products, flat subgroups and flat quotients. Thus, to prove that any object of C has a filtration by Z/ℓZ and µ_ℓ, it suffices to show that the only simple objects of C are Z/ℓZ and µ_ℓ. (See [9].) For any unit ǫ ∈ Z[1/N] there is a corresponding group scheme G_ǫ of order ℓ^2 killed by ℓ. It is an extension of Z/ℓZ by µ_ℓ, and is defined over Q(ζ_ℓ, ǫ^{1/ℓ}).
Let N = 6 and ℓ = 5. To prove that the only simple objects of C are µ_5 and Z/5Z, it suffices to show that any object of C is defined over the field K, where K = Q(ζ_5, 2^{1/5}, 3^{1/5}), because of the following result: let G be a simple object of C, let L = Q(G(Q̄)), and suppose that Gal(L/Q(ζ_ℓ)) is an ℓ-group. Then G is either Z/ℓZ or µ_ℓ.
Proof. Since any ℓ-group acting on (Z/ℓZ)^d has at least one (in fact at least ℓ − 1) nontrivial fixed points, there exists a point P of G defined over Q(ζ_ℓ). Since G is simple, P generates G as a Galois module and thus Q(G) ⊆ Q(ζ_ℓ). Since (N, ℓ) = 1, and since G is unramified outside ℓ, G prolongs to a finite group scheme over Z, killed by ℓ, and defined over Q(ζ_ℓ). Since the (ℓ − 1)th roots of unity are in F_ℓ^*, any simple subgroup scheme of G has order ℓ. From Oort-Tate [8], the finite group schemes of prime order ℓ over Z are Z/ℓZ and µ_ℓ.
Let G be an object of C. To prove that Q(G) ⊆ K it clearly suffices to prove the same inclusion for any group scheme which contains G as a direct factor. Consider the field L = Q(G × G_{−1} × G_2 × G_3). One sees (from the definition of G_ǫ) that K := Q(ζ_5, 2^{1/5}, 3^{1/5}) ⊆ L. We prove that L = K. Using the estimates of Fontaine [3] we obtain an upper bound on the ramification of L at 5. Since inertia at 2 and 3 acts through a cyclic subgroup of order 5, we also have ramification bounds at 2 and 3. As in Schoof [9] and Brumer-Kramer [1], we obtain the following estimate of the root discriminant. From the discriminant bounds of Odlyzko [7], under the assumption of GRH, one concludes that [L : Q] < 2400 and thus [L : K] < 24. In particular, L/Q is a solvable extension, and thus we can apply tools from class field theory.
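The estimate itself appears to have dropped out of the text. Combining the tame bounds at 2 and 3 (inertia cyclic of order 5) with Fontaine's bound at 5 quoted in Theorem 1.1, and consistently with the figures used later in this section, it should be of the shape

\[
\delta_L \;\le\; 2^{4/5}\, 3^{4/5}\, 5^{\,1 + \frac{1}{4}} \;\approx\; 31.4 .
\]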
Remark. Without the GRH, we are unable to bound [L : Q], since a root discriminant of roughly 31 exceeds the limits of current unconditional discriminant bounds. Our calculations in this section could be shortened by more reliance on computer calculation. However, for exposition we include as much class field theory as we can do by hand. This leads us to consider several group theory lemmas which allow us to do computations in smaller fields.
L/K Tame.
In this section we assume that L/K is a tame extension of degree coprime to 5, and prove that L = K. Proof. It suffices to note that for all groups G′ of order less than 10, |Aut(G′)| is coprime to 5.
Lemma 3.4 If L/K is a tame extension of degree coprime to 5, then L = K.
Proof. Let H be the field Q(ζ_5, 2^{1/5}). We have the following exact sequence of groups:
L/K Wild.
In this section, we assume that L/K is wildly ramified and of degree 10, 15 or 20. Proof. Let H′ be the 5-Sylow subgroup of H. Then, since 5(1 + 5) > 20, H′ is normal. Thus we have the following exact sequence: Since H′′ is Abelian, the commutator subgroup of H is a subgroup of H′′. To show that G^{ab} is not a 5-group, it suffices to show that the commutator subgroup of G also lies within the 5-Sylow subgroup of H. Let τ be an element of G that maps to a generator of Z/5Z. The action of conjugation by τ on H is via an automorphism of order 5. To show that [τ, h] ∈ H′ it suffices to show that for any automorphism σ of order 5 on H, σ(h)h^{−1} ∈ H′. Since all elements of order 5 lie in H′, H′ is preserved by σ. Yet Aut(Z/5Z) ≃ Z/4Z, and thus σ fixes H′. Thus σ maps to an element of Aut(H′′). Since Aut(H′′) has order coprime to 5 for |H′′| ≤ 4, σ also acts trivially on the quotient. Hence for any automorphism σ of order 5 on H, σ(h)h^{−1} ∈ H′, and we are done.
Let H be the field Q(ζ_5, 2^{1/5}). We have the following exact sequence of groups: By Lemma 3.5, Gal(L/H)^{ab} is not a 5-group. Thus H admits an Abelian extension of degree coprime to 5. The non-existence of such an extension was proved in Lemma 3.4.
L/K of degree 5.
Finally, it remains to show that L/K is not wildly ramified of degree 5, or unramified over K. Assume otherwise. Gal(L/Q(ζ_5)) is a group of order 125 that surjects onto Z/5Z ⊕ Z/5Z. There are three groups up to isomorphism with this property. All of them admit at least one morphism to Z/5Z with kernel Z/5Z ⊕ Z/5Z that factors through the map to Gal(K/Q(ζ_5)). Thus there exists a field E/Q(ζ_5), contained within K, such that Gal(L/E) ≃ Z/5Z ⊕ Z/5Z. Lemma 3.6 There exists an intermediate field L/F/E such that F is not equal to K and F/E is unramified at primes above 2 and 3.
Proof. Since the root discriminant of L locally at 2 and 3 is bounded by 2^{4/5} and 3^{4/5} respectively, this lemma is obvious if the root discriminant for E attains these bounds, since then any subgroup of Gal(L/E) = Z/5Z ⊕ Z/5Z not corresponding to K will produce the required F. Thus we may assume that E = Q(p^{1/5}, ζ_5) with p equal to 2 or 3. Assume p = 2. Since K/E is ramified at primes above 3, it suffices to find an F ⊂ L unramified at primes above 3. The tame ramification group I_3(L/E) is of order 5; this follows by considering the exponent of 3 in the root discriminant. Thus we see that the fixed field F of I_3(L/E) ⊂ Gal(L/E) is unramified at 3 above E. Moreover, F is not K since K/E is ramified at 3. An identical argument works for p = 3. Lemma 3.7 If E/Q is wildly ramified at 5 then either F/E is unramified at 5 or ∆_{F/E} = π_E^8, where π_E is the unique prime above 5 in E. If E = Q(ζ_5, 24^{1/5}) then ∆_{F/E} divides (π_{E,1} · · · π_{E,5})^8, where the π_{E,i} are the primes above 5.
Proof. Suppose that E/Q is wildly ramified. We may assume that F/E is also wildly ramified, since otherwise it is unramified, and we are done. Suppose that N_{E/Q}(∆_{F/E}) ≥ 5^{10}. Then δ_{F,5} = δ_{E,5} · N_{E/Q}(∆_{F/E})^{1/100} ≥ 5^{23/20} · 5^{10/100} = 5^{5/4}, which contradicts the Fontaine bound. On the other hand, we have the equality regarding the discriminant given in [10], IV, Proposition 4; thus v_{F/E} = 4 or 8. Since we have wild ramification, v_{F/E} > e_{F/E} − 1, and thus v_{F/E} = 8 and ∆_{F/E} = π_E^8. Suppose now that E = Q(ζ_5, 24^{1/5}). Let π_{K,i} be the unique prime above π_{E,i} in O_K. If ∆_{L/K} = (π_{K,1} · · · π_{K,5})^v, an argument similar to the above using the Fontaine bound shows that v < 10. It follows that v_{F/E} < 12. Yet, as above, v_{F/E} ≡ 0 mod 4, and thus v_{F/E} ≤ 8.
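The equality from [10] invoked in the proof above is presumably the standard expression for the different in terms of the higher ramification groups,

\[
v_P(\mathcal{D}_{F/E}) \;=\; \sum_{i \ge 0} \bigl(\#G_i - 1\bigr),
\]

which, for a wildly ramified extension of degree 5, forces the exponent to be a multiple of 4; this is offered as a plausible reconstruction of the lost display.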
Corollary 3.1 If F/E is ramified, and F/Q is wildly ramified at 5, then the conductor f_{F/E} is equal to π_E^2. If F/Q is tamely ramified, then the conductor divides (π_{E,1} · · · π_{E,5})^2.
Proof. This follows from the previous lemma, and the conductor-discriminant formula.
Thus the existence of F would be detected by the ray class group of conductor f_{F/E}. We may calculate these groups with the aid of pari. The results are tabulated in the table in the appendix (section 5.1), and they indicate that the proof is complete, after noting that in all cases where the ray class field is non-trivial, the extension K/E is either unramified or has conductor dividing f_{F/E}.
N = 10.
Let us begin by stating the analogues of theorems in section 2.2.
Theorem 4.1 Let G/Z[1/10] be a finite group scheme of 3-power order such that inertia at 2 and 5 acts through a procyclic 3-group. Then G has a filtration by the group schemes Z/3Z and µ_3. Moreover, if G is killed by 3, then Q(G) ⊆ H, where K = Q(ζ_3, 2^{1/3}, 5^{1/3}) and H is the Hilbert class field of K, which is of degree 3 over K. Theorem 4.2 Let G/Z[1/p] be a finite group scheme of 3-power order such that inertia at p acts through a procyclic 3-group. Then G has a filtration by the group schemes Z/3Z and µ_3. Moreover, the extension group Ext^1(µ_3, Z/3Z) of group schemes over Z[1/p] is trivial, and there exists an exact sequence of group schemes 0 → M → G → C → 0, where M is a diagonalizable group scheme over Z[1/p], and C is a constant group scheme.
One technical difficulty is that I_p(H/Q) is not a normal subgroup of Gal(H/Q), for p equal to either 2 or 5. We do however make the following observation: the primes 2 and 5 split into 3 distinct primes in K. Moreover, these primes remain inert after passing to H. The easiest way to see this is by noting that H is the compositum of K and the Hilbert class field of Q(20^{1/3}). In this field, the primes above 2 and 5 are not principal, and so remain inert in the Hilbert class field. By Grothendieck (Theorem 1.2) one finds that, as an endomorphism, (σ − 1)^2 = 0 on A[3] for σ ∈ I_{p′}. Thus σ^2 = 2(σ − 1) + 1, and one sees (since D_p(H/Q) and σ ∈ I_{p′}(H/Q) generate Gal(H/Q)) that M is closed under the action of Galois.
We now apply this construction not to M_1(p), as in section 2.3, but to M_2(p). Let us assume that ord_3(Φ_Â(p)) is maximal for some p ∈ {2, 5}. If κ = M_2(p), then from Theorem 2.1, since M_2(p) ⊆ κ ∩ M_1(p), the quantity appearing there is at least 2t_p − dim κ. On the other hand, from the previous lemma we see that dim κ ≤ 2t_p, with equality if and only if {P_1, . . . , P_t, (σ − 1)P_1, . . . , (σ − 1)P_t} are independent inside A[3]. Since ord_3(Φ_Â(p)) is maximal, we have equality. Since the image of (σ − 1) on A[3] for σ ∈ I_{p′} is contained within M_2(p′) and has dimension at most t_{p′}, this immediately proves that t_p ≤ t_{p′}, and by symmetry, that t_2 = t_5. Moreover, equality forces M_2(p) = κ ∩ M_1(p), and thus by dimension considerations we obtain a decomposition of A[3] as a vector space. Lemma 4.2 For ord_3(Φ_Â(p)) maximal, Q(A[3]) is unramified at p.
Consider the decomposition A[3] = M_2(p) ⊕ (M_1(p) \ M_2(p)). By definition, I_p acts trivially on M_1(p). Thus it suffices to show that I_p acts trivially on M_2(p) = {P_1, . . . , (σ − 1)P_t}. Since I_p(H/Q) = I_p(H/Q(ζ_3)) for p ∈ {2, 5}, we work over this field. Since for τ ∈ I_p the image of (τ − 1) lies within M_2(p), the action of τ ∈ I_p is represented by a matrix: On the other hand, M_2(p) is a Gal(Q̄/Q) module and the action of σ is given by: It suffices to prove that a = 0, since then we have shown I_p acts trivially on A[3]. With this result, we may now establish Theorem 2.3 in much the same way as Theorem 2.2. Here are the extra steps required to complete the proof: 1. For ord_3(Φ_Â(2)) maximal, the exact sequence of group schemes
A final contradiction is reached because
is not true. One might remark at this point that since A has good reduction at 3, and since A is defined over Q, the 3-torsion injects into A(F p ) [3], as follows from standard facts about formal groups.
Thus it remains to prove Theorem 4.1.
Group Schemes over Z[1/10].
Since Gal(H/Q(ζ_3)) is a 3-group, the discussion at the beginning of section 3 shows that it suffices to prove that if L = Q(G × G_{−1} × G_2 × G_5) then L ⊆ H. One has the following estimate of the root discriminant for L. From the estimates of [7] one finds that [L : Q] < 280, and so, since [K : Q] = 18, [L : K] < 16. We wish to prove that Gal(L/K) is a 3-group. The root discriminant of K is δ_K = 3^{7/6}·10^{2/3}, and so L/K is ramified at most at primes above 3.
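The estimate referred to above appears to have been lost. Combining the tame bounds at 2 and 5 (inertia cyclic of order 3) with Fontaine's bound at 3, and consistently with the bound δ_L < 3^{3/2}·10^{2/3} quoted later in this section, it should read

\[
\delta_L \;\le\; 2^{2/3}\, 5^{2/3}\, 3^{\,3/2} \;=\; 10^{2/3}\, 3^{3/2} \;\approx\; 24.1 .
\]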
L/K Tame
In this section we assume that L/K is a tame extension. If [L : K] ≤ 6, then either Gal(L/K) is a 3-group or it surjects onto a non-trivial group of order coprime to 3. In this case, L would contain an Abelian extension E/K tamely ramified and of degree coprime to 3.
Lemma 4.4 There are no Abelian extensions E/K tamely ramified of order coprime to 3.
Proof. We proceed via class field theory. According to pari, the class number of K is 3, its Hilbert class field being the compositum of H and the Hilbert class field of Q(√−3, 20^{1/3}). Thus it suffices to show that global units of O_K generate (O_K/p_1 p_2 p_3)^*. On the other hand, since K/F is totally ramified, we have an isomorphism. Hence it suffices to use global units from F. Then from pari we find that the 2 fundamental units ǫ_1 and ǫ_2 of O_F. We find that the images of −1, ǫ_1, and ǫ_2 in O_F/π_1 × O_F/π_2 × O_F/π_3 are (−1, −1, −1), (1, 1, −1) and (1, −1, 1) respectively. Since these elements generate the group (F_3^*)^3, we are done.
L/K Wild
We assume that L/K is wildly ramified at 3, and that Gal(L/K) is (for the moment) not a 3-group. If Gal(L/K)^{ab} is not a 3-group, then there would exist a corresponding extension E/K, tame of order coprime to 3. Since no such extensions exist (see the tame case), we may also assume that Gal(L/K)^{ab} is a 3-group. Let G denote the group Gal(L/K). There should be no confusion between the group G and the group scheme G, which will not appear again. Let n = |G|. Since n < 16, n ∈ {6, 12, 15}. All groups of order 15 are Abelian. If n = 6, the only non-Abelian group is S_3. Yet S_3^{ab} = Z/2Z. Thus n = 12. The only group G of order 12 such that G^{ab} = Z/3Z is the non-trivial extension of Z/3Z by Z/2Z ⊕ Z/2Z. Yet δ_L < 3^{3/2}·10^{2/3} by the Fontaine bound, and we are done.
Before we proceed, we introduce some notation and results from Serre [10]. Let G_i ⊆ G be the higher ramification groups of some prime p above 3 in L/K. These groups are defined by p up to conjugacy. However, since L/Q is Galois, the orders of the G_i are independent of the choice of p above 3. Let P be a prime above p. Let us simplify some notation: let v = v_P(D_{L/K}), f = f_{L/K}, e = e_{L/K}, r = r_{L/K}. We have equalities relating these quantities; from the previous lemma, 22 ≤ frv ≤ 23. Moreover, fre = [L : K] = 12. Since L/K is wildly ramified, 3 | e. Hence it suffices to show that e = 3, e = 6 and e = 12 all lead to contradictions. If e = 3, then fr = 4. Yet fr divides 23 or 22, which is impossible. Suppose that e = 6. Then G_0, of order 6, must be a normal subgroup of G since it is a subgroup of index 2. If G had such a subgroup, then G^{ab} would not be a 3-group. Thus we may assume that e = 12. If e = 12 then the 3-group G_1 would be a normal subgroup of G_0 = G. Since G has no such subgroup, we are done, and Gal(L/K) is a 3-group.
Thus we may assume that L/K is Galois of degree dividing 9, and thus Abelian. Let f_{L/K} be the conductor of this extension. If (π_{K,1} π_{K,2} π_{K,3})^3 divides f_{L/K}, then from the conductor-discriminant formula δ_L exceeds the Fontaine bound. Thus it suffices to note that the ray class field of conductor f = (π_{K,1} π_{K,2} π_{K,3})^2 of K has Galois group Z/3Z over K, coming exactly from the Hilbert class field H of K.
Pari Script.
Here is the pari script for fields other than Q(ζ_5, 24^{1/5}) and Q(ζ_3, 2^{1/3}, 5^{1/3}), where an adjustment must be made since the conductor is of a slightly different form. The calculation of the discriminant was included as a check against typographical errors in the defining polynomials.
|
2019-04-12T09:09:28.265Z
|
2001-03-03T00:00:00.000
|
{
"year": 2001,
"sha1": "92dfb30eebe3ad6a421b52056164d2f9113c818f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "92dfb30eebe3ad6a421b52056164d2f9113c818f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
85507071
|
pes2o/s2orc
|
v3-fos-license
|
Potential of Wood Ash as a Fertilizer in BRS Piatã Grass Cultivation in the Brazilian Cerrado Soil
Utilizing wood ash as a fertilizer in agriculture is a viable alternative for replacing the soil nutrients absorbed by the crops. The aim of this study was to assess the phytometric and productive features of Brachiaria brizantha (cv. BRS Piatã) fertilized with wood ash in the Brazilian Cerrado. The experiment was performed in a greenhouse, adopting a completely randomized design and applying five rates of wood ash (0, 5, 10, 15 and 20 g·dm−3) with five replicates. The shoot plant parts were subjected to three successive cuts at 30-day intervals. The results were submitted to analysis of variance and regression analysis at 5% probability. Wood ash rates between 13 and 17 g·dm−3 produced the best results for plant height (102.24, 84.42 and 63.27 cm), leaf/stem ratio (1.61, 1.78 and 1.94) and chlorophyll index (46.66, 41.93 and 38.39), respectively, in the first, second and third evaluations. A 94% increase in shoot dry mass (second and third evaluations) and root dry mass was noted for the wood ash rate of 20 g·dm−3, compared with the treatment without wood ash fertilization. Wood ash affects the phytometric features, increases the chlorophyll concentration and thus increases BRS Piatã grass production in the Oxisol of the Brazilian Cerrado.
Introduction
Ranked among the principal production systems of the world, pastures account for around 70% of arable land [1] and are regarded as very important in several temperate and tropical zones [2]. Brazil possesses nearly 200 million hectares of native and cultivated pastures under suitable edaphoclimatic conditions [3], mainly of Brachiaria grasses.
The genus Brachiaria has contributed significantly to Brazil, as it enabled cattle ranching on acidic soils of poor fertility, and it forms the basis of the cultivated pastures in the country [4].
The arable areas frequently have soils of uneven quality, with nutritional deficits that arise from intense and constant cultivation. Therefore, soil fertility is normally corrected by applying chemical fertilizers, obtained from non-renewable sources, which in turn raise production costs. Hence, alternatives need to be established to maintain these systems on a long-term basis, and wood ash is one such alternative source [5].
Considering this, cattle ranchers require alternative and cheaper means of soil fertilization, without adversely affecting either the environment or grazing animals. Wood ash appears to rank high among the suitable options for agricultural crops, as it offers a vital means of recovering some of the nutrients lost through soil leaching and crop utilization. Wood ash is composed of a series of elements essential for plants and, as it is alkaline, it changes the soil pH, which is an important criterion for its implementation in agriculture [6].
As a good option, reusing wood ash can minimize commercial fertilizer requirements, which in turn will reduce soil acidification and raise the calcium reserves [7]. Utilization of plant waste in agriculture is a good practice that has increased production and alleviated issues of solid waste disposal. From this perspective, using the ash as a fertilizer can prove to be a sustainable alternative supply of P to agricultural systems [8]. As it is versatile in nature, wood ash can be utilized along with liquid waste, to boost its nutrient supply [9].
Brazil has enormous potential for generating energy through thermochemical conversion, which produces large quantities of waste (ash) that must ultimately be disposed of. One possible use is their application to pastures used for grazing and for the recovery of degraded areas. However, scientific evidence is necessary for the correct application of the wood ash [10].
The aim of this study was to assess the productive and phytometric characteristics of Brachiaria brizantha (cv. BRS Piatã) in response to the application of wood ash rates to a Brazilian Oxisol.
Material and Methods
The experiment was conducted in a greenhouse at the Federal University of Mato Grosso, Campus of Rondonópolis-MT, Brazil, using the forage grass Brachiaria brizantha cv. BRS Piatã.
The soil was collected at a depth of 0.00 - 0.20 m from a region supporting Cerrado vegetation and is classified as an Oxisol [11]. The chemical and granulometric analysis of the soil was done according to [12], and the results are shown in Table 1. The wood ash used, from a food-industry boiler, had a pH of around 10.4 and was characterized as a fertilizer (Table 2).
The completely randomized design was adopted for the experiment, comprising five rates of wood ash, with five replicates. The wood ash was applied at rates of 0, 5, 10, 15 and 20 g·dm−3. Each experimental plot consisted of a plastic pot with a capacity of 5 dm3 of soil. The wood ash was incorporated into the soil and left to incubate for 30 days.
After the soil was incubated with the wood ash, sowing was done at about 2.5 cm depth. Germination began five days after sowing. The plants were thinned once they reached 10 cm in height, according to criteria of size, homogeneity and arrangement within the pots, leaving only five plants per pot (Figure 1).
The plants were irrigated by the gravimetric method, maintaining soil moisture at 60% of the maximum water retention capacity. Nitrogen fertilization, using urea as the nitrogen source, was applied to the experimental plots at the recommended rate of 200 mg·dm−3, split into three applications (the first performed at the time of plant thinning, the second and third after each successive cut).
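The gravimetric method mentioned above amounts to weighing each pot and topping up with water until the soil holds the target fraction of its maximum retention capacity. A minimal sketch of that bookkeeping, using hypothetical masses (the paper does not report pot weights), is:

```python
def water_to_add_g(current_pot_mass_g, dry_pot_mass_g,
                   max_retention_g, target_fraction=0.60):
    """Grams of water needed to bring soil moisture up to the target
    fraction of the maximum water retention capacity (values illustrative)."""
    current_water = current_pot_mass_g - dry_pot_mass_g
    target_water = target_fraction * max_retention_g
    return max(0.0, target_water - current_water)

# Made-up example: a dry pot of 5600 g that can hold at most 1200 g of water.
print(water_to_add_g(6100.0, 5600.0, 1200.0))  # adds 220 g of water
```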
The phytometric characteristics were evaluated by performing three cuts at 30-day intervals, assessing plant height and leaf/stem ratio, as well as the productive characteristics (shoot and root dry masses).
The chlorophyll index was checked every 30 days, before each cutting; 10 readings were recorded per experimental unit on recently expanded diagnostic leaves (+1 and +2) (Figure 2), taking the average reading for each pot. The readings were taken avoiding the leaf ribs and under suitable light intensity conditions.
At the time the plants were cut, the plant heights from the soil to the tip of the forage canopy were recorded using a graduated scale, and the average plant heights per pot were noted.
After each of the three cuttings, the plant material was harvested and weighed, and the tiller and leaf masses were estimated separately. The samples were packed in paper bags and subjected to forced-air drying at 65˚C for 72 hours, until constant mass was achieved.
At the third cutting of the grass, apart from recording the dry mass of the aerial plant parts, the plant roots were collected. They were separated from the shoot using scissors and washed under running water through a 1.00 mm sieve to eliminate the soil. The findings were then submitted to analysis of variance by the F test and, when significant, regression analysis was done at 5% error probability. Statistical analyses were performed using the SISVAR statistical program [13].
Results and Discussion
The chlorophyll index was adjusted to the quadratic model of regression for all three cuttings of the Piatã grass, revealing the degree of correction and fertilization the wood ash made to the soil. After the first and second cuts, the wood ash doses that induced the maximum readings were 13.10 and 13.99 g·dm−3, corresponding to chlorophyll indices of 46.66 and 41.93, respectively. After the third cut, the wood ash rate of 17.91 g·dm−3 was found to induce the highest chlorophyll index of 38.39 (Figure 3).
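The optimum rates above correspond to the vertex of the fitted quadratic dose-response curve. The paper does not reproduce the fitted coefficients, so the sketch below uses made-up data purely to illustrate how such an optimum is obtained.

```python
import numpy as np

# Hypothetical dose-response data (wood ash rate in g/dm^3 vs chlorophyll
# index); the real values come from the SISVAR analysis and are not given.
rates = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
chlorophyll = np.array([28.0, 40.0, 45.5, 46.0, 43.0])

# Fit the quadratic regression model y = c2*x^2 + c1*x + c0.
c2, c1, c0 = np.polyfit(rates, chlorophyll, deg=2)

# The rate giving the maximum predicted response is the vertex of the parabola.
optimal_rate = -c1 / (2.0 * c2)
predicted_max = np.polyval([c2, c1, c0], optimal_rate)

print(f"optimal wood ash rate ~ {optimal_rate:.2f} g/dm^3")
print(f"predicted chlorophyll index ~ {predicted_max:.2f}")
```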
The provision of Fe and Mg as soil nutrients, via fertilization using the wood ash, may have affected the plant chlorophyll index, as these nutrients are vital to the composition of this structure and biological nitrogen fixation.
Mg, besides being structurally a part of the chlorophyll molecule, is also a cofactor in ATP hydrolysis, supplying energy for fixing atmospheric N2 [14]. Fe acts on the nitrogenase protein complex, forming the Fe-protein and the molybdenum-iron protein, which, in the presence of ATP, catalyze the reduction of atmospheric nitrogen to ammonia [15].
Wood ash as fertilizer supplied an appreciable amount of potassium to the plants; potassium enhances the activation of the enzymes responsible for nitrogen assimilation, protein synthesis and synthesis of leaf starch [16]. In this context, the combined rates of nitrogen and potassium induced a higher shoot dry mass and chlorophyll concentration in the greenhouse-raised leaves [17].
Figure 3. Chlorophyll index at the first, second and third cuts of Brachiaria brizantha cv. BRS Piatã, as a result of wood ash rates in the Oxisol. CI = Chlorophyll Index; Wa = Wood ash.

Research assessing the effect of applying wood ash on the chlorophyll index of Brachiaria brizantha cv. Marandu showed an increase in the chlorophyll concentration [18]. The application of wood ash as fertilizer in tropical forage grasses improved the expression of the structural characteristics and raised the chlorophyll indices of the Marandu and Xaraés grasses cultivated in the Cerrado of Mato Grosso State [19].
A 1% significance was noted for plant height among the different wood ash rates, adjusted to the quadratic model of regression in the first, second and third cuts (Figure 4(a)). For the first cut, the 12.72 g·dm−3 rate induced the maximum height (102.24 cm) in the Piatã grass, representing an 80.54% increase compared with the treatment that was not fertilized with wood ash.
In the case of the second cutting, the wood ash dose of 13.42 g•dm −3 induced maximum plant height (84.42 cm), producing a 92.86% increase compared with the treatment that lacked the addition of wood ash.
At the third and final cutting of the plants, the 17.17 g·dm−3 rate produced the maximum plant height (63.27 cm), a 96.44% increase when compared with the shortest plant height, in the treatment without the addition of wood ash (control).
The rates that induced the maximum heights increased slightly from one cut to the next, while this structural feature itself decreased. This could possibly be ascribed to the residual effect of the wood ash fertilization, since the residue was not reapplied for the second and third cuttings of this grass. Similar results were noted in a study in which the residual effect of the application of wood ash on the structural features of Brachiaria brizantha (Marandu and Xaraés cultivars) was observed [19].
Grass height is one of the significant features evaluated to assess the productive potential of these grasses. Plant height is regarded as a structural characteristic pertinent to adopting adequate management [20].
In all three cuts performed, the leaf/stem ratio was adjusted to the quadratic regression model. The 13.0 g·dm−3 rate was found to induce the highest results, achieving leaf/stem ratios of 1.61, 1.78 and 1.94, respectively, for the first, second and third cuts (Figure 4(b)). In all three cuts, the leaf/stem ratio in the control treatment was notably below the minimum ratio considered critical (ratio = 1) [21]. However, in all three cuts, the leaf/stem ratio of the grasses was greater than 1 when the lowest wood ash rate (5 g·dm−3) was applied to the soil, revealing the benefits of correcting and fertilizing the soil with this residue.
Significance at 1% probability was observed for shoot dry mass in all three cuts. In the first cut, the shoot dry mass was adjusted to the quadratic regression model, with maximum yield (31.72 g) at a rate of 16.35 g·dm−3 of wood ash (Figure 5).
For the second and third cuts of the forage grass, the shoot dry mass was adjusted to a linear regression model, in which increments higher than 94% were noted within the range under study, comparing the treatment supplied with the maximum rate (20 g·dm−3) with the control treatment (with no wood ash application).
The quadratic response of the dry mass of the Piatã grass shoots is to be expected, as the rate that induced the highest yield (16.65 g·dm−3) supplied 296.65 mg·dm−3 of phosphorus. In fact, the phosphorus rates for maximum yield of Piatã grass cultivated in Oxisol are in the range of 189 - 304 mg·dm−3 [22].
The linear response, evident in the second and third growth phases, can thus be ascribed to the reduction of the phosphorus concentration in the soil, since no further fertilization with wood ash was applied after the first cut. Several research papers show that various crops exhibited growth and production increases in field and greenhouse experiments. Grasses like oats (Avena sativa L.), wheat (Triticum aestivum L.) and maize (Zea mays L.), as well as a few legumes such as beans (Phaseolus vulgaris L.) and soybean (Glycine max L.), showed a biomass increase after wood ash application. Such yield increases were attributed mostly to the additional provision of K, P and B present in the ash [23]. Generally, when wood ash was applied, the acidic properties of the soils were ameliorated, which resulted in increased crop productivity [24]. A study with an Ultisol and an Oxisol in Brazil reported that the greatest increases in Marandu grass production were obtained at 15 g·dm−3 of wood ash [25], corroborating the results of this research and evidencing the potential of wood ash for crop production.
The root dry mass was adjusted to the linear regression model, with a 94% yield increase over the range analyzed (Figure 6). Notably, phosphorus encourages faster root growth and plant development and raises water-use efficiency, as it affects root development and tillering in grasses, particularly during the establishment stage [22].
Therefore, considering the resulting maximum yield produced in this study interval, the wood ash probably contained an inadequate concentration of this nutrient, thus highlighting the need for a longer experimental interval.
Wood ash has frequently been reported to be used as a fertilizer. After burning, appreciable quantities of P, Ca, Mn and Mg (about 75%) have been confirmed in the residue [26]. The application of wood ash for the growth of tropical plants is advantageous as it minimizes Al and Mn toxicity, besides supplying the plants with nutrients [27]. Already at that time, the authors emphasized the need for care regarding nutrient imbalance: when wood ash is used for soil correction, supplementation is necessary, mainly with N, P and K.
The main limitations on the use of wood ash in large fields are the transport of the ash from industries to the farm for application to the soil and the creation of a culture of using alternative fertilizers. The relatively high rates of wood ash necessary for the best effects on crop development are another challenge to consolidating the use of wood ash in agriculture.
Conclusion
Wood ash applied at rates of 13 to 17 g·dm−3 positively affects the phytometric characteristics and the chlorophyll index of Brachiaria brizantha (cv. BRS Piatã) in the Brazilian Cerrado Oxisol. The productive characteristics of the Piatã grass responded best to the wood ash rate of 16 g·dm−3, which provided the maximum yield during the first growth stage. During the second and third growth stages of the forage grass, fertilization with wood ash produced linear responses, indicating the need to restore soil fertility, as the plants absorb nutrients and export them via the aerial parts removed at each cutting.
Figure 2.
Figure 2. Employing the clorofiLOG meter, the chlorophyll index was read in the newly expanded leaf blades of Brachiaria brizantha cv. BRS Piatã (a); scheme showing the separation of the aerial parts of Piatã grass (b). Source: adapted from Lange (2007).
Figure 4.
Figure 4. Plant height (a) and leaf/stem ratio (b) in the first, second and third cuts of Brachiaria brizantha cv. BRS Piatã, in response to the wood ash rate in the Oxisol. PH = Plant Height; LSR = Leaf/Stem Ratio; Wa = Wood ash.
Figure 5.
Figure 5. Shoot dry mass at the first, second and third cuts (a) of Brachiaria brizantha cv. BRS Piatã, in response to wood ash rates in the Oxisol. Shoot dry mass at the first (b), second (c) and third (d) cuts. SDM = Shoot Dry Mass; Wa = Wood ash.
Figure 6.
Figure 6. Root dry mass of Brachiaria brizantha cv. BRS Piatã, in response to the wood ash rate in the Oxisol. RDM = Root Dry Mass; Wa = Wood ash.
Table 2.
Chemical composition of wood ash.
|
2019-03-22T06:30:02.756Z
|
2017-09-04T00:00:00.000
|
{
"year": 2017,
"sha1": "c96cdae1e74f0b7d160e37589740c53b595f2b19",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=78890",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "939363a4fa7f67889a9a53d89163bec7b5c76bf7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
9905757
|
pes2o/s2orc
|
v3-fos-license
|
Probiotics and oral health
Probiotics utilize the naturally occurring bacteria to confer health benefits. Traditionally, probiotics have been associated with gut health, and are being mainly utilized for prevention or treatment of gastrointestinal infections and disease; however, recently, several studies have suggested the use of probiotics for oral health purposes. The aim of this review is to understand the potential mechanism of action of probiotic bacteria in the oral cavity and summarize their observed effects with respect to oral health.
Introduction
Not all the bacteria are harmful to the human body. In fact, some microbes can have beneficial health effects on the host. Such live microbes are termed as probiotics. The term 'probiotic' was derived from the Greek word meaning "for life". [1] This term was first used in 1965, by Lilly and Stillwell for describing substances secreted by one organism which stimulate the growth of another. [2] An expert panel commissioned by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) defined probiotics as "live micro-organisms", which when administered in adequate amounts confer a health benefit on the host. The bacterial genera most commonly used in probiotic preparations are Lactobacillus and Bifidobacterium [ Table 1].
Prebiotics, by definition, are non-digestible food ingredients that confer benefits on the host by selectively stimulating the growth and/or activity of one bacterium or a group of bacteria in the colon, thus improving host health [3]. Oligosaccharides in the groups of fructo-oligosaccharides and galacto-oligosaccharides are the most commonly studied prebiotics. They escape digestion in the upper gastrointestinal tract so that they can be released in the lower tract and used by beneficial microorganisms in the colon, mainly bifidobacteria and lactobacilli.
Traditionally, probiotics have been associated with gut health, however, during the last decade, an increasing number of established and proposed health effects of probiotic bacteria have been reported, including enhancement of the adaptive immune response, treatment or prevention of urogenital and respiratory tract infections, and prevention or alleviation of allergies and atopic disease in infants. Recently, their beneficial effects on oral health have been suggested. A few reports have suggested the role of lactobacilli and bifidobacteria in the prevention of oral infectious diseases such as caries and periodontal disease. The aim of this review is to discuss the distribution of probiotic bacteria in the oral cavity, their potential mechanism of action and their observed effects in the oral cavity.
Distribution of probiotic bacterial strains in the oral cavity
Some probiotic Lactobacillus, Bifidobacterium and Streptococcus strains seem to be able to colonize the oral cavity during the time that products containing them are in active use. Salivary and gingival crevicular fluid (GCF) samples are often used to evaluate the microbial composition of the oral cavity. Hojo et al. found L. salivarius, L. gasseri, and L. fermentum to be among the most prevalent species in the mouth, but no significant difference in their numbers was found between groups of healthy patients and patients with periodontitis. [4] Conversely, the study suggested Bifidobacterium to be associated with periodontal health, as its composition varied among these study groups. Nearly similar results regarding Lactobacillus were reported by Koll-Klais et al. [5] Lactobacilli are rarely detected in subgingival samples, and they could not be found in any patients with chronic periodontitis. L. rhamnosus GG and two different L. reuteri strains have been reported to colonize the oral cavity for a short time after their use.
Potential mechanism of probiotic effects in the oral cavity
The general mechanism of action of probiotics can be divided into three main categories: (1) normalization of the intestinal microbiota, (2) modulation of the immune response, and (3) metabolic effects. The mechanism of action of probiotic bacteria in the oral cavity could be analogous to that described in the gut. Bacterial biofilm formation in the oral cavity is considered to be the principal etiological agent in many pathological conditions in the mouth. Once the oral biofilm reaches maturity, a dynamic interplay between the host and microbial species is established. The inflammatory byproducts, along with bacterial endotoxin and metabolic products, are mainly responsible for periodontal destruction. Probiotic therapy could be considered as a means of inhibiting oral biofilm development and reducing the cascade of harmful immune-inflammatory reactions [Figure 1].
Though no in-vivo studies have been reported in this area, in-vitro studies have suggested an antimicrobial role for probiotics. The in-vitro antimicrobial activity of Lactobacillus species against oral microbial species, including Actinobacillus actinomycetemcomitans and Porphyromonas gingivalis, has been studied. [6] Actinobacillus actinomycetemcomitans was the most susceptible species to lactobacilli under the conditions of this experiment. All four tested strains of lactobacilli, namely L. rhamnosus 5.1a, L. rhamnosus 5.3a, L. rhamnosus 5.5a and L. rhamnosus Lc705, were found to inhibit all three periodontal pathogens investigated.
To exert their action in the oral cavity, probiotic microorganisms should be able to resist the oral environmental conditions and defense mechanisms, adhere to saliva-coated surfaces, colonize and grow in the mouth, and inhibit oral pathogens. There is thus a need to develop probiotic strains and species that can resist oral environmental conditions and exert their antimicrobial action there.
Observed effects on periodontal disease
The initial studies of the use of probiotics for enhancing oral health were for the treatment of periodontal inflammation. Lactobacillus reuteri brought about a significant reduction in gingivitis in a study done by Krasse et al. The oral administration of tablets containing L. salivarius WB21 was found to be able to decrease the periodontal index and pocket probing depth (PD), specifically in smokers. Various means of probiotic administration and their effects are listed in Table 2.
Table 2: Various means of probiotic administration and their effects
Study | Strain | Means of administration | Effects
Krasse et al. [7] | L. reuteri | Chewing gum | Reduction in gingivitis
Volozhin et al. [8] | L. casei | Periodontal dressing | Reduction of periodontal pathogens
Grudianov et al. [9] | L. salivarius | Tablets 'Acilact' and 'Bifidumbacterin' | Reduction in signs of gingivitis and periodontitis
Pozharitskaia et al. [10] | L. acidophilus | Tablet 'Acilact' | Improvement in clinical parameters and shift in local microbiota towards Gram +ve cocci and lactobacilli
Riccia et al. [11] | L. brevis | Lozenges | Amelioration of periodontitis-associated signs and symptoms

Observed effects on caries and caries-associated microbes

The fact that caries is a bacterially-mediated process has been known for more than 115 years. Currently, it is understood that the host, bacteria and nutrients are all required: fermentation produces organic acids and the subsequent demineralization. According to this model, all three elements must be present to initiate the disease. To overcome the limitations of traditional caries management strategies, the use of probiotics has been tried to treat caries by preventing oral colonization by cariogenic pathogens. Several studies suggest that consumption of products containing lactobacilli or bifidobacteria could reduce the number of Streptococcus mutans in the saliva. In a study by Nase et al., [12] the administration of dairy products containing L. rhamnosus reduced the risk of dental caries and lowered the level of S. mutans in patients after seven months of intake.
Observed effects on oral candida
Only a few studies have reported on the effect of probiotic bacteria on oral Candida infection. In a study by Hatakka et al., consumption of cheese containing L. rhamnosus strains GG and LC705 and Propionibacterium freudenreichii ssp. shermanii JS for 16 weeks reduced the prevalence of high oral yeast counts, but no changes in mucosal lesions were observed.
Observed effects on halitosis
In approximately 90% of the cases of halitosis, the cause is confined to the oral cavity. The probiotic strains studied for the treatment of both mouth- and gut-associated halitosis are E. coli Nissle 1917, S. salivarius K12, Weissella confusa and a lactic acid-forming bacterial mixture.
Conclusion

Several health-promoting effects of probiotics are well documented, but their effect on oral health is not clear. Scientific evidence is poor in this area and their recommendation for oral health purposes is not yet justified. The main hurdle is the development of strains which can resist the oral environmental conditions and can stay there long enough to have an effect. Genetic modification of probiotic strains to suit the oral conditions is thus needed. Systematic studies
|
2018-04-03T00:11:43.775Z
|
2011-01-01T00:00:00.000
|
{
"year": 2011,
"sha1": "2f833c860f2112b8a98e5f5611a0cb65886de23c",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3304224",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ba50058ced6e6f48ac7dba3dbbfe20e5ffc21625",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
225392221
|
pes2o/s2orc
|
v3-fos-license
|
A Crane Robot of Three Axes Dimensional Using Trajectory Planning Method
This study aims to design a crane robot with good performance: good stability, good accuracy of the gripper in clamping the object at its balance point, and reliable movement to the target location. The crane controller is fitted with a US-100 ping sensor and a proximity infrared sensor to detect the position of the object. The robot crane moves on the x, y and z axes, i.e. in three dimensions, using motors as actuators whose speed can be adjusted through motor drivers. The crane moves on the x and y axes using DC motors and on the z axis using a servo motor. The crane moves automatically when it detects an object. The crane's movement uses a trajectory-planning method that maintains a set speed. The average accuracy of the gripper clamping exactly at the midpoint of the object is 93%. The length of the object when it is clamped has an accuracy of 95%. In the performance evaluation, transferring an object to the destination location takes 11 seconds over a track length of 86.055 cm.
Introduction
Cranes are machines that are used for transporting heavy loads or hazardous materials from one place to another. Cranes can be controlled using several approaches, and their operation usually involves gripping, lifting and transporting the load, then lowering and ungripping it [1]. Cranes are widely used to move heavy objects from one place to another, not only in the manufacturing industry but also in the service industry. Many cranes are used to move objects in the service industry, for example in port container terminals, port terminals, warehouses and repair services [2]. Among these kinds of cranes, the overhead crane is the most representative and commonly used type [3].
A crane consists of a hoisting mechanism and a support mechanism, and cranes can be classified based on the degrees of freedom that the support mechanism offers at the suspension point [4]. As important transportation equipment, cranes are successfully applied in diverse fields for the transportation of heavy cargoes [5]. In practice, each crane model is controlled with a dedicated control algorithm that cannot be modified, accessed, or replaced at runtime [6]. Many industries rely on cranes for efficiently executing storage and retrieval operations of goods. Areas of application are, for instance, container logistics in seaports and warehousing operations in automated storage and retrieval systems [7]. Cranes operated at warehouses are an important asset for many industries which have to temporarily store products on their way from manufacturers to consumers. Such warehouses are a necessity, but also a significant operational cost that must be minimized [8].
Automated storage/retrieval systems (AS/RS) are widely used in warehouses and distribution centers around the world [9] and play an important role in improving the performance of automated manufacturing systems, warehouses and distribution centers [10]. The pick-and-place task for a robot with a movable platform is present in most industrial settings; for commercial robots that must navigate a closed environment (for example, a factory), ultrasonic sensors are suitable [11]. Crane systems should be equipped with relevant devices such as actuators and controllers [12]. For conventional overhead cranes, dynamical modeling and control are studied in order to eliminate swing effects and ensure system stability [13]. As a crane is expensive equipment, it is worthwhile to optimize its performance to improve its utility [14]. To achieve efficient crane control and accurate positioning, the speed for each direction is predetermined according to crane system specifications such as maximum speed, acceleration and deceleration [15]. Crane control comprises a position controller and a speed controller [16]. The task of the robot crane is to find a collision-free path from the starting position to the target position in an environment with obstacles [17]. To increase productivity, the overhead crane transports the payload to its destination as fast as possible [18]. Basically, the crane drive system uses motors, and control of crane operations is carried out through motor movement control [19]. There are three basic objectives in designing such a robot: remaining free from collisions, maintaining a constant distance from the wall, and moving smoothly at high speed [20].
This study focuses on gripper accuracy and on transferring objects precisely to the destination. The gripper must clamp the object exactly at its balance point so that the object does not fall when lifted. To achieve these objectives, devices such as an ultrasonic sensor, an infrared sensor and a rotary encoder are used. The object distance is obtained by comparing ultrasonic sensor data with photoelectric infrared sensor data. A servo motor acting as the gripper actuator pinches the object once the x and y axes are in the correct position, and the control system uses a microcontroller as the central controller. The crane robot is designed to work automatically, moving an object from one place to another and returning to the starting point after delivering it.
Research Methodology
The research method consists of designing hardware and software; both designs are explained in this section.
Design of hardware
The crane robot block diagram is shown in Figure 1.
Figure 1. Block diagram input and output control system
The microcontroller is the central processor of the device: it coordinates all subsystems and manages all input/output activities [21], [22]. Based on the block diagram in Figure 1, the crane robot control system consists of an input block, a control block and an output block. The input block consists of the ping sensor, photoelectric infrared sensor, rotary encoder, limit switch, toggle switch and push button. The controller block uses an Arduino ATmega 2560. The output block consists of the motor drivers, LCD and LEDs.
This project uses the US-100 high-precision ultrasonic range sensor, which can detect objects at distances from 2 cm to 450 cm with a precision of up to 3 mm and accepts a wide input voltage range of 2.4 V to 4.5 V. To obtain a distance measurement, the Trig/TX pin is set high for at least 50 microseconds and then set low to trigger the measurement. The project also uses the E18-D80NK proximity sensor/switch, an easy-to-use infrared sensor with a long detection distance and low interference from visible light. Its specifications are an input voltage of +5 V DC, a current consumption of 25 mA (min) to 100 mA (max), and a sensing range of 3 cm to 80 cm (depending on the obstacle surface).
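The distance follows from the round-trip time of the ultrasonic pulse. The following minimal Python sketch (illustrative only, not the authors' Arduino firmware; the example echo time is hypothetical) shows the conversion from echo time to distance using the speed of sound:

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # approximate speed of sound in air at ~20 degrees C

def echo_time_to_distance_cm(echo_time_s: float) -> float:
    """Convert a round-trip ultrasonic echo time (seconds) to a one-way distance (cm)."""
    return echo_time_s * SPEED_OF_SOUND_CM_PER_S / 2.0

# Example: an echo lasting 1.75 ms corresponds to roughly 30 cm
print(round(echo_time_to_distance_cm(0.00175), 1))  # -> 30.0
```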
The robot movement
The movement of the robot crane is analyzed to determine how much distance is obtained for a given number of rotations of the drive motors. The object transfer scenario moves the object from point (30,30) to point (60,60), with the crane starting from point (0,0). When the crane has detected the position of the object, it picks it up through two motions: first a move along the x axis and then a move along the y axis. After delivering the object to the destination location, the robot returns directly to the starting point (0,0). The robot thus moves through the points (0,0), (30,0), (30,30) and finally the destination point (60,60); the gripper picks up and carries the object exactly at point (30,30). On the return path the robot moves straight from point (60,60) back to the starting position (0,0). The distance between the robot crane and the position of the object can be calculated using the following triangle (Pythagorean) formula.
r = √(x² + y²)   (1)

Based on Equation 1, r is the actual (straight-line) distance from the robot crane to the object. The movement of the robot crane from point (0,0) to (30,30) is then not like the axis-by-axis motion used when detecting the initial load, but follows a direct straight line.
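As an illustration of this calculation, the short Python sketch below (not part of the authors' firmware) computes the straight-line distance between the crane and the object for the coordinates used in the scenario:

```python
import math

def straight_line_distance(x: float, y: float) -> float:
    """Actual distance r from the crane origin to an object at (x, y), per Equation 1."""
    return math.hypot(x, y)

# Object detected at (30, 30): r = sqrt(30^2 + 30^2) ~= 42.43 cm
print(round(straight_line_distance(30, 30), 2))
```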
Flow chart of control program
The robot crane operates once the power supply provides voltage to the microcontroller and motor drivers. In the first stage, the crane drives the DC motors in sequence from the x axis to the y axis. When the start button is pressed, the x-axis DC motor moves while the ping sensor mounted on the x axis emits ultrasonic waves; its reading is compared with the point obtained by the photoelectric infrared sensor to calculate the x-axis position. When the load has been detected by the photoelectric infrared sensor, the rotary encoder on the x-axis motor calculates the traveled distance and compares it with the ping-sensor distance. Once the load position is determined correctly, the gripper moves down to take the load and lift it. After the item is raised, the gripper motor, the x-axis motor and the y-axis motor move together to carry the item to the set position, and then the motors return to the starting position. The flow chart in Figure 3 generally consists of two steps, summarized in the sketch below: in the first step the load is detected and the sensors send data to the microcontroller to be processed; in the second step the microcontroller commands the motors to move to the load. Once the gripper is just above the load, the load is moved to the specified location, and the gripper returns to the starting position after delivering it. The robot crane is designed to move in three dimensions (x, y and z). Movement along the x and y axes uses DC motors and the z axis uses a servo motor, so in total four actuators are used in this project: two DC motors for the x and y axes, and two servo motors for the z axis and the gripper.
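A minimal Python-style sketch of this two-step control flow is given below. It is only an illustration of the sequence described above; the helper callables (detect_load, move_to, grip, release) are hypothetical placeholders, not the authors' Arduino code.

```python
def pick_and_place_cycle(detect_load, move_to, grip, release,
                         home=(0, 0), drop_off=(60, 60)):
    """One pick-and-place cycle: detect the load, fetch it, deliver it, return home."""
    load_position = detect_load()   # Step 1: sensors report the load position, e.g. (30, 30)
    move_to(load_position)          # Step 2: move above the load ...
    grip()                          # ... then grip and lift it
    move_to(drop_off)               # carry the load to the drop-off point
    release()                       # lower and release the load
    move_to(home)                   # return to the starting position

# Trivial stand-ins so the sketch runs; on the real crane these would drive the motors.
pick_and_place_cycle(detect_load=lambda: (30, 30),
                     move_to=lambda p: print("moving to", p),
                     grip=lambda: print("gripping"),
                     release=lambda: print("releasing"))
```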
Results and Discussions
Testing of the control system is needed to determine the crane robot's performance. The testing method is to measure and record the experimental results.
Testing the relationship between current and the load
The power supply circuit was tested to determine whether it can provide adequate voltage and current to the control system and the microcontroller outputs while protecting the device from over-current. The energy source supplying the control system and the loads is 12 V with a 3 A current rating. Figure 6 shows the relationship between current and the number of connected loads: the current increases as the number of loads increases, and a similar trend appears in the power consumption test shown in Figure 7. The loads consist of the microcontroller, sensors, LED, LCD, motor driver, DC motors, servo motor and gripper. Based on the test results, the supply voltage remains stable while the connected systems are active, and no voltage drop occurs when the system is turned on. The current increases when all loads are activated, but the increase is not significant. It can therefore be concluded that the power supply is in good condition and suitable for use.
Gripper accuracy test results in clamping the midpoint of the object
Gripper accuracy describes how precisely the gripper can pinch the midpoint of the object. The goal is to maintain balance so that the object does not fall when the gripper lifts it. Based on Figure 6, the object has a length of 8 cm and a height of 6 cm, so the specified midpoint is 4 cm along the object's length. To determine the gripper's accuracy in pinching the midpoint of the object, the experiment was carried out 5 times; the results are shown in Table 1.
Gripper accuracy test results for picking and lifting objects
An important aspect of this robot crane is the up-and-down movement of the gripper to pick up and lift objects. The gripper used in this project has a maximum clamp opening of 55 mm, a total clamp length of 108 mm, and a total clamp width of 98 mm when the clamp is open.
Figure 9. Aluminium alloy robotic claw
The rotary encoder on the gripper axis is tested so that the gripper does not hit the object during the downward movement when taking an object. The motor on the z axis raises and lowers the gripper, and the encoder is needed to calculate how far the gripper has descended so that it does not collide with the object. Based on Figure 9, the gripper travels 3 cm to clamp the object, so the total up-and-down travel is 6 cm. Table 2 shows the results of the distance measurements against the 6 cm target. Reaching a distance of 1 cm requires 8 rotations, so 50 rotations correspond to about 6 cm. Based on the data in Table 2, the gripper travel for the up-and-down movement compared to the target distance is calculated as follows.
accuracy = (measured gripper travel / target distance) × 100% = 96%
Accuracy is affected by number rounding, the gripper's grip, and the object dimensions. The experimental results in Table 2 also show that the gripper maintained a safe distance when taking the object and did not hit it, because the gripper has a total length of about 10.8 cm.
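A short Python sketch of this conversion and accuracy check is given below (illustrative only; the 8 rotations-per-centimetre figure is taken from the text, and the example measured travel is hypothetical):

```python
ROTATIONS_PER_CM = 8  # from the encoder calibration reported in the text

def rotations_to_distance_cm(rotations: float) -> float:
    """Convert encoder rotations on the z axis into gripper travel in centimetres."""
    return rotations / ROTATIONS_PER_CM

def travel_accuracy_percent(measured_cm: float, target_cm: float) -> float:
    """Accuracy of the measured gripper travel relative to the target distance."""
    return measured_cm / target_cm * 100.0

# Example: 50 rotations correspond to 6.25 cm of travel;
# a hypothetical measured travel of 5.76 cm against the 6 cm target gives 96%.
print(rotations_to_distance_cm(50))            # -> 6.25
print(travel_accuracy_percent(5.76, 6.0))      # -> 96.0
```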
Testing the movement of the robot from point (0,0) to point (60,40)
The mechanical design in Figure 10 has a length of 60 cm and a width of 40 cm. Figure 10 shows the movement of the robot crane from detecting the object until delivering it to the final location. Here q is the length of the robot's trajectory for transferring the object from (30,20) to point (60,40), and the total length of the robot's trajectory to deliver the object is 86.055 cm. The time needed to transfer an object over this 86.055 cm path was measured and is shown in Table 3. The loading process is the process of the gripper going down until it lifts the object; it takes 2.06 seconds before the object is transferred to the destination location, and the unloading process likewise takes 2.06 seconds. In total, the time taken by the robot crane to travel 86.055 cm from point (0,0) to point (60,40) is 11 seconds, so the average speed of the robot is calculated as follows.
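The sketch below illustrates how the reported track lengths can be reproduced in Python, assuming (as the coordinates in the text suggest) that the crane first reaches the object at (30,20) by axis-by-axis moves and then travels the straight segment q to (60,40):

```python
import math

def delivery_path_length_cm() -> float:
    """Delivery path: axis-by-axis moves to the object at (30, 20), then straight to (60, 40)."""
    detection_leg = 30 + 20                  # moves from (0, 0) along x then y to (30, 20)
    q = math.hypot(60 - 30, 40 - 20)         # straight segment from (30, 20) to (60, 40)
    return detection_leg + q

def return_path_length_cm() -> float:
    """Return path: straight line from (60, 40) back to the origin (0, 0)."""
    return math.hypot(60, 40)

print(round(delivery_path_length_cm(), 3))   # -> 86.056, reported in the text as 86.055 cm
print(round(return_path_length_cm(), 2))     # -> 72.11, matching the reported return track
```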
Conclusions
Based on the data and analysis, the robot performs well with good precision and accuracy. The gripper is able to clamp objects at the midpoint with an accuracy of 93%, and the measured object length at clamping has an accuracy of 95%. The robot is also highly stable when carrying objects to the destination location; the gripper lifts right in the middle of the object so that it does not fall when raised. The average speed of the robot when delivering an object over a track distance of 86.055 cm is 7.2 cm/s, while returning to base takes 4.98 seconds over a track length of 72.11 cm. In further research, the crane robot can be developed to handle obstacles, to support an internet connection so that it can be controlled from a mobile application, and to use more precise distance-detection sensors.
A simple and practical intraoperative ventilation technique for uniportal video-assisted thoracoscopic tracheal reconstruction: a case report
Background Cross-field endotracheal intubation is typically performed during tracheal anastomosis to maintain single-lung ventilation. To minimize obstruction of the surgical field by the cross-field tube, special equipment such as high-frequency jet ventilation (HFJV) and extracorporeal membrane oxygenation (ECMO) or advanced techniques such as non-intubated ventilation have been proposed. Here, we describe a simple and practical airway management strategy that requires only conventional ventilators and techniques. Our operation is completed under uniportal video-assisted thoracoscopic surgery (VATS). Case Description We report a case of tracheal adenoid cystic carcinoma (ACC) presenting with cough with bloody sputum in a 53-year-old man. Computed tomography (CT) and flexible bronchoscopy revealed an irregular polypoid neoplasm attached to the right wall of the distal trachea, which almost completely blocked the tracheal lumen. To relieve the symptoms, transbronchoscopic resection of the tumor, followed by curative resection via uniportal VATS under general anesthesia was performed. To maintain single-lung ventilation during tracheal reconstruction, we took advantage of a thin suction tube [internal diameter (ID) 3 mm; external diameter (ED) 4 mm], which was connected to a conventional ventilator. Specifically, by introducing the suction tube into the distal left main bronchus through the endotracheal tube and blowing 100% oxygen, we achieved satisfactory oxygenation throughout the anastomotic process; and the blood CO2 partial pressure was also acceptable. The view of the anastomotic site was far less obstructed owing to the small diameter of the suction tube, and the anastomotic process was smooth and accurate. Postoperative recovery was good, and no stenosis of the reconstructed trachea was observed at the 3-month follow-up. Conclusions Our technique proves to be safe and feasible for selected patients with tracheal tumors, and can be a practical choice for medical centers that are not equipped with HFJV or ECMO.
Introduction
Primary tracheal tumors are a rare disease with an estimated incidence of 2.6 cases per 1,000,000 patients per year (1). Surgical resection is the only curative choice for these patients and is typically performed via thoracotomy (2). With advances in surgical techniques and minimally invasive instruments, tracheal resection followed by endto-end anastomosis with video assistance through multiple small incisions, or even a single small incision, has become feasible (3,4). For both open and thoracoscopic surgeries, a clear surgical field is important for performing tracheal anastomosis. However, cross-field intubation, which is typically used to maintain ventilation during the procedure, can interfere (5). This is particularly true when a uniportal approach is applied. Herein, we describe a simple and practical airway management strategy that provides satisfactory intraoperative ventilation without disturbing the surgical field during uniportal thoracoscopic tracheal resection with reconstruction. We present the following case in accordance with the CARE reporting checklist (available at https://atm.amegroups.com/article/view/10.21037/atm-21-6215/rc)
Case presentation
A 53-year-old man with no history of smoking presented to our hospital with a 3-month history of hemoptysis. Enhanced chest computed tomography (CT) showed a 1.9 cm × 1.1 cm mass located in the distal trachea, with a clear margin and slight heterogeneous enhancement ( Figure 1A,1B). The right wall of the trachea was thickened, where the mass was attached. Multiple left tracheoesophageal groove lymph nodes were noted, with the largest having a diameter of 0.6 cm. No enlarged mediastinal or hilar lymph nodes were observed. Enhanced CT of the maxillofacial and abdominal organs showed no abnormalities. Brain magnetic resonance imaging and whole-body bone scans did not reveal any suspicious lesions. A flexible bronchoscopy showed that there was an irregular polypoid neoplasm attached to the right wall of the trachea, about 2.5-3.5 cm above the carina, and the tumor almost completely blocked the tracheal lumen ( Figure 1C). Pulmonary function examination revealed a moderately restrictive spirometric pattern; the forced expiratory volume in the first second (FEV 1 ) was 2.64 L (71.1% of the predicted value). Physical examination and routine laboratory tests revealed no abnormalities. To alleviate the patient's respiratory symptoms, transbronchoscopic resection of the tumor was performed using an electrosurgical snare ( Figure 1D). The pathological results were positive for adenoid cystic carcinoma (ACC). Since tracheal wall invasion and regional lymph node metastasis could not be excluded, we decided to perform tracheal segment resection with end-to-end anastomosis using video-assisted thoracoscopic surgery (VATS) via a single incision. All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee(s) and with the Helsinki Declaration (as revised in 2013). Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the editorial office of this journal.
After the placement of American Society of Anesthesiologists (ASA) standard monitors and a pre-induction arterial line, general anesthesia was gradually induced with 0.05 mg/kg midazolam, 2 mg/kg propofol, 0.3 µg/kg sufentanil and 0.6 mg/kg rocuronium and maintained with 4-8 mg/(kg·h) propofol and 0.05-2 µg/(kg·min) remifentanil to keep the bispectral index (BIS) between 40 and 60. The muscle relaxant, rocuronium, was administered intermittently under neuromuscular blockade monitoring. A single-lumen endotracheal tube combined with a bronchial blocker was used for one-lung ventilation (OLV) and provided a satisfactory oxygen supply prior to tracheotomy. The patient was placed in the left lateral decubitus position. A single incision of approximately 4 cm was made in the 4th intercostal space along the right midaxillary line and was protected with a wound protector. Thoracic examination revealed dense pleural adhesions in the chest cavity. After sharp dissection of the adhesions with electrocautery, the right lung gradually collapsed. The mediastinal pleura was then cut open, and the trachea from the level of the suprasternal notch to that of the carina was dissociated. Care was taken to protect the bronchial arteries and vagus nerves. Mediastinal lymph nodes, including the subcarinal, paratracheal, retrotracheal, and right tracheal-bronchial lymph nodes, were dissected. Intraoperative frozen-section analysis confirmed that the left tracheoesophageal lymph nodes were negative for tumor cells. The inferior pulmonary ligaments were divided. The azygos vein was dissected and transected with a linear stapler. The right vagus nerve was isolated and suspended for protection, whereas the distal trachea was looped with sterile cotton tape for traction. Since the tumor had been removed preoperatively, the resection segment was confirmed using bronchofiberscopy during surgery.
Immediately before the tracheotomy, the distal tip of the endotracheal tube was retracted to the upper trachea. Simultaneously, the bronchial blocker was withdrawn from the trachea. After the distal end of the trachea, which was approximately 1.0 cm away from the base of the resected tumor, was transected, a sterile cuffed single-lumen tracheal tube [internal diameter (ID) 6.0 mm] was placed into the distal left main bronchus via the thoracic cavity. Subsequently, cross-field OLV was initiated. Meanwhile, the patient was placed in a 15° Trendelenburg position to prevent oozing blood from flowing distally into the tracheobronchial tree. The trachea was transected at the proximal end of the tumor to completely remove the tracheal segment. The resected segment was about 2 cm in length. Both surgical margins were confirmed negative by intraoperative frozen-section analysis. Once the preparation for anastomosis was completed, the cross-field tube was withdrawn, and a thin single-lumen suction tube [ID 3 mm; external diameter (ED) 4 mm] (Figure 2A) that was connected to the breathing circuit of the anesthesia machine (Figure 2B) was inserted through the endotracheal tube to the proximal end of the tracheostomy site (Figure 2C). The breathing mode of the anesthesia machine was set to manual, and the limit pressure of the APL valve was set to 25 cmH2O. Subsequently, 100% oxygen was continuously blown in, and the fresh gas flow was set to be greater than 5 L/min. This maintained the fingertip blood oxygen saturation (SaO2) above 95% for approximately 5 min; when SaO2 dropped below 95%, the suction tube was further inserted to the distal end of the tracheotomy site and the distal tip was placed approximately 2 cm away from the opening of the left bronchus under thoracoscopic guidance (Figure 2D). Anastomosis was performed using a continuous 3-0 Prolene running suture from the posterior to anterior region. Starting from the left joint of the membranous and cartilaginous parts, the left half of the trachea was first closed, followed by the right half (Figure 2E). Without interference from the cross-field tube, the entire end-to-end anastomosis process was smooth and accurate, and the time for tracheal reconstruction was approximately 30 min. During the anastomotic phase, the aforementioned cross-field tube was prepared on the operating table for rescue use. Indeed, the fingertip blood SaO2 of the patient was consistently above 96%, and the blood gas analysis of the patient without ventilation during tracheal anastomosis was acceptable (Table 1). Once anastomosis was completed, the endotracheal tube was repositioned 21 cm away from the incisors. The chest incision was closed after it was confirmed that there was no active intrathoracic bleeding or air leakage, and a 15-Fr chest tube was placed posterior and toward the apex of the thoracic cavity through the incision for drainage. The patient's vital signs remained stable throughout the surgery. After surgery, the endotracheal tube was removed in the operating room and spontaneous breathing was fully restored. This operation took 335 min (117 min were spent dissecting the pleural adhesions) with approximately 100 mL of blood loss. Chest radiography on postoperative day 1 showed that the right lung was well re-expanded with no sign of pneumothorax. The patient occasionally presented with cough and sputum production. He did not have any significant symptoms until he complained of shortness of breath and fever on postoperative day 5.
Chest radiography suggested pleural effusion. To facilitate drainage, the chest tube placed during surgery was withdrawn and replaced by a 7-Fr drainage catheter, which was inserted through the 6th intercostal space along the scapular line under ultrasound guidance. Approximately 500 mL of serous pleural fluid was drained, and his body temperature gradually returned to normal. The catheter was removed on postoperative day 14 and the patient was discharged the next day. Pathological results confirmed tracheal ACC without lymph node involvement. According to the patient, the postoperative recovery was satisfactory during the 3-month follow-up. The patient did not visit our hospital for further consultation because of his economic status. CT scans of the reconstructed trachea performed at a local hospital showed no stenosis (Figure 2F) and no signs of local tumor recurrence.
Discussion
Tumors originating in the trachea are relatively uncommon. Surgery is typically the treatment of choice and is performed to resect the tumor and restore the airway by end-to-end anastomosis. However, this can be a challenging procedure even when performed under open thoracotomy for majority of medical centers because of the high rates of postoperative morbidity and mortality (6).
Airway management and reconstruction, as the key and most difficult issue of tracheal surgery, require careful coordination between the surgical and anesthesia teams. Several ventilation techniques have been developed to provide a sufficient oxygenation for the patient during airway excision and anastomosis. One of the classic methods is cross-field ventilation (5), in which a sterile endotracheal tube is inserted into the distal trachea by the surgeon once the trachea has been transected to directly ventilate a single lung. The major advantage is that it allows for unrestricted positive pressure ventilation and provides aspiration protection throughout the procedure. In our case, the cross-field tube was also used after the distal trachea was transected before anastomosis and for rescue purpose during anastomosis. However, the disadvantage is also explicit in that the cross-field tube can obstruct the view of the reconstruction site, requiring periodic retraction of the tube during anastomosis to improve exposure.
To minimize the obstruction of the cross-field tube to the surgical field, we used a tube with a smaller diameter to sustain the OLV. SaO 2 was maintained above 98% and the blood CO 2 partial pressure was also acceptable. Because of the small diameter of the suction tube, the view of the anastomotic site is far less obstructed, the process of anastomosis is smooth and accurate, and no retraction of the tube is needed. Indeed, a similar technique was proposed over a decade ago by Macchiarini (7). To maintain hyperoxygenation during anastomosis, a 10-F catheter was introduced into the contralateral main bronchus across the surgical field in that study. In our case, the small tube was introduced through the endotracheal tube, and tracheal reconstruction was accomplished with a single surgical port on the chest wall, minimizing trauma to the patient.
Another conventional technique is high-frequency jet ventilation (HFJV). Similar to our strategy, HFJV also uses a small-diameter catheter for ventilation; however, it requires special equipment and has potential risks of air trapping and barotrauma (8,9). It has been reported that HFJV use could be a risk factor for the development of acute respiratory distress syndrome (ARDS) (10).
Recently, the non-intubation technique has been successfully used in tracheal surgery (11) in which regional anesthesia is typically used and a spontaneous single-lung breathing status is maintained. Compared with intubated general anesthesia, a completely open surgical field is achieved, and early results suggest a faster postoperative recovery and lower overall complication rate (12). However, it should be noted that not all patients are suitable for non- intubated surgery and advanced anesthetic techniques, and that extensive experience in airway management is required to control the considerable physiological derangements during the procedure. In addition, concerns such as cough control, distal airway protection, and inability to perform air-leak tests should also be addressed (12). Cardiopulmonary bypass (CPB) and extracorporeal membrane oxygenation (ECMO) are alternative, but more invasive approaches for tracheal reconstruction (13). They provide effective respiratory support and hemodynamic stability and the surgical site is unaffected. However, they require special equipment and are associated with potential complications and high cost (5).
Although our case proves safe and feasible, the results of our study should be interpreted with caution. First, switching the anesthesia machine to the manual mode can increase the risk of hypercapnia. In our case, the blood CO 2 partial pressure increased to 78 mmHg when the tracheal anastomosis was completed within 30 min. If more time is spent on this procedure, cross-field intubation may be applied to manage hypercapnic acidosis. Second, because the 3-way suction tube was not designed to be connected to the breathing circuit, their calibers were not matched. This might have affected the connection stability, thus affecting the continuity of ventilation. We used medical tape as a temporary solution to address the discrepancy in the caliber of the two ends ( Figure 2B). Third, the flexibility of the plastic suction tube to some extent increases the difficulty of manipulation, particularly inside the bronchus. Furthermore, the single-lumen suction tube failed to provide aspiration protection. To ensure safety, rescue techniques should always be prepared for emergency use. Finally, the results of only one case are not adequate to prove its safety and feasibility for all tracheal tumors. For now, we can only speculate that this technique can be safely applied to selected patients with ASA I-II, and more cases are needed to optimize the procedure and verify the reliability and effectiveness of this technique.
Conclusions
In conclusion, our proposed ventilation strategy for tracheal reconstruction using uniportal VATS is simple and feasible. It provides satisfactory oxygenation without significantly disturbing the surgical field and should be a good practice for both thoracic surgeons and anesthesiologists with experience in intraoperative ventilation management in tracheal surgery. Moreover, for medical centers that are not equipped with HFJV or ECMO, our method can be a practical choice.
Diagnosis of a recurred lesion in dermatophytosis patients after 2 weeks of antifungal therapy: A prospective observational study
ABSTRACT Few researchers believe that various risk factors may complicate the course of dermatophytosis and/or develop various dermatoses unrelated to fungal infection at the previous lesion site. However, there is a paucity of studies that analyzed the diagnosis of lesions that recurred at the treated site of dermatophytosis. Materials and Methods: A prospective observational study was conducted on 157 cases of dermatophytosis with positive fungal test results. A fixed dose of 100 mg of oral itraconazole once daily was administered to all patients for 2 weeks. At the end of 2 weeks, patients were assessed for clinical cure and recurrence. Recurred cases were assessed for mycological profile using a fungal test (potassium hydroxide mount and/or fungal culture) for identifying fungal infection. Results: Only eight (5.36%) patients showed clinical cure, and 141 (94.63%) patients developed recurrence after therapy. Of the 141 cases with recurrence, only 47 (33.33%) patients were positive for fungus. Eight (5.09%) patients were lost to follow-up. Frequently encountered risk factors in the study were topical steroid use, disease in family, associated atopic dermatitis and contact with pets. Conclusion: This is the first study that described the clinical diagnosis and mycological profile of the various lesions recurring at the previous tinea infection site in patients with dermatophytosis. Such patients presented not only with recurrent lesions of fungal infection but also developed various dermatoses unrelated to fungal infection at the sites of previous tinea infection. Various factors, which could have resulted in the observed changes, are reinfection by dermatophytes at the sites of previous tinea infection, inadequate antifungal therapy or antifungal resistance; or due to the effects of various topical steroid formulations used by the patients, such as anti-inflammatory or immunosuppressive effects or shift in immunity. Hence, diagnosis of the recurrent lesion at the site of previous dermatophytosis must be individualized and should be based on 1) duration of antifungal therapy received, 2) associated risk factors, 3) response to antifungal therapy, 4) evolution of the recurrent lesion, and/or 5) fungal tests.
Introduction
Diagnosis of the lesions recurring at the treated site of dermatophytosis has become a challenge for physicians.[22] Hence, it is important for physicians to appropriately diagnose the recurrent lesions at the treated site of dermatophytosis prior to initiating therapy.
The present study analyzed the treatment outcome and diagnosis of the recurrent lesions at the treated site of dermatophytosis using fungal tests (potassium hydroxide [KOH] mount and fungal culture).
Study type
A prospective observational study was conducted at a tertiary care hospital in eastern India from June 2017 to May 2018. Patients were enrolled after obtaining ethical approval from the institutional ethics committee (IEC Approval Letter No. IEC-T/IM-F/DERMA/15/25) and prior informed consent from the patients.
Materials and Methods
A total of 157 consecutive patients suffering from dermatophytosis with a positive fungal test, aged above 12 years and of either gender, who had not received topical antifungals in the last 1 month or oral itraconazole in the last 3 months, were enrolled. Patients with comorbidities (diabetes, HIV infection and hypothyroidism), aged above 70 years, on immunosuppressive drugs (corticosteroids and cytotoxic drugs in the last 4 weeks), with associated fungal infection of the nail and hair, and pregnant or lactating women were excluded. Enrolled patients were assessed for risk factors, course of disease (number of recurrences and response to antifungal therapy), duration of the symptom-free period prior to recurrence, total duration of the disease, duration of antifungal drugs received, and morphology of the lesion at baseline using a predesigned proforma.
All enrolled patients were prescribed oral itraconazole 100 mg capsules once daily for 2 weeks.[23] Patients also used topical luliconazole 1% cream twice daily and tablet cetirizine 5 mg daily (maximum 10 mg).
Sample size
With the anticipated frequency of the outcome factor in the population taken as 50% ± 5%, a population size of 1,000,000, a confidence level of 80% and a design effect of 1, a sample size of 165 patients was estimated.
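As a rough check, the following Python sketch reproduces this estimate using the standard finite-population sample-size formula for a proportion (the exact formula used by the authors is not stated, so this OpenEpi-style expression is an assumption):

```python
import math
from statistics import NormalDist

def sample_size(p=0.5, d=0.05, population=1_000_000, confidence=0.80, deff=1.0):
    """Finite-population sample size for estimating a proportion p within +/- d."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # two-sided critical value
    numerator = deff * population * p * (1 - p)
    denominator = (d**2 / z**2) * (population - 1) + p * (1 - p)
    return math.ceil(numerator / denominator)

print(sample_size())  # -> 165
```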
Statistical analysis
The statistical analysis was done in two parts. The descriptive data were expressed as frequency or proportion. The numerical or quantitative data were expressed as mean ± standard deviation and median. The Chi-square test was used to test statistical significance between the groups.
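For illustration, a chi-square test of independence on a 2×2 table can be run as below; the counts shown are hypothetical placeholders, not data from this study:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = risk factor present/absent,
# columns = fungal test positive/negative
table = [[30, 70],
         [17, 24]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p_value, 3), dof)
```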
Result
Eight hundred and seventy-three patients with dermatophytosis who attended the dermatology OPD during the study period were screened. Of these patients, one hundred and fifty-seven were enrolled. [Table 2] Other features observed in the study patients at baseline and in the recurrent lesions at the end of 2 weeks are listed in Table 4; these included a polycyclic border, eczematous changes, smaller annular plaques over the postinflammatory hyperpigmentation of previous tinea lesions, etc. [Table 4] [Figures 1-4] The most commonly isolated fungal species at baseline and in the recurrent lesions was Trichophyton mentagrophyte. One hundred and fifty-seven patients received oral itraconazole 100 mg once daily for 2 weeks. Eight (5.09%) patients were excluded from assessment (six patients deviated from the treatment protocol and two patients did not return for follow-up). In total, 149 (94.90%) patients were assessed at the end of 2 weeks. At the end of therapy, only eight (5.36%) patients had clinical cure, and one hundred and forty-one (94.63%) patients had recurrence. Recurrent lesions were due to a partial response to therapy in 60 (42.55%) patients, recurrence after a short period of clinical improvement while on therapy in 63 (44.68%) patients, and aggravation of symptoms in 18 (12.76%) patients. The mycological profile of the recurrent lesions at the treated site of dermatophytosis revealed that 47 (33.33%) patients were positive for fungus and 94 (66.66%) were negative. [Table 5] Morphological analysis of the recurrent lesions after 2 weeks of itraconazole revealed that 110 (78.01%) patients did not have the clinical morphology of tinea infection, as the initial erythematous border of the tinea lesions recorded at baseline had disappeared. [Table 4]
Discussion
In the present study, the average age of enrolled patients was 32.26 years (range 12-61 years); a similar age range was reported in various recent studies from India.[2,5,16,17,24] The risk factors observed were topical steroid use, disease in family members, atopic dermatitis, and contact with pets. Among the patients with recurrent lesions, 66.66% were fungal test negative (i.e., developed dermatoses unrelated to fungal infection) and 33.33% had a fungal infection. Hence, it is likely that the risk factors in the present study complicated the fungal infection and/or led to dermatoses unrelated to fungal infection.[8,9,10,11,16,17] Recently, a few researchers reported a lack of correlation between clinical cure and antifungal susceptibility testing. Similarly, there was no correlation between the recurrent lesions at the treated site of fungal infection and the fungal test (i.e., the treated tinea site developed adverse effects of prolonged topical steroid use, namely "topical steroid withdrawal syndrome" [16], and in atopic dermatitis patients the tinea infection evoked the development of post-traumatic/chronic eczema [17,22]). In the present study of dermatophytosis, a) clinical cure (5.36%) did not correlate with morphological cure (i.e., patients who lost the initial fungal border after therapy, 78.01%), and b) not all of the 141 (94.63%) recurred patients had fungal infection (i.e., 47 (33.33%) patients had fungal infection (test positive for fungus) and 94 (66.66%) patients had dermatoses unrelated to fungal infection (test negative for fungus)). This further supports the hypothesis that recurrent lesions at the treated site of dermatophytosis may result from more than one risk factor complicating tinea and altering the course and morphology of the fungal infection, and/or from the development of dermatoses unrelated to fungal infection at the treated tinea site, such as topical steroid withdrawal syndrome and post-traumatic eczema.[6,8,9,11,12,16,17,19,24,25] Most of the recent studies from India have reported a rising trend of Trichophyton mentagrophyte (T. mentagrophyte).[28] In the present study, the commonest fungal isolate was T. mentagrophyte both at baseline and in the recurrent lesions. Patients were not assessed for migration; however, the authors hypothesize that the high prevalence of T. mentagrophyte in the study may have resulted from chronic disease in the index case and transmission to close contacts, not vice versa, as very few patients developed infection at new (previously uninvolved) sites. [Table 4] Diagnosis of fungal infection of the skin is mostly clinical (characterized by an annular plaque, a relatively clear center, and peripheral spread with a continuous thin trailing scaly border).[29]
However, there is a lack of consensus on the characteristic morphology of recurrent dermatophytosis; hence, it often poses a diagnostic challenge to differentiate fungal infection from dermatoses unrelated to fungal infection. In the present study, we observed a wide range of changes: frequent recurrences, lesions recurring with diverse morphology, lesions recurring after varying symptom-free intervals, and variable response to antifungal therapy among the patients who recurred after 2 weeks of itraconazole therapy. [Tables 2 and 4] A few patients gave an interesting history of recurrent lesions (macules and papules) evolving into annular plaques over 1-2 weeks [Figure 1], and examination revealed that a few patients had a predilection for fungal infection at frictional sites. [Figure 2] A similar increased predilection of fungal infection for frictional sites was reported by others [17]; however, they did not elaborate on the cause of this predilection.
From the present study, the authors hypothesize, first, that the lesions that evolved into annular plaques on recurrence may represent fungal infection that had invaded the deep dermis following local immunosuppression induced by topical steroid use and acquired a classical morphology on recurring at the skin surface.[29] Second, the predilection of fungal infection for frictional sites may be because the skin barrier defect in atopic patients is aggravated by friction from tight clothing, which facilitates the binding of fungal arthroconidia to keratinized tissue and results in increased fungal infection at the frictional site.[30]

Limitation
Fungal isolates from the recurrent lesions were not tested for antifungal susceptibility.
Conclusion
This is the first study to identify that the lesions recurring at the treated site of dermatophytosis were both fungal infection and dermatoses unrelated to fungal infection. Their development in the present study may be due to more than one risk factor altering or complicating the fungal infection and/or to the different properties of those risk factors, namely: a) the anti-inflammatory and immunosuppressive properties of topical steroids altered the fungal infection, b) the shift in immunity in atopic dermatitis prolonged the disease course of the fungus, c) prolonged topical steroid use produced the unique adverse effect "topical steroid withdrawal syndrome", d) cutaneous trauma by fungal infection on atopic dermatitis skin led to post-traumatic eczema, e) disease in family members re-infected the treated tinea site, f) antifungal therapy was inadequate, g) the inherent skin barrier defect of atopic dermatitis facilitated adhesion of the fungus to keratinized tissue, and/or h) the fungal infection developed drug resistance. Hence, the diagnosis of a recurred lesion in adequately treated dermatophytosis must be individualized and should be based on 1) the duration of antifungal therapy received, 2) associated risk factors, 3) response to antifungal therapy, 4) evolution of the recurrent lesion, and/or 5) fungal tests.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
All India Institute of Medical Sciences, Bhubaneswar.
Conflicts of interest
There are no conflicts of interest.
Flow chart: Flow chart describing the recruitment of cases. Excluded from assessment (dropped out), n = 8 (topical steroid use in two, mixed cream use in four, and two patients did not come for follow-up).

The median number of prior recurrences was 3 (range 1-9). [Table 2] The median duration of prior oral antifungal intake by the patients was 8 weeks (range 2-21 weeks), of which 115 (73.19%) had used oral itraconazole and 12 (8.0%) had used terbinafine, while 134 (89.93%) patients had already received more than 4 weeks of oral antifungal therapy at baseline. The most common risk factors noted in the enrolled patients were topical steroid/mixed cream use, 125 (79.61%), and disease in the family, 103 (65.11%). [Table 3] One hundred and twenty-seven (80.89%) patients had more than one risk factor [i.e., two risk factors in 66 (42.03%) and three risk factors in 18 (11.46%)]. The median duration of topical steroid use was 6 weeks (range 0-30 weeks).
Figure 1: Recurrent lesions at the treated tinea site evolved into an annular plaque in the axilla.
Figure 3: Recurrent papule and erythema at the treated tinea site subsiding with scale.
Figure 4: Recurrent erythema at the treated tinea site subsiding with scale.
MICROBIAL NECROMASS WITHIN AGGREGATES STABILIZES PHYSICALLY-PROTECTED C RESPONSE TO CROPLAND MANAGEMENT
ABSTRACT The interactions of soil microorganisms and soil structure regulate the degradation and stabilization processes of soil organic carbon (SOC). Microbial necromass is a persistent component of SOC, and its magnitude of accumulation depends on management and aggregate size. A meta-analysis of 121 paired measurements was conducted to evaluate the management effects on contributions of microbial necromass to SOC depending on aggregate fractions. Results showed that the contribution of fungal necromass to SOC increased with aggregate size, while bacterial necromass had a higher proportion in silt and clay. Cropland management increased total and fungal necromass in large macroaggregates (47.1% and 45.6%), small macroaggregates (44.0% and 44.2%), and microaggregates (38.9% and 37.6%). Cropland management increased bacterial necromass independent of aggregate fraction size.
INTRODUCTION
Soil organic carbon (SOC), as a key indicator of soil quality, has important functions such as nutrient supply, biodiversity maintenance and climate change mitigation [1,2] .Agricultural soils contain immense carbon pools but these are under considerable threat due to unsustainable cultivation practices [3] .At the global scale, agricultural soils have lost a half to two-thirds of total SOC compared with natural or uncultivated soils [4] .Increasing the potential for agricultural soils to sequester C, therefore, requires appropriate management practices, which are particularly important for agricultural sustainable development and climate change mitigation [5] .
The occlusion of SOC within aggregates is one of the most important physical preservation mechanisms because it places physical barriers between microorganisms, enzymes and their substrates [6]. Because different aggregate-size fractions provide spatially heterogeneous habitats, the distribution of microorganisms and their activities varies among aggregates of different sizes [7,8]. Previous studies have shown that microbial products (e.g., residues or necromass) can enhance the stability of SOC by participating in soil aggregation; in turn, the degree to which microbial necromass accumulates in soil may depend on physical protection [9,10]. Generally, macroaggregates and microaggregates are hierarchically organized by organic matter from plant litter and microbial metabolites, which physically protects necromass from mineralization within aggregates [11]. Microbial-derived C is enriched in the silt-clay fraction and is protected chemically via association with soil minerals [12]. These processes may further affect the distribution of microbial necromass among soil aggregates and are regulated by management practices. Cropland management practices, i.e., nitrogen addition [13], manure [10], straw application [14], no or reduced tillage (NT/RT) [15], and cover crops [16], influence microbial biomass and community composition and, subsequently, the microbial necromass C associated with aggregates. Nonetheless, the magnitudes of change resulting from management effects varied among different assessments. Preferential accumulation of fungal-derived necromass in macroaggregates in response to no tillage has been observed [17]; however, Li et al. showed that bacterial necromass was highest in macroaggregates under conservation management [18]. Considering the significance of aggregates and microbial necromass for the soil C pool, further research is needed to assess the proportions of microbial necromass C within soil aggregate fractions and the overall management effects. It is essential to enhance the mechanistic comprehension of how the interaction between microbial byproducts and soil structure drives physical C stabilization under cropland management and to formulate relevant management strategies. Amino sugars have been widely used to study microbial necromass cycling and storage [10,19]. As many as 26 amino sugars have been identified in soil microorganisms, and various amino sugars are related to specific microbial populations [20]. Glucosamine, galactosamine, mannosamine and muramic acid are the four amino sugars quantified in most studies [21,22]. Muramic acid occurs exclusively in bacterial peptidoglycan, whereas glucosamine is a major component of the fungal cell wall and is also found in bacterial peptidoglycan bonded to muramic acid [23]. Muramic acid and glucosamine have therefore been employed to differentiate between fungal and bacterial necromass. In this study, we collected microbial necromass (or muramic acid and glucosamine) data from soil aggregate fractions reported in experiments with paired management treatments in cropland.
We aimed to answer two questions: (1) How does microbial necromass distribute in different aggregates sizes in response to cropland management?(2) What are the key predictors for the accumulation of microbial necromass within soil aggregate fractions?
Data compilation
The experiments to determine the concentrations of soil microbial necromass or amino sugars within soil aggregates in cropland were found in the Web of Science and China National Knowledge Infrastructure databases.The search terms were "microbial/bacterial/fungal necromass" or "microbial//bacterial/ fungal residues" or "amino sugars" and "aggregates".All literature data were retrieved from peer-reviewed research articles published before October 2022.We focused on proportions of microbial necromass C in soil organic C within soil aggregates and the response of necromass C within aggregates to common cropland management.The following criteria were used to select suitable papers: (1) the experiments must include microbial necromass C or glucosamine and muramic acid within soil aggregates in cropland, excluding studies in grassland and forest ecosystems; (2) the experiment was implemented with a pairwise design, including cropland management and control (without respective amendment or practice); and (3) the means, standard deviation (SD), and sample sizes of the variables were available or could be obtained from the articles; if papers only included amino sugars, SD of microbial necromass was calculated by multiplying the mean by 0.1 [24] .We extracted 121 independent observations from articles that met our criteria.These data covered four aggregates classifications [10] : large macroaggregates (LM; > 2000 μm), small macroaggregates (SM; 250-2000 μm), microaggregates (MA; 53-250 μm), silt and clay (SC; < 53 μm).The management practices with quantitative data, such as manure, straw, no or reduced tillage (NT/RT), and cover crops, were included.Most studies were concentrated in China and North America.
For each study in our data set, the mean values and standard deviations of the amino sugar (glucosamine and muramic acid) contents under cropland management and control were extracted directly from the tables, or were extracted with the free software GETDATA GRAPH DIGITIZER for data presented only in figures. The fungal and bacterial necromass C were calculated from glucosamine and muramic acid, respectively, using conversion formulas based on previously established stoichiometric conversion factors, as reviewed by Liang et al. [25] (see the illustrative sketch below).
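A minimal Python sketch of this conversion is given below. The conversion factors (179.17 g/mol for glucosamine, a factor of 9 for fungal necromass and 45 for bacterial necromass) are the ones commonly attributed to Liang et al. [25]; they are stated here as an assumption since the original equations are not reproduced in the text:

```python
GLUCOSAMINE_MOLAR_MASS = 179.17   # g/mol
MURAMIC_ACID_MOLAR_MASS = 251.23  # g/mol

def fungal_necromass_c(glucosamine_mg_g: float, muramic_acid_mg_g: float) -> float:
    """Fungal necromass C (mg/g soil), assuming a 2:1 GluN:MurA ratio in bacteria and factor 9."""
    glun_mmol = glucosamine_mg_g / GLUCOSAMINE_MOLAR_MASS
    mura_mmol = muramic_acid_mg_g / MURAMIC_ACID_MOLAR_MASS
    return (glun_mmol - 2 * mura_mmol) * GLUCOSAMINE_MOLAR_MASS * 9

def bacterial_necromass_c(muramic_acid_mg_g: float) -> float:
    """Bacterial necromass C (mg/g soil), assuming the conversion factor 45."""
    return muramic_acid_mg_g * 45

def total_necromass_c(glucosamine_mg_g: float, muramic_acid_mg_g: float) -> float:
    return (fungal_necromass_c(glucosamine_mg_g, muramic_acid_mg_g)
            + bacterial_necromass_c(muramic_acid_mg_g))
```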
The total microbial necromass C is the sum of fungal and bacterial necromass C. The ratios of fungal-to-bacterial necromass were used to evaluate the relative accumulation of fungal and bacterial necromass.The corresponding SOC contents within soil aggregates were extracted from the studies to evaluate the contributions of microbial-derived necromass to SOC.
Additionally, we recorded experiment location (e.g., longitude and latitude) and soil properties (initial soil pH, SOC, total N (TN), C/N and clay content).The mean annual temperature (MAT), mean annual precipitation (MAP), and aridity index were also extracted, or when not reported, extracted from the WorldClim database and the Global Aridity and PET database using latitude and longitude information.
Response metrics
A random-effects model was used to evaluate the effects of management on microbial necromass within soil aggregates and their contribution to SOC in cropland [26]. The natural log of the response ratio (RR) was calculated as the effect size representing the management effect:

RR = ln(X_T / X_C)

where X_T and X_C are the means of the management and control groups for variable X, respectively. The variance of RR was calculated as:

v = SD_T^2 / (N_T × X_T^2) + SD_C^2 / (N_C × X_C^2)

where SD_T and SD_C are the standard deviations of the management and control groups, respectively, and N_T and N_C are the sample sizes of the management and control groups, respectively. The mean weighted response ratio (RR++) was calculated from the individual pairwise comparisons between management and control treatments:

RR++ = Σ(w_i × RR_i) / Σ(w_i)

where w_i = 1/v_i is the weighting factor of the ith experiment in the group. To identify significant differences in the effect sizes, the 95% confidence interval was calculated as RR++ ± 1.96 × S, where S = sqrt(1 / Σ w_i) is the standard error of RR++. If the 95% confidence interval did not overlap zero, the management effect significantly affected the target variable. To express the management effects conveniently, the percent change was calculated from the weighted effect size as (e^(RR++) − 1) × 100% for all variables, including total, bacterial, and fungal necromass C; the contributions of total, bacterial, and fungal necromass C to SOC; and the ratio of fungal-to-bacterial necromass C.
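The following Python sketch illustrates these calculations for a small set of hypothetical paired observations (it is not the authors' metafor-based analysis, which additionally accounts for non-independence among observations):

```python
import math

def ln_rr_and_variance(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Log response ratio and its variance for one management-control pair."""
    rr = math.log(mean_t / mean_c)
    v = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return rr, v

def weighted_effect(pairs):
    """Weighted mean lnRR, its 95% CI, and the percent change across studies."""
    effects = [ln_rr_and_variance(*p) for p in pairs]
    weights = [1.0 / v for _, v in effects]
    rr_pp = sum(w * rr for (rr, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (rr_pp - 1.96 * se, rr_pp + 1.96 * se)
    percent_change = (math.exp(rr_pp) - 1.0) * 100.0
    return rr_pp, ci, percent_change

# Hypothetical (mean_t, sd_t, n_t, mean_c, sd_c, n_c) values for three studies
pairs = [(5.2, 0.8, 4, 3.9, 0.7, 4),
         (7.1, 1.1, 3, 5.5, 0.9, 3),
         (4.4, 0.6, 5, 4.0, 0.5, 5)]
print(weighted_effect(pairs))
```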
Statistical analysis
The effect sizes and 95% confidence intervals were calculated using the rma.mv function of the R package "metafor". Between-group heterogeneity and the associated probability were used to describe statistical differences in the responses of microbial necromass to management between different aggregate-size levels. Linear regression was used to examine the relationship between the response ratios of microbial necromass C and the response ratios of SOC within soil aggregate fractions. Pearson correlation, computed with the R package "corrplot", was used to assess the correlations between environmental variables and microbial necromass within soil aggregates; these variables were collected at the treatment plots with cropland management from which the microbial necromass C data within soil aggregates were obtained. Egger's regression test and fail-safe analysis with the Rosenberg method were used to test for publication bias in the studies [27,28]. If P was > 0.05 in Egger's regression test or the coefficient was > 5N + 10 in the fail-safe analysis (N is the sampling size in this study), the effect sizes of the variables were considered robust and the observed pattern indicated no sign of publication bias (Table S1).
Microbial necromass within soil aggregate fractions
The contributions of fungal necromass C to SOC were 47.3%, 46.8%, 44.7% and 34.0% in the LM, SM, MA and SC fractions, respectively (Fig. 1). The contributions of bacterial necromass C to SOC were similar in LM, SM and MA, accounting for 12.1%, 12.7% and 12.2%, respectively. However, bacterial necromass C contributed a larger proportion of SOC in the SC fraction than in the other fractions (Fig. 1).
Response of microbial necromass within soil aggregate fractions to cropland management
Across the data set, total microbial necromass C was not consistently affected by management across aggregate fraction sizes (Fig. 2(a)). Total microbial necromass C increased by 47.1%, 44.0% and 38.9% in LM, SM and MA, respectively, but the management effect was absent in SC. Bacterial and fungal necromass C within soil aggregate fractions responded differently to cropland management. Specifically, cropland management increased bacterial necromass C regardless of aggregate fraction size, whereas the responses of fungal necromass C were contingent on soil aggregates (Table 1); cropland management significantly increased fungal necromass C in LM, SM and MA, with some minor differences, but had no significant effect in SC (Fig. 2(b,c)), consistent with total necromass. The contributions of total and fungal necromass to SOC increased by 10.1% and 13.5% in the SM fraction, respectively (Fig. 2(e,g)).
The microbial necromass C was significantly affected by management type, aggregate fraction and their interaction (Fig. 3). In particular, the responses of microbial necromass to manure application differed significantly between aggregate sizes (Table 1). Manure application increased total microbial necromass C by 50.6% and 43.7% in LM and SM, respectively (Fig. 3(a)). Manure application increased fungal necromass C by 28.6% and 26.6% in LM and SM, higher than the increases in MA (15.0%) and SC (14.5%) (Fig. 3(c)). The greatest accumulation of bacterial necromass C in response to manure was in LM, at 60.2% (Fig. 3(b)). Straw application did not significantly increase microbial necromass C in any aggregate size (Fig. 3(a-c)). Straw application significantly increased the ratio of fungal-derived to bacterial-derived necromass in LM (Fig. 3(d)). NT/RT led to faster accumulation of total and fungal necromass C than of bacterial necromass, increasing total necromass C by 64.2% and 61.6% and fungal necromass C by 68.0% and 73.5% in LM and SM, respectively (Fig. 3(a,c)). NT/RT led to greater accumulation of fungal necromass than bacterial necromass, with the ratio of fungal-derived to bacterial-derived necromass being significantly greater in SM and MA (Fig. 3(d)). In SM, cover crops significantly increased total and bacterial necromass C by 22.9% and 25.1%, respectively, but not fungal necromass (Fig. 3(a-c)). For the contribution of necromass to SOC, the proportion of total necromass C in SOC increased by 17.8% and 23.0% in LM and SM under NT/RT, respectively (Fig. 3(e)). NT/RT also increased the contribution of fungal necromass C to SOC in SM (Fig. 3(g)).
Relationships between environmental variables and microbial necromass within soil aggregate fractions
Correlation analyses indicated that both climatic conditions and soil properties are important factors associated with microbial necromass C and its contribution to SOC in LM, SM and MA (Fig. 4). MAT, MAP, SOC, TN and soil clay content were most strongly associated with microbial necromass C in SC, but showed no significant correlation with either the necromass contribution to SOC or the ratio of fungal-derived to bacterial-derived necromass. SOC, TN and soil clay content had positive relationships with microbial necromass C, whereas MAT and MAP were negatively associated with microbial necromass in SC. Specifically, microbial necromass C increased with SOC, TN, C/N and clay content in LM and MA, whereas the ratio of fungal-derived to bacterial-derived necromass decreased. In addition, microbial necromass within soil aggregate fractions increased as soil clay content increased.
Contribution of microbial necromass to physically stabilized C
The response ratios of total microbial necromass C were positively correlated with the response ratios of SOC within soil aggregate fractions (Fig. 5). The response ratios of bacterial necromass C were positively correlated with the response ratios of SOC in LM, SM and SC, whereas there was no significant correlation in MA. The response ratios of fungal necromass C were positively correlated with the response ratios of SOC in all aggregate fractions, especially in macroaggregates, in which soil aggregates coupled with microorganisms (microbial necromass) physically stabilize SOC sequestration (Fig. 5 and Fig. 6).
DISCUSSION
Soil microbial necromass contributes substantially to SOM (15% to 80%) [9,25]; it includes intact or burst cells or hyphae, fragments of cell walls, and monomers or polymers that were in the cytoplasm, biofilm or hyphal mucilage [9]. As binding agents, necromass helps form or stabilize soil aggregates [29]. In turn, aggregates physically protect microbial necromass from degradation, which promotes further necromass accumulation [30]. Different aggregate fractions have different potential to influence the contribution of microbial necromass to SOC. Our results illustrate that total microbial necromass C can contribute 59.4%, 59.5%, 56.9%, and 49.2% of SOC in LM, SM, MA, and SC, respectively (Fig. 1). These proportions were higher than in previous studies reporting that microbial necromass accounts for 47.2%, 49.7%, and 38.6% of stabilized SOM in macroaggregates, microaggregates, and the silt and clay fraction, respectively [12]. In the present study, we only included data from managed cropland, whereas data from forest and agricultural soils were included in the previous study [12]. Due to the increased proportion of mineral-associated C and the more rapid microbial transformation of litter in croplands, the microbial necromass contribution to SOC has been found to be larger than that in forests [25,31]. It is generally believed that microbial necromass may preferentially accumulate in mineral fractions (silt and clay) because the distances between necromass and sorption sites are reduced [32]. However, the contribution of total necromass to SOC in the SC fraction was lower than in LM, SM, and MA (Fig. 1). This may be because microbial necromass in macro- and microaggregates, apart from being attached to mineral surfaces, can also be protected in the small pores associated with aggregates [33]. The contribution of fungal necromass to SOC increased with aggregate size (Fig. 6), which highlights the role of fungi in aggregate formation and stabilization [30]. Meanwhile, strong correlations between the response ratios of fungal necromass and of SOC associated with aggregates were observed (Fig. 5). Bacterial necromass can directly, but non-specifically, attach to clay surfaces [34], and thus has a higher proportion in SC than in the other fractions (Fig. 1).
Therefore, the contribution of fungal and bacterial necromass to SOC depends largely on the aggregate fraction. It will be necessary to understand how soil aggregates, coupled with microorganisms, regulate the stabilization of organic matter.
Management practices can increase microbial necromass accumulation in cropland soil [35][36][37], and this effect depends on soil aggregates. An increase in total microbial necromass was observed in LM, SM and MA but was not evident in SC (Fig. 2(a)). This result answers our first question: cropland management affected total microbial necromass in a manner dependent on aggregate fractions. It is possible that saturation of microbial necromass in silt-clay occurred earlier than in larger aggregates, so that additional C accumulated only in larger aggregates [10]. However, cropland management increased bacterial necromass C in all aggregate fractions; notably, bacterial necromass also increased in the SC fraction (Fig. 2(b)). Studies have reported a dominance of bacterial, rather than fungal, amino sugars in the SC fraction [38]. Meanwhile, bacterial necromass held a higher proportion in SC than in the other fractions (Fig. 1 and Fig. 6). These results indicate that bacterial necromass may be an important variable influenced by management in the SC fraction [17].
The management effects depended strongly on the specific practices (Fig. 3). Greater accumulation of total microbial necromass in LM and SM was observed in response to manure and NT/RT (Fig. 3(a)). The statistical difference was absent due to the small number of studies under NT/RT (Table 1). Manure and NT/RT increased fungal necromass more in LM and SM than in MA and SC (Fig. 3(c)), which indicates that fungi in macroaggregates were most influenced by cropland management [39]. NT/RT reduces soil disturbance, promotes aggregate formation and protects fungal hyphae, hence the preferential accumulation of fungal necromass in macroaggregates [40]. The strengthening of stable aggregates under manure means that microbial necromass occluded in soil aggregates can be spatially protected from microbial decomposition [36]. Under cover crops, the supply of diverse microbial substrates through litter, root exudates and rhizodeposits increases soil bacterial activity and/or growth [18], leading to intensified production of bacterial, but not fungal, necromass (Fig. 3(b,c)). Management promoted the microbial contribution to SOC only in SM, especially under NT/RT (Fig. 2(e) and Fig. 3(e)). NT/RT increased the contribution of fungal necromass to SOC more than that of bacterial necromass (Fig. 2(g) and Fig. 3(g)), which indicates that fungal-derived C contributes predominantly to stable SOC accrual. The increased contribution of microbial necromass in SM may arise because SM has more capacity for necromass accumulation than microaggregates and silt-clay. Meanwhile, SM has a more stable necromass pool, as SM stability is higher than that of LM [34]. Management influences microbial necromass through the growth of plant roots and microorganisms; thus, microbial necromass in macro- and microaggregates was readily controlled by management, whereas that in silt-clay depends on sorption sites, independent of management [41]. Increasing microbial necromass accumulation within aggregates could therefore be considered an important management strategy for accelerating C sequestration in agricultural soils.
Soil nutrients exert significant control over microbial necromass accumulation. As observed, SOC and TN had positive correlations with microbial necromass C within aggregates (Fig. 4(a,c,d)). Microbial necromass sequestration will be more efficient in nutrient-rich soil due to high C use efficiency [42]. The ratio of fungal-derived to bacterial-derived necromass decreased as nutrients increased, because higher-nutrient conditions favor bacterial growth and subsequent increases in bacterial necromass [10]. Soil texture is also one of the key factors regulating SOC stability [43]. In general, high clay content confers the ability to stabilize microbial necromass C by physicochemical protection, thus increasing necromass accumulation in all aggregate fractions. Climatic conditions had no obvious consistent impact on microbial necromass within aggregates (Fig. 4); therefore, further study of the climatic conditions favoring microbial necromass accumulation within soil aggregates is needed.
CONCLUSIONS
Cropland management practices increased microbial necromass associated with aggregates, with an increase in bacterial necromass in all aggregate fraction sizes and an increase in fungal necromass in all fractions except SC. Manure and NT/RT increased fungal necromass in macroaggregates, and cover crops increased bacterial necromass in SM. The response ratios of fungal necromass correlated positively with those of SOC associated with aggregates, especially in macroaggregates. Consequently, it is necessary to consider the accumulation of microbial necromass associated with aggregates under cropland management, which could favor stable soil C formation and accrual in cropland soils.
Fig. 2
Fig. 2 The overall response of microbial necromass C (a-c), ratio of fungal-derived to bacterial-derived necromass (d) and necromass contribution to soil organic C (e-g) within soil aggregate fractions to management in cropland. TNC, total necromass C; BNC, bacterial necromass C; FNC, fungal necromass C; SOC, soil organic carbon; and FNC/BNC, the ratio of fungal-derived to bacterial-derived necromass C. The numbers of observations are shown in parentheses. Closed symbols indicate significant effects.
Fig. 3
Fig. 3 Percent changes in microbial necromass C (a-c), ratio of fungal-derived to bacterial-derived necromass (d) and necromass contribution to soil organic C (e-g) within soil aggregate fractions dependent on cropland management. TNC, total necromass C; BNC, bacterial necromass C; FNC, fungal necromass C; SOC, soil organic carbon; FNC/BNC, the ratio of fungal-derived to bacterial-derived necromass C; NT/RT, no or reduced tillage; LM, large macroaggregates; SM, small macroaggregates; MA, microaggregates; and SC, silt and clay. The numbers of observations are shown in parentheses. Closed symbols indicate significant effects.
Fig. 6
Fig. 6 Concept and meta-analysis results of the responses of necromass C within soil aggregate fractions to cropland management. NT/RT, no or reduced tillage; LM, large macroaggregates; SM, small macroaggregates; MA, microaggregates; and SC, silt and clay.
Table 1 Between-group heterogeneity (Q_M) and the probability (P) showing statistical differences in microbial necromass responses to management between different levels of the aggregate sizes
Note: Q_M is the heterogeneity of the weighted effect size associated with different aggregate sizes, and P < 0.05 (shown in bold) indicates significant differences among different aggregate sizes. TNC, total necromass C; BNC, bacterial necromass C; FNC, fungal necromass C; SOC, soil organic carbon; FNC/BNC, the ratio of fungal-derived to bacterial-derived necromass C; and NT/RT, no or reduced tillage.
Sorting out the mix in microbial genomics
The relatively small number of microbial genomes completed in the past two months (Table 1) includes, however, representatives of two new bacterial phyla, Dictyoglomi and Nitrospirae. To highlight the first genome sequences from these poorly studied taxa, they have been placed in a new section at the top of Table 1.
So far, little is known about either Dictyoglomus thermophilum or Thermodesulfovibrio yellowstonii. Both are Gram-negative thermophilic heterotrophs with an extremely low (29 mol%) G+C content of their chromosomal DNA (Saiki et al., 1985; Henry et al., 1994). Dictyoglomus thermophilum is an obligate anaerobe that grows optimally at 70-73°C. It was isolated from the Tsuetate hot spring in Japan (Saiki et al., 1985) and used to purify three extremely heat-stable amylases (Horinouchi et al., 1988). Thermodesulfovibrio yellowstonii was isolated from a thermal vent in Yellowstone Lake in Wyoming. It contains c-type cytochromes and grows optimally at 65°C using lactate, pyruvate or formate plus acetate as substrates and can use sulfate, thiosulfate and sulfite as terminal electron acceptors (Henry et al., 1994). Analysis of these genomes should provide a window into the physiology and evolutionary relationships of these new bacterial lineages.
Completion of these two genomes is a major step towards the goal of having at least one complete genome sequence from representatives of all major prokaryotic groups. Indeed, we now have at least one completely sequenced genome for 18 bacterial phyla out of the 24 listed in the taxonomic outline in the second volume of Bergey's Manual (Garrity et al., 2004; available at http://www.bergeys.org/outlines.html). Representatives of five more bacterial phyla are at various stages of genome sequencing: Chrysiogenes arsenatis (phylum Chrysiogenetes), Denitrovibrio acetiphilus (Deferribacteres), Fibrobacter succinogenes (Fibrobacteres), Thermodesulfobacterium commune (Thermodesulfobacteria) and Thermomicrobium roseum (Thermomicrobia, which may be considered a class in the phylum Chloroflexi). Only one of those 24 phyla (Gemmatimonadetes) is still not the subject of any publicly announced genome sequencing project. This is obvious progress in the coverage of microbial diversity compared with the status of genome sequencing just two years ago (Galperin, 2006). However, this nice and clear picture of bacterial taxonomy and the corresponding genome sequencing efforts is complicated by several different factors. First of all, high-rank bacterial taxonomy is still in a state of flux: new candidate phyla are being identified and new genome sequencing projects are being planned to characterize their representatives. There are ongoing sequencing projects for Lentisphaera araneosa and Thermanaerovibrio acidaminovorans, cultured representatives, respectively, of the recently recognized phyla Lentisphaerae (Cho et al., 2004) and Synergistetes (Aminanaerobia) (Hongoh et al., 2007; Jumas-Bilak et al., 2007). In addition, genomic sequencing is being performed on candidate phyla that were initially deduced based solely on the clustering of 16S rRNA sequences. Examples include a nearly complete genome from the candidate phylum TM7, which still has no cultivated members (Marcy et al., 2007), and two recently sequenced genomes from representatives of the candidate phylum Termite group 1 (TG1), one of which, "Elusimicrobium minutum", has in the meantime been successfully cultivated. Sometimes genomic data reveal distant similarities between two or more phyla, which results in their unification into a group (e.g. Bacteroidetes/Chlorobi, Fibrobacteres/Acidobacteria) or a superphylum, e.g. Chlamydiae/Verrucomicrobia/Planctomycetes/Lentisphaerae (Wagner and Horn, 2006; Hou et al., 2008). Besides, certain validly described bacterial groups still lack any sequence information (Yarza et al., 2008). Finally, there are several alternative classifications of bacteria that have made their way into the taxonomic literature but, for a variety of reasons, failed to gain acceptance in the community (Gupta, 1998; 2000; Cavalier-Smith, 2002; 2006). Another such example is the already mentioned (Galperin, 2008) recent transfer of Mollicutes from the phylum Firmicutes into a new phylum Tenericutes in the latest edition of Bergey's Manual (Ludwig et al., 2008). The phylogenetic trees that served as the rationale for that move show numerous inconsistencies and hardly justify the decision to create this new phylum.
It must be noted that back in 1992, Sneath and Brenner stated 'There is no such thing as an official classification' (see http://www.bacterio.cict.fr/Sneath-Brenner.html). This point was recently reiterated by J.P. Euzéby, whose List of Prokaryotic names with Standing in Nomenclature (http://www.bacterio.cict.fr/) includes an up-to-date listing of commonly recognized prokaryotic phyla (http://www.bacterio.cict.fr/classifphyla.html). For a quick look at the current state of microbial genome sequencing, the easiest tool might be the NCBI's Tax Tree (http://www.ncbi.nlm.nih.gov/genomes/MICROBES/microbial_taxtree.html), which lists both completed and ongoing genome sequencing projects. However, for those interested in the emerging microbial diversity, the best source of information is probably the 'greengenes' website (http://

optimum at 63°C. Although C. proteolyticus was initially described as a Gram-negative bacterium and therefore suggested to belong to a deep bacterial lineage, potentially at the phylum level (Rainey and Stackebrandt, 1993), analysis of its 16S rRNA revealed that it is related to Thermoanaerobacter sp. It is currently assigned to the family Thermodesulfobiaceae (Mori et al., 2003) within the order Thermoanaerobacterales and is the first sequenced genome from that family. Phenylobacterium zucineum is an α-proteobacterium recently isolated from a human erythroleukemia cell line (Zhang et al., 2007). All close relatives of P. zucineum are free-living environmental organisms, and its 4.4 Mb genome is much larger than that of any intracellular parasite or symbiont characterized so far. Indeed, the genome sequence (Luo et al., 2008) revealed similarities with the genome of Caulobacter crescentus. However, fragments of P. zucineum genomic DNA were found among the EST libraries from breast cancer and lymphatic cell lines, suggesting that this organism might survive in proliferative tissues.
Acidithiobacillus ferrooxidans (previously known as Thiobacillus ferrooxidans; Kelly and Wood, 2000) is an obligately acidophilic chemolithoautotrophic γ-proteobacterium and a popular model organism to study bacterial membrane energetics at acidic pH values (see Ferguson and Ingledew, 2008 for a recent review). It gains energy by oxidizing ferrous iron and is able to grow in the pH range from 1.3 to 4.0 using CO2 as the sole source of carbon. Acidithiobacillus ferrooxidans is a major component of microbial consortia used in bio-mining to extract copper, zinc and other metals from low-grade ores. With the recent increase in the price of gold, A. ferrooxidans-based microbial consortia are increasingly used to improve the recovery of gold from arsenopyrite ores. Despite the importance of this organism (or maybe because of it), sequencing of the A. ferrooxidans genome had a long and convoluted history. The first (incomplete or 'gapped') genome sequence of the type strain A. ferrooxidans ATCC 23270 was produced at Integrated Genomics in 1999. It consisted of 1353 contigs covering 2611 kb and coding for 2712 proteins; it was estimated to lack ~100 kb (Selkov et al., 2000). This sequence was used for an analysis of the amino acid metabolism in A. ferrooxidans, which allowed an almost complete reconstruction of its metabolic pathways, leaving just 10 unassigned (missing) enzymes. Despite the initial intent of the authors to demonstrate that 'gapped' microbial genomes were almost as good as complete ones (Selkov et al., 2000), this paper actually succeeded in proving the opposite: a meaningful and unequivocal analysis is only possible with a complete genome sequence. Furthermore, only small pieces of the genome were submitted to GenBank, which prevented others from analysing this genome.
Shortly after that, sequencing of the A. ferrooxidans genome was undertaken at The Institute for Genomic Research (TIGR). The resulting incomplete genome sequence of 3081 kb was made publicly available in 2001 as RefSeq entry NC_002923 and was subsequently used for a variety of genome analyses (e.g. Valdés et al., 2003; Quatrini et al., 2007). Over the next two years, this sequence was updated more than a dozen times and was finally withdrawn at the end of 2003. Since 2006, a complete genome sequence of 2982 kb coding for 3217 predicted proteins has been available on the TIGR website but was not submitted to GenBank. Finally, a recent joint paper by Chilean and TIGR scientists (Valdés et al., 2008) reported a detailed analysis of this genome and its availability to the public.
Meanwhile, JGI scientists have released a 2885 kb genome sequence of another strain of A. ferrooxidans, which encodes 2826 proteins. This strain, A. ferrooxidans ATCC 53993, was isolated from mine water of the Alaverda copper deposit in Armenia and initially assigned the name Leptospirillum ferrooxidans (Balashova et al., 1974; Hippe, 2000). Although its relation to the type strain ATCC 23270 is not known at this time, their 16S rRNA sequences are 100% identical. Thus, after 10 years of struggling with unfinished genome sequences, the public now has access to two complete genomes of A. ferrooxidans. This should allow further analyses of the properties of this remarkable organism and stimulate its use in energy research and bio-mining.
The list of completely sequenced spirochaete genomes has grown to include the genomes of Borrelia duttonii and Borrelia recurrentis (Lescot et al., 2008). Both organisms are important human pathogens causing relapsing fevers. The first one is transmitted by the tick Ornithodoros moubata and is found primarily in east Africa. Borrelia recurrentis is transmitted by the human body louse Pediculus humanus and is found around the world. The sequenced strain B. duttonii Ly was isolated from a 2-year-old girl with tick-borne relapsing fever in Tanzania, whereas B. recurrentis strain A1 was isolated from an adult patient with louse-borne relapsing fever in Ethiopia.
Klebsiella pneumoniae ssp. pneumoniae is a well-known human pathogen, and the first genome of its clinical isolate MGH 78578 was sequenced more than two years ago. A very interesting paper from JCVI scientists now reports the genome sequence of an environmental N2-fixing strain of K. pneumoniae (Fouts et al., 2008). Such strains are commonly found as endophytes that colonize the tissues of rice, maize, sugarcane, banana and various grasses and improve the growth of the host plants by supplying them with ammonia. The sequenced strain K. pneumoniae 342 was isolated from maize and later shown to colonize wheat and alfalfa sprouts. Comparative analysis of the two strains provides interesting clues to the adaptation to the endophytic lifestyle, as well as to the evolution of pathogenicity in K. pneumoniae.
The list of organisms with recently sequenced genomes also includes the marine γ-proteobacteria Alteromonas macleodii and Vibrio fischeri, the δ-proteobacteria Anaeromyxobacter sp. and Geobacter bemidjiensis, five new strains of Salmonella enterica ssp. enterica that include four new serovars (Thomson et al., 2008), Streptococcus equi ssp. zooepidemicus (also known as Streptococcus zooepidemicus), the cause of an epidemic of acute nephritis in Brazil (Beres et al., 2008), and the well-studied Helicobacter pylori strain G27 (Table 1).
Enhancement of the electrical conductivity of defective carbon nanotube sheets for organic hybrid thermoelectrics by deposition of Pd nanoparticles
Although a single carbon nanotube (CNT) has a high thermal conductivity, a randomly assembled sheet has high potential as an effective thermoelectric (TE) film because of the high electrical conductivity due to the intrinsic properties of the CNT and the low thermal conductivity due to phonon scattering at the interfaces of the CNTs. Nevertheless, the high cost of CNTs limits the practical application of CNT sheets in TE devices. The recent mass production of inexpensive CNTs may resolve this problem, although the mass-produced CNTs have many more defects on their surface, which often reduces the electrical conductivity in comparison to the expensive CNTs. We have now discovered that the deposition of palladium nanoparticles (Pd NPs) can enhance the electrical conductivity of the CNT sheets. The electrical conductivity of the mass-produced and defective super-growth CNT (SGCNT) sheets has been found to increase from about 90 S cm⁻¹ to 170 S cm⁻¹ upon depositing the Pd NPs on the SGCNTs by an accumulated chemical reduction method. The results suggest the possibility that the Pd NPs deposit on the defective sites of the SGCNTs, which produces the improved electrical conductivity of the CNT sheets. Because the SGCNT sheets have a low thermal conductivity, the thermoelectric figure-of-merit (ZT) at room temperature was estimated to be as high as ~0.3.
Introduction
Single-walled carbon nanotubes (SWCNTs) have received much attention since their first discovery by Iijima et al. 1 and Bethune et al. 2 in 1993. Many fundamental research studies have since been carried out, reporting that SWCNTs can be semiconducting or metallic, 3,4 have diameter-dependent optical and electrical band gaps, [4][5][6] and have many attractive properties such as incredibly high charge carrier mobilities. 7 Although the as-prepared SWCNTs are mixtures, separation and purification techniques have been developed, establishing SWCNTs as a versatile electronic material and providing high-performance applications, such as transistors, 8 biological imaging fluorophores, 9 photovoltaics, 10 etc. In the past decade, nanomaterials, especially SWCNTs, have also attracted increasing attention as organic and/or hybrid thermoelectric (TE) materials, [11][12][13][14][15][16][17][18][19] which are essential for developing TE devices for harvesting electrical energy from low-grade waste heat 20 as well as for the so-called "IoT". 21 Although a single CNT has a high thermal conductivity, a randomly assembled CNT sheet or network has a much lower thermal conductivity than the single CNT due to phonon scattering at the interfaces of the CNTs in the sheet, but maintains a high electrical conductivity. 22 Thus, CNT sheets have a significant potential to be used in TE devices.
There have been many reports about the electrical conductivity and Seebeck coefficient of sheets of various SWCNTs. 15 The electrical conductivity and Seebeck coefficient of a SWCNT depend on its diameter, chirality, and doping. 13,15 Thus, separation and purification are necessary to obtain CNTs with a high TE performance. In fact, a CNT sheet with a high Seebeck coefficient was reported to have a high TE performance. 23,24 In addition, composites of the CNTs with electrically conducting or insulating polymers showed a higher performance as practically processable TE materials. 25,26 We have proposed a ternary hybrid TE material composed of CNTs, a nano-dispersed polymerized nickel complex (poly(nickel 1,1,2,2-ethylenetetrathiolate), PETT) 27 and poly(vinyl chloride) with a high TE performance. 28 Grunlan and Yu's group reported that a 40-quadlayered polyelectrolyte carbon nanocomposite prepared by the layer-by-layer deposition of polyaniline/graphene/polyaniline/double-walled CNT showed an extremely high maximum power factor. 29 These reports have clearly shown that CNTs are promising materials for TEs.
As already mentioned, however, the CNTs that were effective for a high TE performance were usually specialized and very expensive. Recently, inexpensive CNTs were mass-produced in a factory by a super-growth (SG) method. 30 However, the disadvantage of the SGCNT is the presence of many defects on a tube, which results in a low electrical conductivity. We wanted to improve the electrical conductivity of the defective SGCNTs, because the electrical conductivity is a key factor in enhancing the TE performance, i.e., the TE figure-of-merit (ZT).
Hybridization is one of the popular techniques to improve the physical properties of materials, and nanomaterials are often preferred as the component. In fact, there are many reports about hybrids of one-dimensional nanomaterials like CNTs or conducting polymers with metal NPs to improve the physical properties of the nanomaterials. 14,15,[31][32][33][34][35][36][37][38][39][40] Among them, however, only a few reports were concerned with hybrid thermoelectric materials composed of SWCNTs and metal NPs. [36][37][38][39][40] In 2011, C. Yu et al. 36 reported the decoration of arc-discharged SWCNTs with Au NPs by the reaction with HAuCl4. The gold reduction allowed electron withdrawal from the CNTs, resulting in a 1.5 times higher conductance, but half the thermopower, in the produced composites. Thus, the thermoelectric power factor (PF = σS², where σ and S denote the electrical conductivity and thermopower (Seebeck coefficient), respectively), which denotes the thermoelectric performance, was unfortunately 0.375 times the value of the pristine CNT. His group 37 also reported the thermoelectric properties of the composite composed of Au NP/SWCNT (by a HiPco method)/PEDOT-PSS (poly(3,4-ethylenedioxythiophene)-poly(styrene sulfonate))/PVAc (poly(vinyl acetate)) (15/60/15/10 in wt%). It had a high electrical conductivity (~6000 S cm⁻¹), 6 times higher than that of the pristine composite without Au NPs (SWCNT/PEDOT-PSS/PVAc: 60/30/10 in wt%), but again a lower Seebeck coefficient (11 μV K⁻¹), 0.275 times the value of the pristine one, resulting in 0.453 times the power factor of the pristine one. Thus, the galvanic method could be insufficient for improvement of the TE performance by decoration of the CNT with metal NPs. In 2013, Fernandes et al. 38 prepared Au NP/CNT composites by mixing Au NPs (5 and 60 nm in average diameter) with SWCNTs dispersed by 1 wt% of a surfactant, sodium dodecyl sulfate, in water. The addition of small and large Au NPs decreased the electrical conductivity from 1800 S cm⁻¹ for the pristine CNT to 620 and 790 S cm⁻¹ for the composites with small and large Au NPs, respectively, and also decreased the Seebeck coefficient from 31 μV K⁻¹ to ~25 μV K⁻¹, resulting in a power factor decreased from 167 μW m⁻¹ K⁻² to 42.0 and 47.0 μW m⁻¹ K⁻² for the small and large Au NPs, respectively. In this case, therefore, no improvement was observed by incorporation of the Au NPs. In 2016, An et al. 39 succeeded in covering porous CNT webs with Au NPs by reduction of AuCl3 with the CNTs. The coverage increased the electrical conductivity from 1000 S cm⁻¹ for the pristine CNT web to 5000 S cm⁻¹, and decreased the Seebeck coefficient from ~120 μV K⁻¹ to ~80 μV K⁻¹, resulting in an improvement of the power factor from ~1500 μW m⁻¹ K⁻² to ~3500 μW m⁻¹ K⁻² and a more than 2-fold improvement in the TE figure-of-merit ZT, from 0.079 to 0.163. After covering, polyaniline (PANi) was integrated into the Au-doped CNT webs, leading to an electrical conductivity of ~1000 S cm⁻¹, Seebeck coefficient of ~150 μV K⁻¹, power factor of ~2200 μW m⁻¹ K⁻², and ZT of 0.203, which is one of the highest ZT values reported for organic TE materials. In 2017, we prepared CNT-based hybrid films by the physical mixture of SWCNTs and Pd NPs (2.2 and 6.2 nm average diameters) with and without poly(vinyl chloride) (PVC) (4 : 0.4 : 6 and 10 : 1 : 0 in wt ratio, respectively) in NMP (N-methylpyrrolidone). 40
In the resulting hybrid films containing PVC, the addition of the larger Pd NPs (commercially-available Pd black) slightly decreased the electrical conductivity with almost no change in the Seebeck coefficient, while the addition of the smaller Pd NPs (prepared separately by ourselves) significantly improved the electrical conductivity from ~67 S cm⁻¹ to ~95 S cm⁻¹ (about 1.4 times higher) and slightly increased the Seebeck coefficient. For the hybrids of the CNTs and Pd NPs without PVC, the electrical conductivity was increased from 52 S cm⁻¹ to 93 S cm⁻¹ by incorporation of the small Pd NPs, while the Seebeck coefficient remained nearly constant with a slight increase (from 52.4 μV K⁻¹ to 58.7 μV K⁻¹). These previously published metal NP-containing SWCNT-based hybrids can be classified into two types, i.e., those produced by an accompanying reaction withdrawing electrons from the CNTs (ref. 36, 37 and 39) and those produced without such a reaction (ref. 38 and 40). In the former case, the reaction provided holes (carriers) to the CNTs, resulting in a significant improvement in the electrical conductivity but a decrease in the Seebeck coefficient upon incorporation of the metal NPs. In the latter case, on the other hand, the electrical conductivity decreased if the NPs were large, and moderately increased if the NPs were sufficiently small. As for the Seebeck coefficient, large NPs decreased it, but, if the NPs were small, it remained nearly constant. In addition, it is noteworthy that the metal NPs were randomly deposited on the surface of the CNTs in these hybrids.
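As a quick check on the power factors quoted above, PF = σS² can be evaluated directly; the following minimal sketch only encodes the unit conversions, and the example input values are those cited from Fernandes et al. above.

def power_factor(sigma_s_per_cm, seebeck_uv_per_k):
    # PF = sigma * S^2, returned in uW m^-1 K^-2
    sigma = sigma_s_per_cm * 100.0        # S/cm -> S/m
    seebeck = seebeck_uv_per_k * 1e-6     # uV/K -> V/K
    return sigma * seebeck**2 * 1e6       # W m^-1 K^-2 -> uW m^-1 K^-2

print(power_factor(1800, 31))  # ~173, close to the quoted 167 uW m^-1 K^-2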
We have now designed a new type of hybrid film in which metal NPs are deposited mainly on the defect sites on the surface of the CNTs. For this purpose, the defective SGCNTs are a favorable material for the hybrids, because the defects are considered to be holes or vacancies that may be functionalized with polar groups like the COOH group. Thus, we have tried various methods for the hybridization of the SGCNTs with palladium nanoparticles (Pd NPs), and discovered that the chemical deposition of Pd NPs on the SGCNTs can enhance the electrical conductivity of the CNT sheets, resulting in an improved ZT value (ZT ≈ 0.3) upon hybridization. This ZT value is one of the highest reported for organic TE materials. 14,15,28 These results suggest the possibility that the Pd NPs cover the defective sites of the SGCNTs, in other words, repair the defects of the SGCNTs. This can provide a new concept in the chemistry and physics of CNTs.
Materials
Palladium black (purity: 99.95%) and palladium acetate were purchased from Kojima Chemical Co., Ltd, Japan. 1-Methyl-2-pyrrolidone (NMP, for peptide synthesis), hydrochloric acid (reagent grade), nitric acid (for analysis of poisonous metals), and methanol (reagent grade) were obtained from the FUJIFILM Wako Pure Chemical Corporation. The super-growth carbon nanotubes (SGCNTs) were kindly provided by the Nippon ZEON Corporation, Japan. The provided SGCNTs were SWCNTs with a diameter of about 3-8 nm and a length of about 1-100 μm.
Preparation of free Pd NPs by a chemical reduction method
The reduction of palladium ions was carried out by heat treatment of an NMP solution of palladium acetate (1.4 mmol L⁻¹) in a flask placed in an oil bath at 100 °C for 45 min. Quick cooling of the reaction solution in the flask with iced water resulted in the formation of a dispersion of the free Pd NPs.
Preparation of sheets of SGCNTs decorated by Pd NPs by a physical mixture
The SGCNTs were dispersed in NMP using a jet mill at a concentration of 2.0 mg mL⁻¹. The Pd NP dispersion in NMP (0.15 mg mL⁻¹), separately prepared by a chemical reduction method or by mixing with commercially-available Pd black, was mixed with the above-prepared SGCNT dispersion in NMP at the weight ratio of Pd : CNT = 1 : 9, 2 : 8, 3 : 7, and 5 : 5 by using a magnetic stirrer for 30 min. Suction filtration of the mixed dispersion on a membrane filter provided wet sheets, which were washed with methanol several times, well dried at room temperature in air, then completely dried under vacuum at 40 °C overnight to produce dry sheets.
Preparation of Pd NP-decorated SGCNTs by an accumulated chemical reduction method
The SGCNT dispersion in NMP, prepared using a jet mill at a concentration of 2.0 mg mL⁻¹, was mixed with a palladium(II) acetate solution in NMP (1.4 mmol L⁻¹). The mixed dispersion was heated in an oil bath at 100 °C for 45 min while stirring, resulting in a Pd NP-decorated SGCNT dispersion (abbreviated as Pd@SGCNT). The sheets were prepared from the dispersions by the same method as for the dispersions prepared by physical mixing.
Measurement and instruments
All measurements were carried out at least three times, and the average value is reported as the result for each experiment.
Elemental analysis. Agilent Technologies, Ltd, 720-ES inductively coupled plasma (ICP) emission spectrometer. About 3-5 mg of the Pd-containing hybrid sheets, the weight of which was exactly measured, was dissolved in 3 mL of aqua regia overnight. After a three-fold dilution of the dispersion with deionized Milli-Q water, the SGCNTs were removed using a syringe filter with a pore size of 0.45 μm, and 25 mL sample solutions were prepared by dilution of the extract solution with deionized Milli-Q water. Three sample solutions were prepared for each hybrid sheet and the elemental analyses were carried out using the ICP emission spectrometer.
Thermoelectric properties. ADVANCE RIKO, Inc., ZEM-3 and ZEM-3HR thermoelectric evaluation systems. The average thickness of the CNT sheet samples (4 × 16 mm) was obtained by averaging data at 8 points measured by a Mitsutoyo, Ltd, contact-type micrometer, and was 23.0 ± 3.0 μm. The Seebeck coefficient S and electrical conductivity σ were measured by the ZEM-3 or ZEM-3HR in the in-plane direction. The thermal diffusivity was measured by a NETZSCH LFA447 NanoFlash xenon flash analyzer in the through-plane direction. The specific heat Cp was measured by a NETZSCH DSC 204 F1 Phenix differential scanning calorimeter. The density ρ was measured by the Archimedes method.
Hall measurement. The carrier concentration n and carrier mobility μ were measured at room temperature by the van der Pauw method using an in-house developed apparatus at the Anno Laboratory. An external magnetic field of 1 T was applied. The Hall carrier concentration n was calculated from the Hall coefficient R_H using the relation R_H = 1/(en), where e is the elementary charge. The Hall mobility μ was calculated from the electrical conductivity σ and the Hall coefficient R_H using the relation μ = σR_H, assuming that the Hall factor is unity.
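For reference, the Hall relations above reduce to two lines of arithmetic. This is a minimal sketch assuming a Hall factor of unity, as stated above, and ignoring sign conventions for the carrier type.

E_CHARGE = 1.602176634e-19  # elementary charge e, C

def hall_analysis(sigma, hall_coefficient):
    # Carrier concentration n = 1/(e * R_H), in m^-3 (sigma in S/m, R_H in m^3/C)
    n = 1.0 / (E_CHARGE * hall_coefficient)
    # Hall mobility mu = sigma * R_H, in m^2 V^-1 s^-1
    mu = sigma * hall_coefficient
    return n, mu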
Raman spectrum. Raman spectra were obtained using an NRS-7100 Laser Raman spectrometer (JASCO, Japan) with a 532 nm green line laser.
X-Ray photoelectron spectroscopy (XPS). The XPS spectra of the sheets of the blank SGCNTs (sample 1) and the Pd NP-decorated SGCNTs (Pd@SGCNT) at various charged contents of Pd (samples 2, 3, and 4) were measured by a PHI Quantum-2000 scanning X-ray photoelectron spectroscopy instrument with monochromatized Al Kα radiation at 20 W (photon energy = 1486.7 eV).
Preparation of the hybrids of SGCNTs with Pd NPs
We previously reported thermoelectric hybrid films of the SGCNTs and Pd NPs with poly(vinyl chloride) (PVC). 40 We initially prepared hybrid sheets by physically mixing the SGCNTs with commercial Pd black or with separately-prepared Pd NPs; the SEM images (Fig. S2b and c, ESI†, respectively) suggested that the Pd black NPs were aggregated on the surface of the SGCNTs, while the separately-prepared Pd NPs were more separately and homogeneously deposited on the SGCNTs than the Pd black NPs. The TE properties (Seebeck coefficient (S), electrical conductivity (σ), and TE power factor (PF = S²σ)) of the SGCNT sheets covered with Pd black and with the separately-prepared Pd NPs were measured. No improvement in the TE properties was observed for the SGCNT sheets covered with the Pd black (Fig. S3, ESI†), while the hybrids with the separately-prepared Pd NPs provided a slight improvement in the electrical conductivity and power factor of the sheets, as shown in Fig. 1. These results suggest that the separately-prepared Pd NPs, which are smaller in size and more mono-dispersed than the Pd black, can more easily form hybrids with the SGCNTs than Pd black can, probably because the smaller Pd NPs can cover the defect sites on the surface of the SGCNTs by smooth interaction with the functional groups near the defects. Appropriate hybridization of the SGCNTs with Pd NPs may require the deposition of sufficiently small Pd NPs on exactly the defective sites of the SGCNTs. For this purpose, we have developed an accumulated chemical reduction method based on our long and extensive experience in the preparation of metal nanoparticles. The idea was conceived based on the properties and formation mechanism of dispersions of metal NPs in the presence of functional polymers in solution. [41][42][43] By this method, we expected to succeed in the appropriate hybridization of the SGCNTs with Pd NPs. An NMP (N-methyl-2-pyrrolidone) solution of Pd(II) acetate was mixed at room temperature with a dispersion of SGCNTs in NMP, resulting in the possible adsorption of Pd ions on the defects, because the defects were supposed to be active sites bearing functional groups. The mixed dispersion in NMP was heated, resulting in reduction of the adsorbed Pd ions by NMP as a reductant to form Pd atoms or clusters at the defects of the SGCNTs. Continuous heat treatment of the dispersion was expected to cause the accumulative deposition of Pd atoms to form Pd NPs at the defect sites of the SGCNTs. In this case, the functional groups at the defects on the SGCNTs were predicted to play a protectant role in stabilizing the small Pd NPs. The hybrids composed of the Pd NPs and SGCNTs (Pd@SGCNT) were observed by transmission electron microscopy (TEM). The TEM image of the Pd@SGCNT (Fig. 2a) suggests the presence of Pd NPs on the CNTs, and the average diameter of the Pd NPs was found to be 2.9 ± 1.1 nm (Fig. 2b). A high-resolution TEM (HRTEM) image (Fig. 2c) might show the deposition of a Pd NP at a rather defective site on the surface of the CNT. The deposition of Pd NPs at the defects of a CNT is schematically illustrated in Fig. 2d. The NP-decorated CNT can be called a "Marshal's baton" (Fig. S4, ESI†), where the baton is covered by metal ornaments. As for the location of the Pd NPs at the defect sites on the SGCNTs, there is no direct evidence at the present time. However, we have much supporting evidence for the location, which is discussed in a later section. Based on these facts, we postulated that the Pd NPs produced by the chemical method in the presence of the SGCNTs deposit mainly at the defect sites on the SGCNTs.
There are reports about metal NPs supported on the surface of carbon nanomaterials like CNTs and graphenes. [44][45][46][47][48] However, only a few reports describe applications of metal NPs supported on SWCNTs for hybrid thermoelectric materials. [36][37][38][39][40] The support of metal NPs on CNTs generally requires chemical modification of the CNTs on the surface or at the edge. For example, oxidation of the CNTs by a strong acid can introduce functional groups like -OH and -COOH, 49 which can interact with the metal NPs. However, the oxidation usually damages the CNTs and decreases the electrical conductivity. Utilization of a surfactant or polymer binder provides another method to introduce metal NPs on the surface of the CNTs. Fujigaya and Nakashima reported hybrid materials supporting platinum NPs using poly(benzoimidazole). 50 In this case, the NPs were well dispersed, but no enhancement in the electrical conductivity was observed. In contrast, our method is very mild and does not require the use of extra additives, and is thus expected to readily cover the defects with NPs and enhance the electrical conductivity of the defective CNTs, as described in a later section.
Characterization of the hybrids of the SGCNTs with Pd NPs
In order to characterize the hybrids of the SGCNTs and Pd NPs, we used Raman spectroscopy and X-ray photoelectron spectroscopy (XPS) in addition to the TEM described in the previous section. Raman spectroscopy is a useful method to characterize the crystallinity of CNTs. The peaks at about 1600 cm⁻¹ (G-band) and 1350 cm⁻¹ (D-band) are attributed to sp² carbons with a 6-membered graphene skeleton and sp³ carbons with an amorphous structure, respectively. 51,52 The intensity ratios of the G- and D-bands (G/D ratio) are usually over 10 for pure single-walled CNTs like those prepared by the MEIJO eDIPS, HiPco, and laser methods, while the G/D ratio of the SGCNTs was as low as 1.5 (Fig. S5, ESI†), because they have many defective sites. However, the G/D ratio of the Pd NP-covered SGCNTs (Pd@SGCNT) was found to be 2.5 (Fig. S5, ESI†), which is slightly higher than that of the free SGCNTs. This increase in the G/D ratio might be considered a result of the coverage of the defects of the SGCNTs by the Pd NPs.
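A G/D ratio can be estimated from a measured spectrum with a few lines of array arithmetic. The sketch below takes peak maxima in fixed windows around the two bands; the window limits are our assumption, and no baseline correction or peak fitting is performed.

import numpy as np

def g_to_d_ratio(wavenumber_cm1, intensity):
    # Peak maxima in windows around the G-band (~1600 cm^-1)
    # and the D-band (~1350 cm^-1), then their intensity ratio
    w = np.asarray(wavenumber_cm1)
    i = np.asarray(intensity)
    g_peak = i[(w > 1500) & (w < 1650)].max()
    d_peak = i[(w > 1300) & (w < 1400)].max()
    return g_peak / d_peak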
X-ray photoelectron spectroscopy (XPS) is a useful measurement technique that shows not only which elements are present within a film but also which other elements they are bonded to. We measured the XPS spectra of the sheet of the blank SGCNTs (sample 1) and those of the Pd NP-covered SGCNTs (Pd@SGCNT) with various Pd contents (samples 2, 3, and 4 with 4.62, 9.47, and 17.43 wt% Pd contents, respectively). The results are summarized in Table 1 and the original XPS spectra are shown in Fig. S6 and S7 (ESI†). The Pd concentration of the Pd@SGCNT in Table 1 reasonably increases with the increased content of Pd. The proportion of metallic Pd rather than ionic Pd increases with the increased Pd content. This is also acceptable because the size of the Pd NPs in the Pd@SGCNTs increases with the increased Pd content, and the smaller Pd NPs contain more ionic Pd due to the easy oxidation of the surface atoms in the Pd NPs. In addition, the ionic Pd could be attributed to surface Pd atoms coordinated to oxygen atoms. Another important aspect in Table 1 is that the content of functionalized carbons with the COO, C=O, C-O, and C-H/C-C structures increases from 19.2 wt% to 26.6 wt% with the increase in the Pd content, which suggests that the Pd NPs are supported at the active sites with the functionalized carbons through the chemical bond C-O-Pd.
Thermoelectric properties of the chemically hybridized SGCNT sheets
The TE properties of the sheets of SGCNTs covered by the Pd NPs using the accumulated chemical reduction method (Pd@SGCNT) were measured at 345 K. The Pd content dependence is shown in Fig. 3. In order to compare the change in the electrical conductivity at the same weight percentage of the Pd NPs on the SGCNTs for the chemical method and the physical ones, we obtained approximate expressions as second-order curves for the relation between the Pd content (wt%) and electrical conductivity from the experimental results for the systems of Pd black + SGCNT, Pd NP + SGCNT, and Pd@SGCNT, respectively, which are shown in Fig. S8 (ESI†). From these curves, we estimated the electrical conductivity of the three types of Pd-covered SGCNT sheets at Pd contents of 6, 8, 10, and 12 wt%, respectively, which are shown in Table S1 (ESI†). From the data in this table, it is noteworthy that only the decoration with Pd NPs by the chemical reduction method can effectively improve the electrical conductivity of the SGCNT sheet. At the maximum Pd content (17.4 wt%), the Seebeck coefficient (S) and electrical conductivity (σ) of the sheet of the Pd@SGCNT hybrid were ~56 μV K⁻¹ and ~160 S cm⁻¹, respectively, resulting in a power factor (PF) of ~52 μW m⁻¹ K⁻². This value is about twice that of the blank sheet of the SGCNTs without the Pd NPs. The maximum Pd content of the covered SGCNTs (Pd@SGCNT) was higher than that prepared by physical mixing with separately-prepared Pd NPs. This result suggests a stronger interaction of the Pd NPs with the SGCNTs in the chemically covered SGCNTs (Pd@SGCNT) than in the physically covered ones. The strong interaction may cause the much higher electrical conductivity and power factor of the former sheet than of the latter one. This strong interaction suggests the presence of a type of charge-transfer interaction between the NPs and the CNT, which might provide the enhancement in the electrical conductivity of the CNT sheets, as described in the next paragraph. In order to clarify the effect of the Pd NPs on the increase in the electrical conductivity of the SGCNT sheets, the electrical conductivity (σ), carrier concentration (n), and carrier mobility (μ) of the Pd NP-covered SGCNTs and blank SGCNTs were determined by a Hall measurement using the van der Pauw method, with the relation σ = enμ (e: elementary charge). The results are listed in Table 2. Covering by the Pd NPs does not affect the carrier concentration but leads to an increased carrier mobility. This result suggests that the Pd NPs could cover the defects of the SGCNTs, which results in the increased carrier mobility. In other words, the metal NPs might interact strongly with the SGCNTs by a kind of charge-transfer interaction to form a weak bond, i.e., the NPs might repair the defects of the SGCNTs, which would not increase the carrier concentration but would provide smoother motion of the carriers on the surface of the SGCNTs. This increased carrier mobility could be attributed to the covering effect of the Pd NPs rather than to a change in the structure or morphology of the surface of the CNT sheets, given the lack of difference in the magnified SEM photographs (Fig. S9, ESI†).
In contrast, the nearly constant or slightly decreased carrier concentration can result in the nearly constant or slightly increased Seebeck coefficient S, because the Seebeck coefficient is a function of the carrier concentration n through the relation S = (8π²k_B²m*T/(3qh²))(π/(3n))^(2/3), where k_B, h, m*, T, and n denote the Boltzmann constant (1.38 × 10⁻²³ J K⁻¹), the Planck constant (6.63 × 10⁻³⁴ J s), the effective mass of the carrier, the absolute temperature, and the carrier concentration, respectively, and q = ±e. This speculative consideration is very consistent with the experimental results.
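The order of magnitude implied by this relation is easy to check numerically. In the sketch below, the free-electron effective mass and the carrier concentration are illustrative assumptions, not values fitted to the present samples.

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s
E = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837e-31  # free-electron mass, kg

def seebeck(n, temperature, m_eff=M_E, q=E):
    # S = (8 pi^2 kB^2 m* T / (3 q h^2)) * (pi / (3 n))^(2/3), with n in m^-3
    prefactor = 8 * math.pi**2 * K_B**2 * m_eff * temperature / (3 * q * H**2)
    return prefactor * (math.pi / (3 * n))**(2 / 3)

print(seebeck(1e26, 345) * 1e6)  # ~107 uV/K for n ~ 1e20 cm^-3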
TE performance of the hybrid SGCNT sheets
The TE performance of materials can be evaluated by the TE figure-of-merit ZT, calculated from the equation ZT = (S²σ/κ)T, where S, σ, κ, and T denote the Seebeck coefficient, electrical conductivity, thermal conductivity and absolute temperature, respectively. It is well known that there is an anisotropy in the electrical and thermal conductivities of CNT sheets. Inoue et al. reported that both the electrical and thermal conductivities of sheets composed of multiwalled CNTs in the in-plane direction are about 8 times higher than those in the through-plane direction. 53 Since we could not measure the thermal conductivity of the CNT sheets in the in-plane direction at present, we estimated that the thermal conductivities of our samples in the in-plane direction might be 8 times higher than those in the through-plane direction.
The thermal conductivities of the pristine and Pd NP-deposited SGCNT sheets measured in the through-plane direction are shown in Table S2 (ESI†). The thermal conductivities of the pristine SGCNT sheet and the Pd@SGCNT sheet in the in-plane direction were estimated to be 0.4 W m⁻¹ K⁻¹ and 0.64 W m⁻¹ K⁻¹, respectively, which were used for the calculation of the ZT value. In order to understand the low thermal conductivity of the SGCNT sheets with and without the Pd NPs, the following discussion is worthwhile. Generally, the thermal conductivity κ is the sum of the phonon component κ_ph and the carrier component κ_e (κ = κ_ph + κ_e). In addition, the carrier component can be calculated from the electrical conductivity according to the Wiedemann-Franz law, κ_e = LσT, where L and T denote the Lorenz number (2.44 × 10⁻⁸ W Ω K⁻²) and the absolute temperature, respectively. The carrier components of the thermal conductivity (κ_e) calculated from the in-plane electrical conductivities of the pristine SGCNT sheet and of the Pd@SGCNT sheet with the highest electrical conductivity were 0.07 W m⁻¹ K⁻¹ and 0.15 W m⁻¹ K⁻¹, respectively. The typical CNT sheet has a high electrical conductivity, which leads to a high thermal conductivity. However, the SGCNTs are very porous with many defects, and have a rather larger diameter (3-8 nm) and a longer length (about 100 μm) than typical CNTs. The many defects and large diameter provide a low electrical conductivity (50-300 S cm⁻¹) and thus a low thermal conductivity. In addition, our SGCNT sheets are as thick as approximately 20 μm, which may allow random alignment of the SGCNTs in the sheet, resulting in a low density and rather low conductivity. Thus, we consider that the low thermal conductivity of the SGCNT sheet and of the Pd NP-decorated SGCNT ones can be attributed to the physical properties of the SGCNT, i.e., the high porosity and high surface area with many defects, and to the low density of the SGCNT sheet because of the rather low alignment of the SGCNTs in the sheet. The calculated ZT values based on this estimation are summarized in Fig. 4. The blank sheet composed of only the SGCNTs is estimated to have a ZT value of 0.14 at 345 K, and the sheet of Pd@SGCNT with 9.5 wt% Pd has the highest value of 0.30 at 345 K, which is 2.1 times higher than that of the blank one. Nakai et al. reported a ZT value of 0.33 at 340 K for a sheet composed of selected single-walled CNTs with a high Seebeck coefficient, which were collected by extraction and purification. 23 We have succeeded in simply obtaining sheets with a similarly high ZT value by chemical coverage of the inexpensive SGCNTs with Pd NPs. The pristine SGCNT sheets have a low electrical conductivity due to their many defects. We approached this problem from a different angle and regarded these not as defective sites but as active sites. This revolutionary idea led to the use of Pd NPs to repair the SGCNT defects. The effects of metal NPs on the electrical conductivity of organic TE compounds have been reported.
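The two relations used in this section translate directly into code. The sketch below restates the Wiedemann-Franz law and the ZT definition; the only input taken from the text is the in-plane electrical conductivity of the Pd@SGCNT sheet, and no claim is made about reproducing the full sample parameters.

LORENZ = 2.44e-8  # Lorenz number L, W Ohm K^-2

def kappa_carrier(sigma_s_per_cm, temperature):
    # Wiedemann-Franz law: kappa_e = L * sigma * T, in W m^-1 K^-1
    return LORENZ * sigma_s_per_cm * 100.0 * temperature

def figure_of_merit(seebeck_uv_per_k, sigma_s_per_cm, kappa, temperature):
    # ZT = S^2 * sigma * T / kappa (dimensionless)
    sigma = sigma_s_per_cm * 100.0
    seebeck = seebeck_uv_per_k * 1e-6
    return seebeck**2 * sigma * temperature / kappa

print(kappa_carrier(160, 345))  # ~0.13, consistent with the quoted ~0.15 W m^-1 K^-1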
In hybrid materials of conducting polyaniline doped with a small amount of gold or platinum NPs, for example, the electrical conductivity was slightly improved by the doping. 33 In conducting poly(3,4-ethylenedioxythiophene)-poly(styrenesulphonate) (PEDOT-PSS), the addition of a small amount (e.g., 0.01 wt%) of poly(N-vinyl-2-pyrrolidone) (PVP)-protected gold or silver NPs provides an increased electrical conductivity, and thus power factor, of the films. [54][55][56] The increase in the electrical conductivity and power factor could be accelerated by altering the type of protectant and metal in the hybrid PEDOT-PSS TE films. 57 The effect of metal NPs has been explained by a bridging effect of the NPs between the conducting polymer chains, but the complete mechanism has not yet been clarified. There are also a few reports about the improvement of the TE properties of SWCNTs by the addition of metal NPs. [36][37][38][39][40] The authors attribute the effect to the bridging of nanotubes and to modification of the Fermi level by electron exchange. Such effects of the NPs could also apply to our case, but the simple concept of the repair of defects by covering the defective sites with NPs might be more useful for understanding the present effect.
Discussion about location of the Pd NPs at the defects on the SGCNTs
One of the important conclusions of this study is that the in situ synthesized Pd NPs are located at the defect sites on the SGCNTs. Although there is no direct evidence for this location, several results support it, as discussed below.
First, the defect sites are modified by functional groups such as OH and/or COOH. This is a common picture for CNTs; in fact, CNTs can be chemically modified through these functional groups at the defect sites and at the edges of the CNTs.

Second, polymer-supported Pd NPs are generally prepared starting from the coordination of Pd ions to the functional groups of the polymers, followed by reduction of the Pd ions to Pd atoms and then growth into Pd clusters (Pd NPs). The coordination of Pd ions to nonbonding electrons of heteroatoms such as O and N in the functional groups is well known in coordination chemistry. Note that, in the present case, the reduction of Pd ions to Pd atoms is carried out not by the functional groups but by the solvent, NMP. This process for the formation of polymer-supported Pd NPs has already been established, for example, by peak shifts in IR spectra. 58 Taking the first result (the presence of functional groups at the defect sites) and the second result (formation of Pd NPs at the functional groups) together, we conclude that the Pd NPs should be located at the sites of the functional groups, i.e., at the defect sites.

Third, TEM photographs could prove the location of the Pd NPs at the defect sites on the CNTs if the defect sites could be clearly detected by TEM. Unfortunately, the defect sites cannot be observed by TEM in our case. However, the TEM photograph (Fig. 2c) shows a Pd NP located on the step site between the double-walled part and the single-walled part of an SGCNT. Because a step on a CNT can be considered to have a functionalized structure similar to that of the defect sites, this TEM photograph supports the location of the Pd NPs at the defect sites.

Fourth, the chemical reduction method gave a better-dispersed distribution of smaller Pd NPs on the SGCNTs than the physical mixture methods, as indicated by the SEM (Fig. S2 and S9, ESI†) and TEM photographs (not shown here). The same feeding weight ratio of Pd ions or Pd NPs to the SGCNTs was used in the three methods, yet the average diameters of the deposited NPs were 2.9 nm, 2.7 nm, and 3.5 nm for the chemical reduction, the physical mixture with separately prepared Pd NPs, and the physical mixture with commercial Pd black, respectively. This difference in the size of the Pd NPs suggests that the chemical reduction provides mild conditions for Pd NP formation, which is attributed to the stabilization of the Pd NPs by the functional groups at the defect sites on the SGCNTs.

Fifth, for comparison, the same chemical reduction method was applied to CNTs having different amounts of defect sites, i.e., the reaction was carried out under the same conditions using the SGCNTs, which have many defect sites, and mostly defect-free eDIPS-CNTs (MEIJO CNTs prepared by an enhanced Direct Injection Pyrolytic Synthesis method, G/D ratio = 40). The results were quite different. When 50 wt% of Pd ions was supplied to the dispersed SGCNTs, 17.4 wt% of Pd NPs was deposited on the SGCNTs. In contrast, when the same amount of Pd ions was supplied to the eDIPS-CNTs, only 7.8 wt% of Pd NPs was deposited. We postulate that this difference arises because the eDIPS-CNTs have fewer defects than the SGCNTs and the Pd NPs are deposited preferentially at the defect sites of the CNTs.
Based on these results, we believe that the Pd NPs produced by the chemical method in the presence of the SGCNTs are deposited mainly at the defect sites of the SGCNTs.
Discussion about a possible mechanism of the improved electrical conductivity for the Pd NP-deposited SGCNT sheets

There are many reports about the enhancement of the electrical conductivity of CNT sheets. [13][14][15] Most of them claim that the increased electrical conductivity is realized by doping the CNTs with an acid or a base as a dopant; in these methods, the carrier concentration is raised by the doping.
However, there are only a few reports on the improvement of the TE performance of CNT sheets decorated with metal NPs. Yu et al. 36 prepared Au NP-decorated SWCNT films by galvanic displacement, in which the diameters of the Au NPs were a few tens of nanometers. The Au NPs on the CNTs decreased the electrical conductivity, but the Seebeck coefficient increased, resulting in a 2-fold increase in the TE power factor. Upon NP precipitation, electron transfer occurred in order to equilibrate the Fermi levels of the materials in contact. Thus, the Au NP decoration made the CNTs more p-type by withdrawing electrons from the CNTs. Additionally, tube-tube junctions may have been modified as a result of bridging the nanotubes with the NPs. The same group 32-37 also reported TE composite films of Au NP/SWCNT(HiPco)/PVAc(600BP)/PEDOT:PSS (15 : 60 : 10 : 15 in vol%), where the electrical conductivity was as high as ~6000 S cm⁻¹ while the Seebeck coefficient was ~13 μV K⁻¹. The authors claimed that the high conductivity could be due to p-type doping caused by the Au NPs when they were precipitated on the CNTs. They also claimed that a CNT dispersion with a proper amount of CNT dispersant was crucial to maximize the electrical conductivity. Fernandes et al. 38 found that the incorporation of Au NPs deteriorated the TE properties of CNT films and that this deterioration was sensitive to the size of the Au NPs used. Thus, the addition of small (~5 nm in diameter) Au NPs to the CNTs decreased the electrical conductivity, Seebeck coefficient, and thermal conductivity from 1800 S cm⁻¹ to 620 S cm⁻¹, from 31 μV K⁻¹ to ~25 μV K⁻¹, and from 82 W m⁻¹ K⁻¹ to 73 W m⁻¹ K⁻¹, respectively. The power factor and the TE figure of merit (ZT) also decreased, from 1.67 × 10⁻⁴ W m⁻¹ K⁻² to 4.2 × 10⁻⁵ W m⁻¹ K⁻² and from 6.04 × 10⁻⁴ to 1.73 × 10⁻⁴, respectively. In this case, the Au NPs acted simply as contaminants. An et al. 39 reported that CNT bundles interconnected by a direct spinning method to form 3D networks without interfacial contact resistance provided both a high electrical conductivity and a high carrier mobility. Covering the porous CNT webs with Au NPs increased the electrical conductivity, resulting in an optimal ZT of 0.163, which represents a more than 2-fold improvement over the ZT of the pristine CNT webs (0.079). In this case, the authors claimed that the Au NPs improved the carrier mobility. After the coverage, polyaniline (PANI) was integrated into the Au-doped CNT webs to improve the Seebeck coefficient by an energy-filtering effect and to decrease the thermal conductivity by a phonon-scattering effect, which led to a ZT of 0.203. Previously, we prepared hybrid TE films simply by mixing CNTs, PVC, and separately prepared Pd NPs. 40 The TE properties were slightly improved by the Pd NPs in these three-component films, in which the Pd NPs promoted carrier transport between the CNTs and the poly(vinyl chloride) (PVC) used as a binder of the CNTs.
In contrast, it is clear in our present method that the chemical deposition of the Pd NPs on the defective SGCNTs improved the carrier mobility of the CNTs, based on the Hall measurement. Thus, the increase in the electrical conductivity of the SGCNT sheets upon deposition of Pd NPs may not be attributed to a normal doping effect. It could be considered that the metal NPs interact strongly with the SGCNTs through a kind of charge-transfer interaction to form a weak bond, i.e., the NPs might repair the defects of the SGCNTs, which would not increase the carrier concentration but would allow smoother motion of the carriers on the surface of the SGCNTs. This increase in the carrier mobility can be attributed to the covering effect of the Pd NPs rather than to a change in the structure or morphology of the surface of the CNT sheets, because no difference was clearly observed in the magnified SEM photographs (Fig. S2 and S9, ESI†). In the previous reports [36][37][38][39][40] on hybrids of SWCNTs and metal NPs, the metal NPs were randomly deposited on the surface of the CNTs, whereas in the present study the Pd NPs were designed to deposit mainly on the defective sites, which resulted in the increased electrical conductivity. This result allows the creation of a novel concept of defect repair. The detailed mechanism, including theoretical calculations such as DFT simulations, is now under investigation. In any case, covering the SGCNTs with Pd NPs is very effective for improving the electrical conductivity of the SGCNT sheets, and we believe that this is an important finding for organic hybrid TEs. The types of metal NPs effective for this purpose are not limited to Pd; Au, Ag, and Pt are also expected to be effective. When these experiments are completed, we will publish the results elsewhere. Note that the reduction of the metal ions should be carried out in the presence of the SGCNTs in solution in every case. Along with the improvement of TE properties by hybridization, it should be mentioned that there are several reports of enhanced conductivity and power factor in inorganic TE semiconducting materials through nanostructuring, especially in order to overcome the trade-off between the electrical conductivity and the Seebeck coefficient. [59][60][61] In the case of inorganic hybrids, a rather significant enhancement of the Seebeck coefficient was often observed, whereas only a small improvement in the Seebeck coefficient occurred in the present CNT-Pd NP case. Nevertheless, it should be emphasized that the present system is applicable to practical TE devices.
Conclusions
In summary, we have succeeded in covering the defects of mass-produced, inexpensive, and defective CNTs (SGCNTs) with Pd NPs to enhance the electrical conductivity of the SGCNT sheets, although the detailed mechanism is still not clear. Better coverage was achieved by the accumulated chemical reduction method than by the physical mixture methods, because the accumulated chemical reduction of Pd ions at the actual defect sites of the SGCNTs proceeds in much the same way as the formation of Pd NPs protected by functionalized polymers. 62 This result establishes the novel concept of ''defect repair,'' whose effect is attributed not to an increase in the carrier concentration but to an increase in the carrier mobility. In addition, it is noteworthy that the defect repair results not only in an increased electrical conductivity but also in a constant or slightly increased Seebeck coefficient, even though the electrical conductivity and the Seebeck coefficient usually show a trade-off relation in TE materials. The defect repair reaction on the SGCNTs and the carrier transport on the repaired SGCNTs are schematically illustrated in Fig. 5.
The combination of two kinds of nanomaterials, i.e., CNTs and metal NPs, has introduced a novel ''repairing effect'' concept based on coverage of the defective sites of the CNTs with NPs. This resulted in an enhancement of the electrical conductivity (from ~90 S cm⁻¹ to ~165 S cm⁻¹ at a Pd content of 10.8 wt%), which provided new TE materials with a high ZT value of ~0.3 starting from inexpensive and defective CNTs (ZT ~ 0.13). This high ZT value of ~0.3, achieved by improving the carrier mobility while maintaining the carrier concentration, is one of the highest ZT values reported for practically applicable organic TE materials. 14,15,28 In fact, at the maximum Pd content (17.4 wt%), the Seebeck coefficient (S) and electrical conductivity (σ) of the Pd@SGCNT hybrid sheet were ~56 μV K⁻¹ and ~160 S cm⁻¹, respectively, resulting in a power factor (PF) of ~52 μW m⁻¹ K⁻². This strategy allows various NPs to self-locate at the exact defective sites of any type of nanomaterial, such as nanotubes, nanowires, and nanosheets. As a result, the coverage with metal NPs makes defective nanomaterials behave as if they were defect-free. This technology, based on the new concept of ''defect repair,'' could provide not only inexpensive practical TE devices composed of repaired defective CNTs, but also extended applications of inexpensive and defective nanomaterials as new electronic materials in various fields, for example, as electrodes for solar cells and lithium-ion batteries.
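As a consistency check on the figures quoted above, the power factor follows directly from PF = S²σ; the short sketch below evaluates it with the reported values (the function name and unit handling are ours, for illustration only).

```python
# Minimal sketch: power factor PF = S^2 * sigma for the Pd@SGCNT sheet
# at the maximum Pd content, using the values quoted in the text.

def power_factor(seebeck_uV_per_K: float, sigma_S_per_cm: float) -> float:
    """Return the power factor in uW m^-1 K^-2."""
    S = seebeck_uV_per_K * 1e-6     # V/K
    sigma = sigma_S_per_cm * 100.0  # S/m
    return S**2 * sigma * 1e6       # W m^-1 K^-2 -> uW m^-1 K^-2

print(power_factor(56.0, 160.0))  # ~50 uW m^-1 K^-2, consistent with the ~52 reported
```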
Conflicts of interest
There are no conflicts of interest to declare.
Generalized solutions of the Dirac equation, W bosons, and beta decay
We study the 7x7 Hagen-Hurley equations describing spin 1 particles. We split these equations, in the interacting case, into two Dirac equations with non-standard solutions. It is argued that these solutions describe decay of a virtual W boson in beta decay.
Introduction
Recently, we have shown that in the free case covariant solutions of the s = 0 and s = 1 Duffin-Kemmer-Petiau (DKP) equations are generalized solutions of the Dirac equation [1]. These wavefunctions are non-standard since they involve higher-order spinors. We have also demonstrated that in the s = 0 case the generalized solutions describe the decay of a pion [2]. The aim of this work is to interpret the spin 1 solutions, possibly in the context of weakly decaying particles.
In the next Section we transform the Hagen-Hurley equations, in the interacting case, into two Dirac equations with non-standard solutions involving higher-order spinors, extending our earlier results described in [1]. These generalized solutions bear some analogy to generalized solutions of the Dirac equation argued to describe a lepton and three quarks [21]. In Section 3 we describe the transition from the non-standard solutions of the two Dirac equations to the Dirac equation for a lepton and the Weyl equation for a neutrino. In the last Section we show that this transition is consistent with the decay of a virtual W boson in beta decay.
In what follows we use the definitions and conventions of Ref. [22].
Generalized solutions of the Dirac equation in the interacting case
We have shown recently that, in the non-interacting case, solutions of the s = 0 and s = 1 DKP equations are generalized solutions of the Dirac equation [1]. In that derivation we split the 10 × 10 DKP equations for s = 1 into two 7 × 7 Hagen-Hurley equations [16][17][18]. Let us note here that in the case of interaction with external fields such a splitting is not possible, since the identities (27) of Ref. [23], which enable the splitting, are not valid in the interacting case. Therefore, we base our theory on the 7 × 7 formulation, see Eqs. (18), (19) in [1] and Subsection 6 ii) in [19]. These equations violate parity P (spatial inversion), and thus one should expect a link with weak interactions. We write one of these 7 × 7 equations (Eq. (19) of Ref. [1]), in the interacting case, in the form of Eq. (1), where it is assumed that χḂḊ = χḊḂ (2), which is the s = 1 constraint.
Conclusions
Results obtained in Sections 2 and 3 cast new light on the Hagen-Hurley equations as well as on weak decays of spin 1 bosons. We have shown that the transition from equation (1), describing a spin s = 1 particle, to equations (7), (8), via the substitution (5) - which means that now s ∈ 0 ⊕ 1 - corresponds to decay of this particle into a Weyl antineutrino, cf. Eq. (7), and a Dirac lepton, cf. Eq. (8). Indeed, it should be a weak decay since Eq. (1) violates parity. The spin of this particle becomes undetermined in the process of decay; more exactly, it belongs to the 0 ⊕ 1 space, which suggests that it is a virtual particle. Therefore, the products, a lepton and an antineutrino, should have total spin 0 or 1, and there should be a third particle to ensure spin conservation. The above description fits a (three-body) beta decay with formation of a virtual W⁻ boson decaying into a lepton and an antineutrino. This is most conveniently explained in the case of a mixed beta decay [26] involving both Gamow-Teller and Fermi transitions, reaction (9), where the products of the W⁻ boson decay (see [27]) are shown in square brackets and (↑) denotes spin 1/2 - this seems to correspond well to the proposed transition from Eq. (1) to Eqs. (7), (8). Since the spin of the products of the decay of the virtual W⁻ boson belongs to the 0 ⊕ 1 space, their spin can be s = 0 or s = 1. Moreover, in the case of the Gamow-Teller transition there must be a spin flip in the decaying nucleon. Let us add here that in the reaction (9) some neutrons (82%) decay according to the Gamow-Teller mechanism while some (18%) undergo the Fermi transition [26]. This mixed mechanism is explained by the decoupled spins of the just-born products - indeed, the condition χ12 = χ21 for the spinor χȦḂ does not hold, due to the substitution (5a), and the spin of the products is in the 0 ⊕ 1 space.
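The statement that the product spins span the 0 ⊕ 1 space follows from the standard decomposition of a rank-2 spinor into its symmetric (spin-1) and antisymmetric (spin-0) parts. The snippet below is a minimal numerical illustration of that decomposition, not code from the original work; the matrix entries are arbitrary.

```python
import numpy as np

# Decompose a generic two-index spinor chi_{AB} (a 2x2 complex matrix)
# into a symmetric part (3 independent components, spin 1) and an
# antisymmetric part (1 component, spin 0). When chi_12 = chi_21 the
# antisymmetric part vanishes and only the spin-1 sector survives.

rng = np.random.default_rng(0)
chi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

chi_sym = 0.5 * (chi + chi.T)    # spin-1 part
chi_asym = 0.5 * (chi - chi.T)   # spin-0 part, proportional to epsilon_{AB}

assert np.allclose(chi, chi_sym + chi_asym)
print("spin-0 amplitude:", chi_asym[0, 1])  # single independent component
print("spin-1 components:", chi_sym[0, 0], chi_sym[0, 1], chi_sym[1, 1])
```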
It is now obvious that another set of 7 × 7 equations, involving the spinor ηAB rather than χȦḂ, see Eq. (18) of Ref. [1], describes a β⁺ decay with an intermediate W⁺ boson. Let us note, finally, that the kinematics of the neutrino appears in the Dirac equation for the electron with an arbitrary neutrino four-momentum, suggesting a continuous distribution of neutrino energy.
Effect of Financial Statements Quality on Information Asymmetry and Investment Efficiency as Moderating Variable in Mining Companies
Efficient investments made by a company's internal parties are expected to improve the quality of the financial statements the company reports and thereby reduce the information asymmetry between the company's insiders and investors. The purpose of this study is to determine the effect of the quality of financial statements on information asymmetry, using investment efficiency as a moderating variable. The study uses mining companies listed on the Indonesia Stock Exchange in 2013-2015. A sample of 84 observations was obtained by the nonprobability sampling method with purposive sampling. The data analysis technique is moderated regression analysis. The results show that the quality of financial reporting has a negative effect on information asymmetry and that investment efficiency strengthens this negative effect. Keywords: investment efficiency, quality of financial reports, information asymmetry DOI: 10.7176/RJFA/11-8-03 Publication date: April 30th 2020
Introduction
The capital market is a place for investors to invest their capital and a facility for investment activities, which play an important role in a country's progress. Investment is the activity of allocating funds owned by individuals or organizations over a certain period in order to obtain a return on the allocated funds (Ivan, 2013). The investment process certainly requires analysis and in-depth calculation with due regard to the principle of prudence. In general, investors will be more willing and feel safer investing in companies that have relatively high profits, because companies with relatively high profits are believed to have good prospects in the future and to allow investors to obtain large returns.
The business sector that is generally attractive to investors as an investment target is the mining sector. Unstable economic conditions, fears of the dangers of pollution to the environment, and competition from cleaner fuels are now making countries around the world slowly move away from coal, which once successfully drove the Indonesian industrial revolution in the mining sector. In 2012, the government issued Ministerial Regulation (Permen) Number 11 of 2012 concerning Amendments to Ministerial Regulation (Permen) Number 7 of 2012, which states that all mining businesses in the mineral sector are required to process and refine domestic mining products and are prohibited from exporting raw mining materials (www.esdm.go.id). After the enactment of the regulation, the mining industry in Indonesia experienced a decline in income. This can be seen in Graph 1, which shows that from 2012 to 2015 the contribution of income in the mining sector continued to decline, and in Table 1.
Table 1. Services: 8.08; Gross Domestic Product (GDP) of 2015: 4.79.

The low revenue contribution generated by mining sector companies can have an impact on the investment activities that investors carry out in these companies. Investors will presumably not be interested in investing, because such an investment is unlikely to provide large profits or returns; investors are only interested in investing in companies that tend to generate large returns, from which they can then obtain large profits.
The motivation of this study is to examine how the quality of financial statements affects information asymmetry, with investment efficiency as a moderating variable, in mining sector companies on the Indonesia Stock Exchange over the 2013-2015 observation years. This period was chosen because of the phenomenon that occurred in 2012, namely the enactment of Ministerial Regulation No. 11 of 2012, which caused a decrease in the revenue contribution of the mining sector from 2013 to 2015. This will certainly affect the efficiency of investment by companies, viewed through the quality of financial statements and their influence on information asymmetry.
The formulation of the research problem is how the quality of financial statements influences information asymmetry and how investment efficiency moderates that influence. The research objective is to obtain empirical evidence of the negative influence of the quality of financial statements on information asymmetry and of the ability of investment efficiency to moderate this influence.
Agency Theory
Agency theory is a contract in which the principal gives authority in making decisions to the agent to carry out a number of jobs on behalf of the principal (Jensen & Meckling, 1976). The main principle of agency theory is the existence of an employment relationship in the form of a cooperation contract between the principal and the agent, referred to as the "nexus of contract". If the principal and the agent involved in the cooperation contract try to obtain their respective interests to the maximum, then there will be a tendency for the agent to not always act in the best interests of the principal.
Investment Efficiency
Investment is an activity carried out by the company in an effort to develop and increase the value of the company. Investment activities should be carried out efficiently in order to achieve the expected level of profit. Investment efficiency is the optimal level of investment that is able to provide benefits to the company (Sari & Suaryana, 2014). There are two factors that can determine investment efficiency. First, companies need to raise business capital to finance investment opportunities. Second, if the company makes an investment decision that is believed to increase capital but there is no certainty that the investment is in accordance with its needs, then an appropriate investment decision is needed (Sari & Suaryana, 2014). Investment decisions made by a company should be based on the accounting information circulating on the exchange. The financial statements presented by the company must reflect the financial position and the actual condition of the company so that the company is able to carry out investment activities optimally and does not experience underinvestment or overinvestment.
Information Asymmetry
Information asymmetry is a condition in which the information in the company's financial statements reported by management does not reflect the actual condition of the company. High-quality financial reports are able to minimize the emergence of information asymmetry between company management and investors (Verdi, 2006). High-quality financial statements can make the investment decisions taken by companies more optimal and can improve the monitoring function in overseeing the activities of company management.
Quality of Financial Reports
An important component for a company to consider is the quality of its financial statements. Financial statements can be said to be of high quality if they present the actual condition of the company. The high quality of a financial statement can minimize information asymmetry, adverse selection, and moral hazard, and can enable companies to identify various investment opportunities better. According to Gomariz & Ballesta (2013), the high quality of financial statements enables better supervision or monitoring by shareholders, so that company management becomes more responsible in presenting a financial statement.
Hypothesis Development
2.5.1 The Effect of the Quality of Financial Statements on Information Asymmetry
The presentation of quality financial information is expected to reduce information asymmetry between company management and investors (Cohen, 2003). Research by Setiany & Wulandari (2015) also found the same results, namely the quality of financial statements has a negative influence on information asymmetry. This means that the high quality of a financial statement can reduce the level of information asymmetry between company management and investors. Research by Fanani (2009) also shows the same results, namely the presentation of higher quality financial statements can reduce information asymmetry. According to Amrullah & Fatima (2015), the high quality of a company's financial statements can minimize the emergence of information asymmetry so that investment activities carried out by investors reach optimal levels. However, it is different from the results of Indriani & Khoiriyah's research (2010) which shows that the quality of financial statements represented by the relevance of values, timeliness, and conservatism has a positive influence on information asymmetry. The results of Kusuma et al. (2014) and Santoso (2012) also show that the quality of financial statements does not have a significant effect on information asymmetry. The first hypothesis of this research is: H1: The quality of financial statements has a negative effect on information asymmetry.
Investment Efficiency Strengthens the Effect of Financial Statement Quality on Information Asymmetry
High-quality financial statements can reduce the level of information asymmetry between company management and investors because such statements describe the company's actual condition, so that there is no difference in the information obtained by company management and by investors. The low information asymmetry that accompanies higher-quality financial statements allows the investments made by investors to be more optimal and to yield the returns investors want. According to Tao Ma (2012), a company should pay attention to the quality of its financial statements and the associated low level of information asymmetry so as to facilitate the investment activities carried out by investors. This statement is in line with the results of Bushman & Smith (2003), who explain that quality financial reports directly make company management more accountable, thereby reducing information asymmetry and the level of moral hazard in determining investment opportunities.
Investments made efficiently by the company's internal parties can strengthen the effect of higher-quality financial statements in lowering the information asymmetry between company management and investors. According to Butar (2015), financial statements have an important role in reducing investment inefficiency arising from underinvestment and overinvestment. The efficiency of the investments made by internal parties in developing the value of a company's investments can affect the quality of the accounting information presented in the financial statements. The presentation of quality financial statements by the company's internal parties will, in turn, lower the level of information asymmetry between the company's internal parties and investors, as seen from the level of the spread. The second hypothesis of this study is: H2: Investment efficiency strengthens the effect of financial statement quality on information asymmetry.
Research Methods
This research uses a quantitative, associative approach. The research objects are mining sector companies listed on the Indonesia Stock Exchange for the period 2013-2015, with data obtained through the official website www.idx.co.id. The focus of the research is investment efficiency as a moderator of the effect of the quality of financial statements on information asymmetry. The data are quantitative and come from secondary sources. The study uses investment efficiency as the moderating variable, the quality of financial statements as the independent variable, and information asymmetry as the dependent variable.
Figure 2. Research Conceptual Framework

This study uses three years of observations, namely 2013, 2014 and 2015. This is due to the phenomenon of the issuance of Ministerial Regulation No. 11 of 2012, which resulted in the contribution of mining sector revenue decreasing continuously after the enactment of the regulation, namely from 2013 to 2015, so that it also has an impact on the efficiency of investments made by mining companies. The research population is all companies in the mining sector on the Indonesia Stock Exchange from 2013 to 2015, a total of 41 companies. Samples were determined by the nonprobability sampling method using a purposive sampling technique, yielding 84 observations over the three years of observation.
As for the criteria for sample selection, the first criterion is mining companies that published audited annual financial statements in 2013-2015. This is because the audited financial statements of a company have been examined by an independent auditor, a third party outside the company, so that they present all information about the company's performance in a relevant and reliable manner. The second criterion is mining companies whose annual financial statements present in full the information needed to calculate the research variables.
Based on Table 2, the total population of the study is 123 firm-year observations (41 companies over three years), but 33 observations are excluded because the companies did not publish audited annual financial statements in 2013-2015 and 6 because their financial statements present incomplete information for calculating the research variables. Thus, the number of research samples for the three years of observation is 84. The moderated regression model is:

Y = α + β1X1 + β2X2 + β3X1X2 + e

Information:
Y : Information asymmetry
X1 : Quality of financial statements
X2 : Investment efficiency
X1X2 : Interaction between the quality of financial statements and investment efficiency
e : Standard error

Measurement of the investment efficiency variable uses an investment model that is a function of growth opportunities (Biddle et al., 2009). The investment model is as follows:

Investment(i,t+1) = β0 + β1 * Sales Growth(i,t) + e(i,t+1)

Information:
Investment(i,t+1) : Total investment in fixed assets
Sales Growth(i,t) : Percentage change of the company's current sales value relative to the previous year

Investment efficiency, proxied using this investment model, is measured by the residual value obtained from the model.
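To make the residual-based measure concrete, the following sketch shows how investment efficiency could be obtained as the residual of a Biddle et al. (2009)-style investment regression; the data frame, column names, and values are illustrative assumptions, not the authors' actual data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch: investment efficiency as the residual of an
# investment model driven by sales growth. 'df' is assumed to hold one
# row per firm-year with these (made-up) columns.
df = pd.DataFrame({
    "investment_next": [0.12, 0.08, 0.20, 0.05, 0.15],   # Investment(i,t+1)
    "sales_growth":    [0.10, 0.02, 0.25, -0.05, 0.12],  # Sales Growth(i,t)
})

model = smf.ols("investment_next ~ sales_growth", data=df).fit()

# The residual is the firm-year's deviation from the expected investment
# level; it serves here as the investment-efficiency proxy.
df["inv_eff_residual"] = model.resid
print(df)
```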
The quality of financial statements can be proxied by the quality of accruals (Biddle & Hilary, 2006). This study follows the accrual measurement model of Kothari et al. (2005) to measure total accruals. Higher total accruals indicate lower quality of the financial statements presented by a company; conversely, lower total accruals indicate higher quality. The accrual model of Kothari et al. (2005) used to measure the quality of financial statements is as follows:

TA(i,t) = a0 + a1[1/ASSETS(i,t-1)] + a2 ΔSALES(i,t) + a3 PPE(i,t) + a4 ROA(i,t or i,t-1) + ε(i,t)

Information:
TA(i,t) : Total accruals of company i in year t
ΔSALES(i,t) : Change in the company's current sales value relative to the previous year
PPE(i,t) : Net value of the company's total fixed assets in year t
ROA(i,t or i,t-1) : Performance measure derived from the rate of return on assets
ASSETS(i,t-1) : Total assets of the company in the previous year

The calculation of each variable used to obtain the residual value as a measure of the quality of financial statements is as follows:
TA(i,t) : Net Income − Cash Flow from Operations
ΔSALES(i,t) : Sales(t) − Sales(t-1)
ROA(i,t) : Net Income / Total Assets
PPE(i,t) : Cost of Fixed Assets − Depreciation of Fixed Assets

All of the variables above are divided by ASSETS(i,t-1) to prevent heteroscedasticity in the residual value to be obtained (Kothari et al., 2005). Thus, the residual value is calculated as follows:
Residual Value : TA(i,t)/ASSETS(i,t-1) − ΔSALES(i,t)/ASSETS(i,t-1) − PPE(i,t)/ASSETS(i,t-1) − ROA(i,t)/ASSETS(i,t-1)

The residual value obtained is taken in absolute terms, giving the discretionary accrual value used as the measure of the quality of financial statements (Dechow & Dichev, 2002). In general, the bid-ask spread is used to measure the level of information asymmetry explicitly (Leuz & Verrecchia, 2000). This is because the bid-ask spread reflects the adverse selection problem that arises from information asymmetry between company management and investors. Low information asymmetry results in low adverse selection and a low bid-ask spread. The bid-ask spread used to measure the information asymmetry variable is calculated as follows (Howe & Lin, 1992):

SP(i,t) = (AP(i,t) − BP(i,t)) / ((AP(i,t) + BP(i,t))/2)

Information:
SP(i,t) : Spread of company i at time t
AP(i,t) : The highest asking price of company i shares at time t
BP(i,t) : The lowest bid price of company i shares at time t

Based on Table 8, the moderated regression equation in this study is:

Y = 1.247 − 0.893X1 − 1.050X2 + 1.239X1X2 + e

Based on the moderated regression equation above, the first hypothesis states that the quality of financial statements has a negative influence on information asymmetry. The test results show that the quality of financial statements, measured by total accruals, has a regression coefficient of −0.893 with a probability of 0.007. This means that if the other variables are held constant, every 1 percent increase in the quality of financial statements results in a decrease in information asymmetry of 0.893 percent. The significance level of 0.007 is less than 0.05, so H1 is accepted. This shows that the quality of financial statements has a negative effect on information asymmetry.
Moderated Regression Analysis
The second hypothesis states that investment efficiency moderates positively or strengthens the effect of the quality of financial statements on information asymmetry. The test results show that the product of the quality of financial statements with investment efficiency has an interaction coefficient of 1.239 with a probability of 0.017.
This means that the effect of the interaction between the quality of financial statements and investment efficiency is significant, as seen from the probability value of 0.017, while the direction of the interaction, as seen from the interaction coefficient of 1.239, is toward increasing rather than decreasing information asymmetry. The significance level of 0.017 is less than the 5 percent significance level, so H2 is accepted. This shows that investment efficiency strengthens the influence of the quality of financial statements on information asymmetry.
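As an illustration of how such a moderated regression could be estimated, the sketch below fits an interaction model of the same form with statsmodels; the data, column names, and resulting coefficients are placeholders, not the study's actual data or output.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative moderated regression analysis (MRA):
#   spread ~ frq + inv_eff + frq:inv_eff
# 'spread' stands in for the bid-ask spread (information asymmetry),
# 'frq' for the financial-reporting-quality proxy, and 'inv_eff' for the
# investment-efficiency proxy. All values are made up for demonstration.
df = pd.DataFrame({
    "spread":  [0.9, 1.1, 0.7, 1.4, 0.6, 1.0],
    "frq":     [0.3, 0.1, 0.5, 0.05, 0.6, 0.2],
    "inv_eff": [0.2, 0.1, 0.4, 0.05, 0.5, 0.3],
})

mra = smf.ols("spread ~ frq * inv_eff", data=df).fit()  # '*' adds main effects and interaction
print(mra.params)    # intercept, frq, inv_eff, frq:inv_eff
print(mra.pvalues)   # significance of each coefficient
```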
The Effect of the Quality of Financial Statements on Information Asymmetry
The results of testing the first hypothesis explain that the quality of a company's financial statements negatively affects information asymmetry, so the first hypothesis (H1) of this study is accepted. This means that the higher the quality of a financial statement, the lower the information asymmetry. According to Gomariz & Ballesta (2013), high-quality financial statements enable better supervision or monitoring by shareholders, so that company management becomes more responsible in presenting a financial report. If company management presents quality financial reports, investors obtain information from financial statements that present the actual condition of the company, so that the level of information asymmetry between the two parties is lower and investors are able to carry out their investment activities efficiently and optimally.
The results of the study are consistent with the results of the research of Setiany & Wulandari (2015) which shows the quality of financial statements has a negative influence on information asymmetry. According to Cohen (2003), the presentation of quality financial information is expected to reduce information asymmetry between company management and investors. This means that the high quality of a financial statement can minimize the level of information asymmetry between company management and investors. Research by Fanani (2009) also shows the same results, namely the presentation of higher quality financial statements can reduce information asymmetry.
Investment Efficiency Strengthens the Effect of Financial Statement Quality on Information Asymmetry
The results of the second hypothesis test show that investment efficiency strengthens the effect of the quality of financial statements on information asymmetry. Financial statements presented with high quality by company management can minimize information asymmetry, adverse selection, and moral hazard. Investment efficiency achieved by internal parties in developing the company's investment value is expected to go hand in hand with the presentation of higher-quality financial statements by management, because investment in fixed assets made by management reaches efficiency only if the information in the financial statements describes the financial position and the actual condition of the company. The presentation of quality financial statements by internal parties will lower the level of information asymmetry between the company's management and investors, as seen from the level of the spread. A high spread reflects high information asymmetry, while a low spread reflects low information asymmetry. Investors are then expected to be able to carry out their investment activities optimally based on the accounting information in the financial statements presented by the company.
The results of the study are consistent with the results of Tao Ma (2012), who states that a company should present financial reports containing quality accounting information so that it can facilitate optimal investment activities by internal parties and affect the level of information asymmetry between the company management and investors. Bushman & Smith (2003) also explain that quality financial reports directly make company management more accountable, thereby reducing information asymmetry and the level of moral hazard in determining investment opportunities.
Conclusions and Suggestions
Based on the results and the discussion of the data analysis performed with SPSS version 13.0 for Windows outlined in the previous sections, the research conclusions are that the quality of financial statements has a negative effect on information asymmetry and that investment efficiency strengthens this negative effect.
A suggestion based on the conclusions of the study is to add other independent variables, or to find more appropriate independent variables, because the coefficient of determination (R²) is relatively small at 10.2 percent, indicating that only 10.2 percent of the variation of the dependent variable can be explained by the independent variables used in this research. Future studies are also advised to use a wider range of samples so that they can provide more precise and accurate results that can be generalized more broadly and provide greater benefits to readers.
What Determines Salt and Pepper Passage? A Brief Commentary on the Published Reports
In the 1970s, two similar reviews of an interesting ostensible research literature were published. They claimed to show that a number of factors had been identified to explain how salt is passed on request. Recently, this matter has been taken up again with further reports of new factors suggested to influence the behavior. Moreover, the ideas have been extended to pepper passage. This paper comments critically on these writings.
According to Deese, psychology involves both science and art, which places it in the humanities as well as in the natural sciences [1]. Around the same time, two papers appeared with parallel reviews of the ostensible literature explaining the process by which salt is passed upon request [2,3]. Reflecting the theme of Deese's title, the authors observed that this has been a humanistic topic of philosophical debate, but, based on formal psychological theories, they also made a compelling and interesting case that many scientific empirical studies had collectively identified factors affecting salt passage: in particular, the politeness of the request, the number of people present, and both the attitudes and the race of sender and receiver. In the tradition of the typical psychological study, the authors concluded by making suggestions for future research.
These two satirical papers were widely read and enjoyed by the psychological community, which recognized the clever application of psychological theory and research methods to what on the surface seems like a rather trite topic. However, the issue lay dormant until very recently, when what almost seems like a spate (if three publications in rapid succession is a spate) of papers has appeared [4][5][6]. The purpose of the present note is to critically review these recent works.
Current Publications on Salt Passage
In the spirit of openness and transparency, my attention was drawn to these papers because research of [7] was cited. On looking up the source [6], I was led to the other two.
The first of the three recent works [4] is an investigation of the role that sex of requester and sex of sender might play in the behavior of passing the salt. In this proposal for a research study, Minér anticipated an opposite-sex effect, perhaps due to the factor of attraction. That is, salt passing would be faster when a male asks a female than when a male asks another male, and would also be faster when a female asks a male than when a female asks another female. This paper also generalized the discussion of the passage of salt to the passage of pepper, arguing that pepper would be passed more slowly than salt because it is less common to shake pepper over the chips and peanuts that would be present on the table in the experimental situation. It was speculated that pepper may be more likely to cause sneezing than salt, interfering with response time. This proposal was presented in some detail, with numbers backing up the predictions.
In the second publication, Minér et al. [6] propose generalizing the work in another way: to investigate the attraction hypothesis directly by experimentally manipulating the attractiveness of the requester. This would be accomplished by creating an extra-long nose for half of the conditions. That is, the request is made by a person with a long nose or a normal short nose. My work [7] was cited because of the finding that schematic faces with long noses were rated as less attractive than faces with short noses. In addition, like Minér [4], Minér et al. included pepper passage along with salt passage. It was speculated that the combination of a long nose and pepper is special because together they might encourage more sneezing, causing a marked slowing of response time over and above the two main effects. This implies a significant interaction between nose length and substance. Again, quantitative backup was provided for the predictions, with longer response times for longer noses and for pepper. However, in these numbers, the slowing effects of nose and pepper were actually independent and additive. Because the data were said to be entered to reflect predictions, this inconsistency with the expected interaction is puzzling.
The third paper in this recent flurry of activity [5] presented a detailed expected report based on the suggestions offered by Minér et al. [6]. Consistent with predictions, it was stated that passing times were slower for pepper than for salt and, in line with attractiveness theory, also slower for the long-nosed requester than for the short- (normal-) nosed requester. As with [6], the interaction of nose and substance was not significant, so the patterns in the entered data again did not match the prediction. However, there was also a second anomaly: for both main effects, the numerical data were opposite to the described results. In the mean scores reported in a table, passing times were faster for pepper than for salt and for the long nose than for the short nose.
Conclusion
Taken together, this series of papers reports intriguing but anomalous results on what seems to be an esoteric topic. Everyone who read the two original papers [2,3] appreciated the work and acknowledged its subtleties. The recent papers seem to follow along the same line, extending it further.
Research anomalies are not unusual. For example, results may be statistically significant, leading people to conclude that they are important, when the effect size is actually small. Or a highly significant and even large effect may be reported but is not replicated. Indeed, some error is always present in research: for example, sampling error may occur in choosing subjects, and Type I and Type II errors may occur in decision-making [8]. However, it is rare to see internal contradictions like the two identified above. Could they be transcription slips that escaped attention?
Overall, the solution to error is greater vigilance by editors and reviewers, and of course by the data gatherers, data analyzers, and writers themselves. It is not possible to eliminate all errors in research, but they can surely be minimized. At the same time, the unconventional but careful use of anomaly may draw attention to weaknesses in the research process while simultaneously highlighting that a linguistic device (ironic errors) can be part of the critic's toolkit [8]. Consider Pencil's [3] and Pacanowsky's [2] subtitle following their main title: "Salt passage research: The State of the Art" (italics mine). Surely James Deese would approve [1].
Evaluating Different Visualization Designs for Personal Health Data
With the massive development of sensing technology and the availability of self-tracking devices and apps, interest in personal data collection has widely increased. However, the data representation methods on these tracking devices and apps have many limitations. Our research concentrates on evaluating different data visualization alternatives that could be used to represent the tracked physical activity data (e
INTRODUCTION
Self-tracking is becoming more interesting recently due to low-cost wearable self-tracking devices and the ability to track different personal data using smartphones (Larsen et al., 2013). Many apps, which are available on the major app stores, are developed to help the user collect different types of personal data. For example, capturing data on sports and physical activity (e.g. Fitbit, Jawbone, Runkeeper), sleep (e.g. Sleep Cycle), food and liquid consumption (e.g. MyFitnessPal, Waterlogged), locations (e.g. Moves), time tracking (e.g. Hours) and many others.
The intensive use of these devices and apps and the ability to access information anywhere using the Internet result in a huge amount of personal data such as habits and behaviours (Li, Dey, & Forlizzi, 2011). All of the data collected could be used for different purposes such as self-reflection to help in decision making, increasing self-awareness or changing behaviour in different domains including health and energy (Li et al., 2011).
In this project, we aim to concentrate on a very important aspect of personal data, which is health data. Personal health data is beneficial not only to the person himself, but also to others who have an interest in the data, such as other patients and clinicians (Zhu et al., 2016). It can also be shared with friends and family members. According to (IQVIA Institute, 2017), the number of mobile health-related apps available today in the market has exceeded 318,000, with around 200 apps being added every day. In addition to this increase in health apps, there are around 340 wearable devices worldwide (IQVIA Institute, 2017). These health apps and devices often collect different types of health data, which can be automatically generated by the sensors (such as step count and heart rate) or entered manually by the user (such as nutrition and calorie intake). Therefore, they generate large collections of complex health-related metrics. A major challenge in dealing with such a large volume of complex data is interpreting it and extracting useful knowledge about users' personal health.
Information visualization has always played an important role in communication. It offers the methods and tools that can be used to represent the data and to generate information (Mazza, 2009). Since the human visual system has the ability to perceive visual attributes very well (Mazza, 2009), visualization can become powerful in helping people to gain an understanding of the data. Most self-tracking devices and their companion apps/dashboards provide different forms of data representation (e.g. the Fitbit bar chart). However, these visualizations do not always fulfil users' needs, as presented in (Li et al., 2011). Some users who faced difficulties in understanding their data developed customized personal visualizations to represent and reflect on their data, as discussed in (Choe et al., 2014).
The structure of the paper is as follows. We start by stating the problem and research questions, then we present a background on personal health tracking and visualization, followed by a discussion of our research methodology. We then discuss the initial results from the preliminary stages of the research.
We conclude with a discussion on the main contribution of the research.
RESEARCH OBJECTIVE AND QUESTIONS
The essential aim of our research is to evaluate visualization design alternatives to represent multivariate personal and shared physical activity data for non-expert users. The main research questions are as follows:
• RQ1: What are users' preferences in the visualization of personal health data? And what are the limitations of the visualization methods supported by the most popular apps and dashboards?
• RQ2: What are the differences between visualization methods that could be used to represent multivariate personal physical activity data in terms of users' performance and users' preferences?
In the methodology section, we describe the methods used to answer the research questions.
RELATED WORK
The focus of our research lies at the intersection of two research areas: personal health visualization and the evaluation of information visualization techniques. In the following, we present a brief overview of the work most related to our research.
Recent developments in wearables and sensing technology facilitate collecting multifaceted data about oneself. An important type of these data is related to health and physical activity (e.g. activity levels, heart rate and sleep), which is supported by a wide range of wearable devices like fitness bracelets and smart watches, in addition to a variety of apps that are freely available on app stores. The availability of personal health data has led to increased awareness and responsibility of people and patients for their health (Shneiderman et al., 2013). Many research projects have focused on the visualization of these personal data and on users' requirements in visualization designs (Choe et al., 2015; Choe et al., 2014; Epstein et al., 2014; Li et al., 2011).
Different visualization designs have been proposed to address different challenges in the field, such as overcoming the limitations of the statistical charts widely used by apps/dashboards. For example, Meyer et al. (2016) explored the challenges of visualizing complex health data on mobile devices. They developed metaphoric and quantitative visual designs to support both long- and short-term data representation. The outcomes of this study showed that users are interested in using more advanced visualizations that could reveal more complex relations in the data and in having additional features such as filtering (Meyer et al., 2016). Another example is a study conducted by Fan et al. (2012), who used an informative display to apply the concept of abstract art to Fitbit physical activity data. They developed Spark, an online visualization tool that provides four different designs of abstract representations (spiral, rings, bucket and Pollock) in addition to bar charts, and investigated users' experience of using the two types of visualization with their own Fitbit data. The results showed that different visualizations were suitable for different purposes. Fan et al. argued that abstract visualization is more aesthetic and suitable for a glanceable display, while the bar chart was required for gaining a detailed view or looking for specific information. However, the evaluation was restricted to only 6 participants. Similarly, Tong et al. (2015) evaluated and compared three types of visualizations, which were the Fitbit bar chart, a circular ringmap and a virtual pet visualization, considering different factors such as readability and attractiveness. The results showed that there was no relation between the efficiency of the visualization and the participants' preferences and subjective feedback.
Larsen et al. ( 2013), focussed on revealing the continuity and the periodic characteristics of selftracked data by representing Fitbit physical activity data on a spiral, where circles refer to a time span.The interactive visualization proposed helped in discovering the periodic event in the data (Larsen et al., 2013).Huang et al. (2016) proposed a new design for integrating a visualization of Fitbit data as an additional layer with a personal calendar to help people to easily reason about regular and irregular patterns in the visualization.
Other personal health visualization research focused more on abstract representations and used living metaphors to reflect the users' current level of activity, such as UbiFit (Consolvo et al., 2008), which used flowers and garden metaphor to represent user's physical activity data and Fish 'n' Steps (Lin et al., 2006) that used virtual fish to represent each user's step count.The aim of using these living metaphors was to motivate the users to increase their level of physical activity by making them attached to these metaphors.However, it may also encourage negative emotions (e.g.guilt), as discussed in (Lin et al., 2006).
Data visualization is an important aspect in personal health technology design space (Bardram & Frost, 2016).Although there have been many researches discussing the visualization of personal health data and other discuss the effectiveness of different visualization methods such as bar chart and line chart, we aim to focus on the evaluation of the design alternatives for representing different variables of health data including subjective feedback of the users.Our research concentrate on evaluating the most popular data representations methods provided by health tracking apps and dashboards in term of personal health context.We evaluate the use of both line chart and bar chart in representing three variables of physical activity in the traditional linear format and in the radial format based on clock metaphor.
METHODOLOGY
Our methodology for answering the research questions combines qualitative and quantitative research approaches. We employ a literature survey, online questionnaires, autoethnography and a systematic evaluation of the visualizations. In the following, we describe these methods and how they are applied to answer the questions.
In our literature review, we started from the survey by Huang et al. (2015), which explores 50 publications in the fields of personal visualization and personal visual analytics to identify research trends and gaps. We considered the papers cited in this work that are related to personal health visualization and then moved to their citations and the papers citing them. We also searched HCI and visualization conferences and journals for related work to identify the gaps in personal health visualization.
Online questionnaires are used at different stages of our research. The first questionnaire was deployed before the visualization implementation phase and aims to better understand users' requirements and preferences regarding different visualization designs for physical activity data. It also includes questions about participants' tracking behaviour and the tools they use. Further questionnaires will be part of the evaluation study and will include both demographic questions and questions to collect participants' feedback about the developed designs.
Moreover, we conduct an autoethnography study in which we review the most popular health and physical activity tracking devices and their companion apps with respect to the visualizations they provide, in order to report on the limitations and research opportunities in personal health data representation and sharing.
Our evaluation is based on previous visualization research that studies the effectiveness of different visualization designs through visual task-based evaluation studies, such as (Borgo et al., 2012; Heer, Kong, & Agrawala, 2009; Srinivasan et al., 2018).
We compare the different visualizations by measuring task completion time and accuracy (i.e. error rate) when users perform a set of tasks on each visualization. The visualizations will be evaluated in accordance with a taxonomy of tasks for multidimensional visualization (Valiati, Pimenta, & Freitas, 2006). In addition, we collect users' feedback about the visualizations before and after the experiment using questionnaires, to qualitatively evaluate the design choices and to compare users' preferences in the visualizations with their performance. Our participants will be recruited from students and staff across all colleges and departments of the university.
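As an illustration of how these two conventional metrics can be aggregated per visualization design, consider the sketch below. The task-log layout, column names and values are hypothetical and do not come from the study software.

```python
import pandas as pd

# Hypothetical task log: one row per (participant, visualization, task).
# Column names are illustrative assumptions, not the study's actual schema.
log = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "visualization": ["linear_stacked_bar", "radial_overlapped_bar"] * 3,
    "correct": [1, 0, 1, 1, 0, 1],                   # 1 = correct answer, 0 = error
    "time_s": [14.2, 21.5, 11.8, 19.3, 16.4, 23.1],  # task completion time in seconds
})

# Aggregate the two conventional metrics per visualization design:
# error rate = 1 - mean(correct), and mean completion time.
summary = log.groupby("visualization").agg(
    error_rate=("correct", lambda c: 1 - c.mean()),
    mean_time_s=("time_s", "mean"),
)
print(summary)
```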
LIMITATION OF OUR METHODOLOGY
Evaluating personal visualization is challenging. Varied metrics need to be measured to justify design choices in the personal visualization space, such as how well a design fits into users' daily lives. According to Huang et al. (2015) and Thudt et al. (2017), relying on conventional metrics (i.e. task completion time and accuracy) for evaluating personal visualization is not enough. In our research, we focus on the effectiveness of different visualization layouts for physical activity data. Therefore, we measure accuracy and task completion time to compare the proposed visualizations. This restricts our evaluation to a fixed dataset for the visualizations and prevents us from conducting the evaluation in a personal context, such as using participants' own health data with the visualizations. However, to strengthen our evaluation, we also investigate metrics related to users' personal preferences.
INITIAL RESULTS
In this section, we present a brief summary and outline of the results obtained from the preliminary stages of our research.
Results from the questionnaire
This questionnaire was deployed before the development of the visualizations. Its aim is to better understand users' requirements and preferences regarding different visualization designs for physical activity data, for personal use and for sharing purposes, by presenting several designs and collecting participants' opinions about the visual representations. We asked participants about their preferences among the widely used traditional statistical charts, such as bar, line and pie charts, and other abstract representations, such as the use of metaphors and abstract art.
The questionnaire was circulated during November and December 2016, and we collected responses from 84 participants. Of the 84 participants, 53 chose traditional statistical charts and only 1 participant chose abstract visualization, while the rest (around 36%) suggested that both would be helpful. Regarding preferences within each type, the most preferred were the line chart (57%) and the bar chart (52%), while the least preferred were the radial pie chart (12%) and the pie chart (18%); 6% of the participants did not indicate any preference. For abstract representations, we presented four design options: a flowers-and-garden metaphor, living metaphors, clock and calendar metaphors, and abstract art. The clock and calendar metaphors were the most preferred type of abstract visualization, chosen by 41 participants (49%); 27 participants (32%) chose abstract art, 14 (17%) preferred living metaphors, and only 9 participants (11%) preferred the flowers-and-garden metaphor. Seventeen participants (20%) had no preference among the presented abstract visualizations.
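As a quick arithmetic check, the reported abstract-visualization percentages follow directly from the participant counts given above (out of 84 respondents):

```python
# Participant counts reported above; percentages are counts / 84 respondents.
counts = {
    "clock and calendar metaphors": 41,
    "abstract art": 27,
    "living metaphors": 14,
    "flowers and garden metaphor": 9,
    "no preference": 17,
}
total = 84
for option, n in counts.items():
    print(f"{option}: {n}/{total} = {100 * n / total:.0f}%")
```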
Open-ended questions revealed more about the reasons behind participants' choices, such as their main aims when using charts for their data. The answers covered various topics; the three main trends we identified are: (1) Data Comprehension and Gaining Knowledge, (2) Aesthetic View and Entertainment, and (3) Personal Preferences, with sub-topics in each category. The traditional charts were preferred for their familiarity, clarity, ability to provide a detailed view, and ease of understanding. On the other hand, abstract visualization, although reported as having an aesthetic appeal, was considered confusing and ambiguous and seen as better suited to other purposes such as entertainment.
Results from the pilot study
We conducted a pilot study with 16 participants from Cardiff University. We recruited participants among students and staff in the department of computer science and informatics (7 male and 8 female). We developed eight visualizations for three physical activity variables: step count, heart rate and active calories. We used bar charts and line charts to represent the data on an hourly basis. We applied these charts in a traditional linear layout and in a radial layout based on a clock metaphor. We also visualized the variables in two different ways: in the first, the three variables overlap by sharing the same visual space; in the second, they are visualized separately, with each variable having its own space. The 16 participants were assigned to four groups, where each group saw the visualizations in a different order.
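To make the distinction between the two layouts concrete, the sketch below draws an hourly bar chart of a single variable in both the linear layout and the radial (clock-metaphor) layout. The synthetic step-count values and styling choices are illustrative assumptions, not the visualizations used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic hourly step counts for one day (illustrative data only).
hours = np.arange(24)
steps = np.random.default_rng(0).integers(0, 1200, size=24)

fig = plt.figure(figsize=(9, 4))

# Linear layout: conventional hourly bar chart.
ax1 = fig.add_subplot(1, 2, 1)
ax1.bar(hours, steps)
ax1.set_xlabel("hour of day")
ax1.set_ylabel("step count")
ax1.set_title("Linear layout")

# Radial layout: the same bars arranged around a clock face.
ax2 = fig.add_subplot(1, 2, 2, projection="polar")
theta = 2 * np.pi * hours / 24          # one bar per hour
ax2.bar(theta, steps, width=2 * np.pi / 24, bottom=200)
ax2.set_theta_zero_location("N")        # midnight at the top
ax2.set_theta_direction(-1)             # clockwise, like a clock
ax2.set_xticks(theta[::3])
ax2.set_xticklabels(hours[::3])
ax2.set_title("Radial (clock metaphor) layout")

plt.tight_layout()
plt.show()
```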
The preliminary analysis of the pre-experiment questionnaire data shows that preferences were limited to a few of the presented visualizations. Four visualizations were preferred, in the following order: 8 participants liked the linear stacked bar chart, 4 participants preferred the linear overlapped line chart, and 4 participants preferred the linear stacked line chart and the radial overlapped bar chart. The other four visualizations were not chosen by any participant.
Participants' preferences changed after conducting the experiment. The data show that the most preferred visualization is the linear stacked bar chart, preferred by 10 participants, followed by the linear overlapped line chart, preferred by 6 participants, and then the linear stacked line chart, selected by 5 participants. The radial stacked bar chart was the most preferred among the radial layouts, chosen by 4 participants, while the least preferred was the radial overlapped bar chart, chosen by only one participant; the remaining two radial visualizations were each preferred by two participants.
We aim to continue the data analysis to include the data collected by our software (i.e. task answers and completion times) and then to compare the answers given by each participant with their responses to the tasks during the experiment. This study will then be conducted with a larger sample of participants.
CONTRIBUTION
The main contribution of my PhD thesis is a systematic investigation of visualization design alternatives for representing multivariate physical activity data. The research goes through several stages, starting from an initial investigation of users' preferences in the visualization of personal physical activity data and the identification of the pros and cons of the visualizations provided by the most popular physical activity tracking devices and their companion apps and dashboards, to a task-based evaluation of different visualization designs. The evaluation includes quantitative analysis of task completion time and error rate, as well as qualitative analysis of users' preferences and feedback before and after using the visualizations. The output of this evaluation will contribute to identifying guidelines for designing visualizations that represent physical activity data.
As noted by Huang et al. (2015), current visualization designs are formulated by designers who decide what information to visualize and how to visualize it, without involving users or considering their perspective. In this thesis we aim to include users in the evaluation by incorporating participants' qualitative feedback. We also investigate the relationship between users' preferences and the effectiveness of the visualizations. We aim to examine the effect of different visual elements and of combining or separating variables in one visual layout, and how this may help users find relationships between different physical activity variables.
The impact of the COVID-19 pandemic on influenza, respiratory syncytial virus, and other seasonal respiratory virus circulation in Canada: A population-based study
Background The ongoing coronavirus disease 2019 (COVID-19) pandemic has resulted in implementation of public health measures worldwide to mitigate disease spread, including travel restrictions, lockdowns, messaging on handwashing, use of face coverings and physical distancing. As the pandemic progresses, exceptional decreases in seasonal respiratory viruses are increasingly reported. We aimed to evaluate the impact of the pandemic on laboratory-confirmed detection of seasonal non-SARS-CoV-2 respiratory viruses in Canada. Methods Epidemiologic data were obtained from the Canadian Respiratory Virus Detection Surveillance System. Weekly data from the week ending 30th August 2014 until the week ending 13th March 2021 were analysed. We compared trends in laboratory detection and test volumes during the 2020/2021 season with pre-pandemic seasons from 2014 to 2019. Findings We observed a dramatically lower percentage of tests positive for all seasonal respiratory viruses during 2020-2021 compared to pre-pandemic seasons. For influenza A and B the percent positive decreased to 0.0015 and 0.0028 times that of pre-pandemic levels respectively, and for RSV the percent positive dropped to 0.0169 times that of pre-pandemic levels. Ongoing detection of enterovirus/rhinovirus occurred, with regional variation in the epidemic patterns and intensity. Interpretation We report an effective absence of the annual seasonal epidemic of most seasonal respiratory viruses in 2020/2021. This dramatic decrease is likely related to implementation of multi-layered public health measures during the pandemic. The impact of such measures may have relevance for public health practice in mitigating seasonal respiratory virus epidemics and for informing responses to future respiratory virus pandemics. Funding No additional funding source was required for this study.
Introduction
The emergence of coronavirus disease 2019 (COVID-19) has resulted in an unprecedented global pandemic leading to significant morbidity and mortality, particularly among older and vulnerable adult populations [1]. It has been more than one year since the pandemic first began, and in this time policy makers worldwide have implemented stringent mitigation efforts to reduce transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). These measures have included the implementation of local and international travel restrictions; use of targeted lockdowns including stay-at-home orders and school closures; and universal guidance on handwashing, physical distancing and use of face coverings [2,3]. In Canada, restrictions on international travel with a 14-day quarantine for returning travellers who do not meet exemption criteria were introduced on 25th March 2020 and have remained in effect since [4]. Localised lockdown policies have also been implemented, with variation in region-specific stay-at-home orders and school closures [5].
Previously, there was concern regarding the potential for an increased healthcare burden from the dual impact of the ongoing COVID-19 pandemic coinciding, in many countries, with the seasonal influenza peak, which causes significant annual morbidity and mortality [6]. Understanding of the impact of COVID-19-related transmission mitigation measures on the transmission of other respiratory viruses is limited. Increasingly, it is being recognised that during the COVID-19 pandemic the traditional respiratory virus season has been significantly altered, with notable decreases in the incidence of other seasonal respiratory viral infections [3,7,8]. In particular, significant decreases in influenza detection in the 2020 to 2021 season are being reported by surveillance data from many countries worldwide. In the United States, influenza-related hospitalisation rates were lower during the 2020/2021 season than in any season since routine data collection began in 2005, and similarly, Australian influenza surveillance reported historically low activity levels from April 2020 onwards [9,10]. The reasons for this observed significant decrease are not yet fully understood and are likely related, at least in part, to the aforementioned COVID-19 mitigation strategies.
The impact of the current COVID-19 pandemic on detection of influenza, respiratory syncytial virus and other seasonal respiratory viruses across all of Canada during the 2020-2021 season has not previously been reported. The objective of this study is to characterise the epidemiology of laboratory confirmed detection of influenza, respiratory syncytial virus, and other non-SARS-CoV-2 seasonal respiratory viruses across Canada before and during the COVID-19 pandemic.
Design and setting
The study is a population-based observational study using Canada-wide laboratory surveillance data sources as detailed below.
Data sources
Data on non-SARS-CoV-2 virus detection were obtained from the Canada Respiratory Virus Detection Surveillance System [11]. This national surveillance is coordinated by the Public Health Agency of Canada (PHAC). Sentinel public health and hospital laboratories across Canada provide PHAC with weekly summaries of respiratory virus test results and test volumes. PHAC collates the data and delivers weekly publicly available updates. Reporting laboratories include reference and public health laboratories for the Atlantic region, Province of Quebec, Province of Ontario, Prairies region, British Columbia and the Territories. For further details on the laboratories included, please refer to https://www.canada.ca/en/public-health/services/surveillance/respiratory-virus-detections-canada.html. Weekly respiratory virus detection data are collected continuously, with reports available from 2006.
Data on SARS-CoV-2 cases in Canada were obtained from the Government of Canada Public Health Infobase [12]. Public Health Infobase data are coordinated and managed by PHAC. The number of new daily cases of COVID-19 (confirmed and probable) is established from the net change between what provinces and territories report to PHAC for the current day and for the previous reported day.
Information on international and provincial travel restrictions and public health measures was obtained from the Government of Canada and the Public Health Agency of Canada [13,14].
All reported data in this study contains information licensed under the Open Government Licence -Canada. All data included in this analysis was obtained from publicly available de-identified datasets and therefore additional ethical approval was not required.
Participants
The study population included all respiratory virus tests conducted at sentinel reporting laboratories in Canada during the study period of 2014 to 2021 inclusive. The reporting laboratories provide limited information on participant demographics and no clinical information on cases (either positive or negative) is available. Limited available demographic information was not analysed for this study.
Measures/variables
In this study we analysed weekly data from the week ending 30th August 2014 (epidemiological week 35) until the week ending 13th March 2021 (epidemiological week 10) inclusive. This study period was chosen to include five full respiratory virus seasons prior to 2020. For the purpose of the study, we defined a case of non-SARS-CoV-2 respiratory virus as any laboratory-confirmed positive test for a non-SARS-CoV-2 respiratory virus reported to PHAC. Non-SARS-CoV-2 respiratory viruses included influenza, respiratory syncytial virus (RSV), parainfluenza viruses (PIV) types 1, 2, 3 and 4, adenovirus, human metapneumovirus (hMPV), enterovirus/rhinovirus, and seasonal coronaviruses. Seasonal coronavirus detection includes the seasonal human coronaviruses HCoV-229E, HCoV-OC43, HCoV-NL63 and HCoV-HKU1, and does not include the human coronaviruses SARS-CoV, MERS-CoV and SARS-CoV-2. The weekly percentage of tests positive for each virus was defined as the number of cases reported over the total number of tests reported for the epidemiologic week under surveillance, expressed as a percentage.
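As a minimal illustration of this definition, the snippet below computes the weekly percent positivity for one virus from placeholder counts; the numbers are not surveillance data.

```python
# Weekly percent positive = reported cases / reported tests * 100 for one
# virus in one epidemiologic week (placeholder counts, not real data).
positive_tests = 42
total_tests = 28_039
percent_positive = 100.0 * positive_tests / total_tests
print(f"{percent_positive:.2f}% of tests positive")  # ~0.15%
```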
For SARS-CoV-2 cases we analysed daily data from 31st January 2020 to 13th March 2021 inclusive. For the purposes of this study, we defined weekly SARS-CoV-2 cases as the number of confirmed or probable cases of COVID-19 reported to PHAC for each week under study. For greater detail on the case definition, fluctuation in daily case reporting across provinces and territories, and a full list of web sources used in the epidemiological data, please refer to healthinfobase.canada.ca and the Government of Canada COVID-19 National Case Definitions [15,16].
Statistical methods
The total study period extended from the 2014/2015 influenza season, beginning at the week ending 30th August 2014 (epidemiological week 35), until the 2020/2021 influenza season, the week ending 13th March 2021 (epidemiological week 10). The baseline "pre-pandemic" period was defined as the week ending 30th August 2014 to the week ending 7th March 2020 inclusive. This period was selected to allow inclusion of five full seasons of baseline data until the global pandemic was declared by the World Health Organization (WHO) on March 11, 2020 [17]. The pre-pandemic period was compared to the 2020/2021 season to date, which we defined as the week ending 29th August 2020 (epidemiological week 35) to the week ending 13th March 2021 (epidemiological week 10) inclusive. An inter-season period was defined as beginning at the onset of the COVID-19 pandemic, from the week ending 14th March 2020 (epidemiological week 11) until the week ending 22nd August 2020 (epidemiological week 34) inclusive. For the primary comparative analysis the inter-season period was not included, and to permit a like-for-like comparison with the pre-pandemic period, only epidemiological weeks 35 to 10 were included for each season (i.e. epidemiological weeks 11 to 34 of each season were excluded).
Data analysis was performed using GraphPad Prism 9.0.2 (GraphPad Software, LLC) and Stata version 16.1. Interrupted time series analyses [18] were performed to assess the difference in percent positivity for each virus of interest between the pre-pandemic period and the 2020/2021 season. Specifically, segmented negative binomial regression was used given the presence of overdispersion, and each model included a) the number of positive tests; b) two pairs of Fourier sine-cosine terms; c) a binary variable denoting the time period as either pre-pandemic or 2020/2021 season; d) a discrete variable denoting the week of the study period, to account for potential changes in respiratory virus testing over time; and e) an offset of the number of tests conducted. Rate ratios and 95% confidence intervals were generated between the pre-pandemic and 2020/2021 seasons. Sensitivity analyses including the inter-season period were also conducted. A p-value of < 0.05 was regarded as statistically significant.
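The analysis itself was performed in Stata; purely as an illustration of the model structure described above (Fourier seasonal terms, a period indicator, a week index, and an offset of tests performed), a rough Python/statsmodels sketch might look as follows. The data frame, column names and dispersion parameter are assumptions for illustration, not the study's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical weekly surveillance table: one row per epidemiologic week.
# Columns (illustrative): positives, tests, week_index (0, 1, 2, ...),
# and pandemic_period (0 = pre-pandemic, 1 = 2020/2021 season).
df = pd.DataFrame({
    "positives": np.random.default_rng(1).poisson(200, size=300),
    "tests": np.full(300, 28_000),
    "week_index": np.arange(300),
    "pandemic_period": (np.arange(300) >= 286).astype(int),
})

# Two pairs of Fourier sine-cosine terms with an annual (52-week) period,
# capturing the seasonal shape of virus circulation.
for k in (1, 2):
    df[f"sin{k}"] = np.sin(2 * np.pi * k * df["week_index"] / 52)
    df[f"cos{k}"] = np.cos(2 * np.pi * k * df["week_index"] / 52)

# Segmented negative binomial regression with an offset of log(tests),
# so coefficients act on positivity rates rather than raw counts.
model = smf.glm(
    "positives ~ pandemic_period + week_index + sin1 + cos1 + sin2 + cos2",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),  # assumed dispersion
    offset=np.log(df["tests"]),
).fit()

# Rate ratio for the 2020/2021 season vs pre-pandemic, with 95% CI.
rr = np.exp(model.params["pandemic_period"])
ci = np.exp(model.conf_int().loc["pandemic_period"])
print(f"rate ratio = {rr:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```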
The number of positive laboratory results was summarized by virus type using line graphs. Data on rhinovirus/enterovirus testing in the Province of Quebec, data on PIV, adenovirus, hMPV and rhinovirus testing in the Yukon territory, and data on seasonal coronavirus testing in Newfoundland were not available throughout the entire study period. One sentinel laboratory, St. Joseph's Healthcare in Hamilton, Ontario, was added to those providing surveillance data over the study period and began reporting data in the week ending 23rd November 2019. Laboratories in the Territories region did not report data in the 2014/2015 season. One sentinel laboratory, the Saskatoon laboratory, Saskatchewan, stopped reporting during the study period from the week ending 15th February 2020. To identify the impact of these reporting differences, the analysis was repeated excluding the data from St. Joseph's Healthcare in Hamilton, the Territories and the Saskatoon laboratory.
This study is reported according to the STROBE guideline for observational studies.
Funding Source
No funding source was required for completion of this work.
Results
The average weekly number of combined laboratory tests for all non-SARS-CoV-2 respiratory viruses performed was reported as 28,039 (range from 7,383 to 63,795) for the pre-pandemic period (excluding epidemiological weeks 11 to 34) and 43,708 (range from 27,597 to 77,110) for the 2020/2021 season. Table 1 details the average weekly testing numbers for each virus of interest.
By comparison, for the period from the week ending 29th August 2020 (epidemiological week 35) to the week ending 13th March 2021 (epidemiological week 10) inclusive, 20,155,105 SARS-CoV-2 tests were reported to have been performed at reporting laboratories throughout Canada.
We observed dramatically lower detection levels of all non-SARS-CoV-2 respiratory viruses during the 2020-2021 season compared to pre-pandemic levels (Table 1, Figures 1, 2 and 3). This observed decrease did not seem to be related to a drop in the overall number of tests for these viruses (supplemental figures S1 and S4). For influenza A and B, the percentage of tests with positive results during the 2020/2021 season decreased significantly, recorded at a rate of 0.0015 (95% CI: 0.001-0.002) and 0.0028 (95% CI: 0.001-0.007) times that of pre-pandemic levels, respectively. For RSV, the percentage of tests positive dropped to 0.0169 (95% CI: 0.012-0.024) times that of pre-pandemic levels during the 2020/2021 season. For the remaining respiratory viruses studied, comparison of the percentage of tests positive for the 2020/2021 season with pre-pandemic seasons also showed significant decreases. The percentage of tests with positive results for PIV and adenovirus decreased to 0.0019 (0.014-0.025) and 0.234 (0.200-0.273) times that of pre-pandemic levels respectively; for hMPV this decreased to 0.038 (0.024-0.059) times pre-pandemic levels; and for enterovirus/rhinovirus and seasonal coronaviruses this decreased to 0.531 (0.480-0.593) and 0.028 (0.019-0.041) times pre-pandemic levels, respectively. Importantly, at the time of these observed decreases in the percentage of positive tests for non-SARS-CoV-2 respiratory viruses, ongoing cases of SARS-CoV-2 were reported, with an initial peak in cases in April 2020 followed by a larger second peak in January 2021 (Figure 1b, c).
Table 1. Average weekly testing numbers and percentage positive tests for non-SARS-CoV-2 respiratory viruses at sentinel laboratories in Canada for the 2020/2021 season and 2014-2019 pre-pandemic seasons. The pre-pandemic period begins the week ending 30th August 2014 through to the week ending 7th March 2020 inclusive. The 2020/2021 season was defined as the week ending 29th August 2020 to the week ending 13th March 2021 inclusive. Comparative analysis excluded the same inter-season period (epidemiological weeks 11 to 34) for all pre-pandemic years as well as for the 2020/2021 season. *Rate ratios and p-values calculated using interrupted time series analysis to assess the difference in percent positivity for each virus of interest between the pre-pandemic period and the 2020/2021 season, as described in the methods. This table is based on weekly count data. Abbreviations: RSV = respiratory syncytial virus, PIV = parainfluenza virus (includes combined types 1, 2, 3 and other types), hMPV = human metapneumovirus. **Coronavirus excludes the human coronaviruses SARS-CoV, MERS-CoV and SARS-CoV-2; includes the seasonal human coronaviruses HCoV-229E, HCoV-OC43, HCoV-NL63 and HCoV-HKU1. Note: data on rhinovirus/enterovirus testing in the Province of Quebec, data on PIV, adenovirus, hMPV and rhinovirus testing in the Yukon territory, and data on seasonal coronavirus testing in Newfoundland were not available for any week throughout the entire study period. One sentinel laboratory, St. Joseph's Healthcare in Hamilton, Ontario, was added to those providing surveillance data over the study period and began reporting data in the week ending 23rd November 2019. Laboratories in the Prairies region did not report data in the 2014/2015 season. One sentinel laboratory, the Saskatoon laboratory, Saskatchewan, stopped reporting during the study period from the week ending 15th February 2020. Analysis excluding the data from St. Joseph's Healthcare in Hamilton, the Territories and the Saskatoon laboratory did not appreciably change the results, with no change in p-values noted and minimal difference in the magnitude of any of the rate ratios (see table S1).
Interestingly, higher (although still attenuated) ongoing detection of enterovirus/rhinovirus was reported during the 2020/2021 season compared with the other non-SARS-CoV-2 respiratory viruses, with an average weekly percentage of tests positive of 8.5% versus less than 1%, respectively (Table 1). In a sensitivity analysis excluding the data from St. Joseph's Healthcare in Hamilton, the Territories and the Saskatoon laboratory, there was no appreciable change in the results, with no change in p-values noted and minimal difference in the magnitude of any of the rate ratios (supplementary table S1). In sensitivity analyses including the inter-season periods, steep declines in the weekly percentage of tests positive were still detected, though rate ratios for most viruses were more conservative, primarily due to ongoing detection during March and April 2020 (supplementary table S2).
Discussion
This study is the first Canada-wide study to examine the impact of the COVID-19 pandemic on the detection of both influenza and non-influenza seasonal respiratory viruses during the 2020-2021 season. In this extensive study of public health and hospital microbiology laboratory data across Canada, we demonstrate a significant change in the seasonal variation of respiratory viral infections detected during the current COVID-19 pandemic. With the exception of ongoing enterovirus/rhinovirus detection, the usual seasonal peak in positive laboratory tests for common respiratory viruses, including non-SARS-CoV-2 coronaviruses, was effectively absent during the 2020-2021 season. This decrease in detection of non-SARS-CoV-2 respiratory viruses is consistent with studies from the United States and United Kingdom showing an early termination of the 2019/2020 influenza season and a drop in detection of other viruses in the early phases of the pandemic [3,19]. Similarly, a number of studies in Japan, Taiwan, Korea and Thailand demonstrated a marked decline in influenza virus detection in the early phases of the COVID-19 pandemic [20-24]. As the pandemic has progressed and national lockdowns continued, studies of the respiratory virus season in the southern hemisphere from New Zealand and Australia demonstrated absence of the usual winter influenza seasonal peak and a marked reduction of other respiratory viruses [25,26].
Typically in the Northern hemisphere, influenza, RSV and non-SARS-CoV-2 human coronaviruses peak during the winter months, with adenovirus detection occurring throughout the year, peaks of parainfluenza viruses occurring in non-winter months, and human metapneumoviruses and rhinoviruses being mainly detected during the spring and fall seasons [27]. The observed decrease in almost all seasonal respiratory viruses during the COVID-19 pandemic is likely related in large part to the implementation of multiple public health interventions, including restriction of international travel, hand-washing, wearing of face masks, school closures and stay-at-home orders. We note that at the same time as no circulating seasonal coronaviruses were detected, there were ongoing increases in cases of COVID-19 due to continuing SARS-CoV-2 transmission, which may reflect the difference in population susceptibility to the novel coronavirus. Seasonal human coronavirus antibodies are not associated with protection against SARS-CoV-2 infection [28]. Whether antibodies to SARS-CoV-2 provide any cross-protection against seasonal coronaviruses remains unknown.
We also reported continued detection of enterovirus/rhinoviruses during the 2020-2021 season. While this may suggest ongoing circulation of enterovirus/rhinoviruses despite public health measures to control COVID-19, variation in regional multi-layered public health measures may account for this continued circulation. Other studies have reported similar continued enterovirus/rhinovirus detections despite implementation of public health measures to reduce the spread of SARS-CoV-2. In a recent study of children in New South Wales, Australia, despite significant decreases from the expected RSV seasonal peak between April and June 2020, an uptick in rhinovirus detections was noted in June 2020 [26]. Similarly, in a recent study in New Zealand based on hospital-based severe acute respiratory surveillance data between May and September 2020, marked reductions in rhinovirus levels were observed on the introduction of initial lockdown measures; however, on easing of restrictions, a notable increase in rhinovirus-associated incidence rates was observed compared to other respiratory viruses [25]. This potential for rhinovirus resurgence is important for the interpretation of syndromic surveillance models for COVID-19, as there is considerable overlap in clinical symptoms between even mild enterovirus/rhinovirus and SARS-CoV-2 infections [29].
An apparent increase in rhinovirus detection on reduction of public isolation measures has also been reported in hospitalised adults in the UK following the re-opening of schools [30]. Likewise, in Hong Kong, after the re-opening of schools and childcare centres, a large number of outbreaks of acute upper respiratory tract infections (URTIs) occurred, and on laboratory testing rhinovirus/enterovirus infections were identified [31]. The reasons for this reported resurgence of rhinovirus compared to other respiratory viruses are unclear. Rhinovirus is a non-enveloped virus that may be less susceptible to inactivation by handwashing [26]. In addition, enteroviruses have the potential for fecal/oral transmission, and it is possible that differences in the mechanism of enterovirus/rhinovirus transmission compared to other seasonal respiratory viruses mean that more stringent public health measures are required to fully reduce transmission. On province-specific analysis of enterovirus/rhinovirus testing percent positivity in Canada, regional variation during the 2020/2021 season was present. Higher percent positivity was seen in the Territories and Atlantic regions, with reduced peaks compared to the historical baseline observed for Ontario, British Columbia and the Prairies. This may reflect differences in the regional implementation of public health measures, such as later introduction of compulsory facemask guidance, as well as differences in the return to in-person classes for children and in indoor gathering regulations [32]. It is also possible that provincial/regional differences in SARS-CoV-2 activity may have played a role. In addition, rhinovirus infections are more likely to be symptomatic in children [33], who may be differentially impacted by regional variation in public health measures such as school closures and masking mandates. Due to the absence of age-specific data in this study, we are unable to assess the role of circulating enterovirus/rhinovirus in the paediatric population specifically.
Overall, our results are consistent with a growing body of work indicating that strict public health measures, such as regional lockdowns, border closures, handwashing and facemask wearing, may be effective in significantly reducing the spread of epidemic respiratory viruses. However, what is notable in this study is the observed dramatic decrease in non-SARS-CoV-2 respiratory viruses, including seasonal coronaviruses, despite ongoing detection of SARS-CoV-2 during the 2020/2021 winter season. This may reflect the increased transmissibility of SARS-CoV-2 in comparison to other respiratory viruses, due to the lack of preceding population immunity, such that it outcompetes seasonal respiratory viruses, which are more readily impacted by the measures implemented to mitigate SARS-CoV-2 transmission. It is difficult at this time to determine whether viral-viral interactions between SARS-CoV-2 and other respiratory viruses also contribute to the observed decline in circulation. The potential for interaction between respiratory viruses has been suggested by previous modelling studies, such as the work by Nickbakhsh et al., who found both positive and negative respiratory virus interactions at the population and individual host levels and suggested that viral interference may account for the reduced frequency of rhinovirus infections during influenza seasons [34]. Nickbakhsh et al. further suggested that such viral interference may potentially occur via interferon-mediated mechanisms.
This concept is supported by recent in vitro work in differentiated airway epithelial cultures, demonstrating upregulation of interferon stimulated gene expression following rhinovirus infection, leading to significant inhibition of subsequent influenza A virus infection [35] .
Another potential explanation for the apparent dramatic decrease in seasonal respiratory viruses despite ongoing SARS-CoV-2 detection is a possible reduction in testing and/or laboratory reporting for non-SARS-CoV-2 respiratory viruses. We recognise that one potential limitation of this study is that laboratories and hospitals may have become overburdened with implementing large-scale testing for SARS-CoV-2, leading to changes in testing and reporting mechanisms for seasonal respiratory viruses compared to previous years. Importantly, however, we did not note a decrease in the total number of laboratory tests performed for each of the seasonal respiratory viruses studied during the 2020/2021 season compared to pre-pandemic. Indeed, early in the pandemic an apparent increase in the total number of tests performed for each virus of interest was observed (supplemental figure S4). Thereafter, towards the end of 2020, the total number of non-SARS-CoV-2 tests performed decreased simultaneously with increased testing for SARS-CoV-2. It is likely that, compared to previous seasons, variation in testing practices by regional and local laboratories, such as increased periodic community testing, combined influenza/SARS-CoV-2 testing and increased testing for influenza and other seasonal respiratory viruses, may have occurred during the 2020/2021 season. Moreover, it is worth noting that approaches to population testing for SARS-CoV-2 and non-SARS-CoV-2 viruses also differed, with testing for SARS-CoV-2 being much more community-based in comparison to other respiratory virus testing.
Limitations
There are a number of potential limitations of this study. Firstly, this is a retrospective observational study, and it is difficult to identify whether the described decrease is a direct result of multi-layered public health strategies to mitigate the spread of SARS-CoV-2 or of other unidentified factors. Strategies were added in close succession and targeted specific populations, for example via school and workplace closures, at different times. These measures were implemented with substantial regional variation across Canadian provinces, such that identifying the individual key factors contributing to the overall observed decrease in non-SARS-CoV-2 respiratory virus detection is not possible in this study. In addition, baseline demographic and clinical comorbidity data were not available, and accordingly these factors could not be adjusted for in the comparative analysis.
This study used data from the Canadian Respiratory Virus Detections Surveillance System, which relies on reporting of results from individual laboratories. Changes in testing policies and eligibility at reporting laboratories may have led to differences in detection of non-SARS-CoV-2 respiratory viruses during the pandemic. Laboratory reporting remained consistent throughout the pandemic for most reporting laboratories, as detailed above, and analysis excluding data from sentinel laboratories whose reporting practice varied during the pandemic compared to pre-pandemic resulted in no significant change in the time-series or rate ratio results. Other changes in testing practices, such as the introduction of COVID-19 assessment centres throughout Canada, could have had an impact on respiratory virus detection during the pandemic. Furthermore, differences in individuals' health-seeking behaviour during the pandemic may also have altered the detection of non-SARS-CoV-2 respiratory viruses. For instance, in light of the concern for COVID-19, individuals with respiratory symptoms may have been more likely to attend designated testing centres for specific SARS-CoV-2 testing rather than to receive testing for other respiratory viruses through usual mechanisms. This is likely to explain in part the higher numbers of SARS-CoV-2 tests and detections seen during the 2020-2021 respiratory virus season and may account for some of the differences observed in non-SARS-CoV-2 respiratory virus detection over the same period (see supplementary figure S4).
Conclusion
In conclusion, in this study we report an effective absence of the annual seasonal epidemic for most non-SARS-CoV-2 respiratory viruses in Canada in the 2020/2021 respiratory virus season. The reasons for this dramatic decrease are not yet clear and may reflect a number of different factors. It is likely that the implementation of stringent measures to reduce the spread of SARS-CoV-2 also led to significant decreases in transmission of other respiratory viruses. However, the role of viral displacement and interference by SARS-CoV-2 is not yet known. In addition, the possibility of a rebound in seasonal respiratory virus levels after relaxation of lockdown measures warrants further consideration. It is unclear whether the lack of exposure to non-SARS-CoV-2 respiratory viruses during the COVID-19 pandemic might result in larger outbreaks of other respiratory viral illnesses on easing of current public health measures in Canada. The absence of seasonal RSV and influenza epidemics also has potential implications for delivery of both palivizumab and influenza seasonal vaccination programs. Understanding the mechanisms behind the observed decreases in seasonal respiratory viruses is therefore of great importance and may benefit public health practice for mitigating seasonal respiratory virus epidemics and for informing responses to future respiratory virus pandemics.
Declaration of interests
Dr. Groves reports personal fees from Honoraria received from Abbvie for education meeting presentation, outside the submitted work. Dr. Piché-Renaud reports grants from Pfizer Global Medical Grants (Competitive grant program, investigator-led), outside the submitted work. Dr. Peci has nothing to disclose. Mr. Farrar has nothing to disclose. Mr. Buckrell has nothing to disclose. Dr. Bancej has nothing to disclose. Dr. Sevenhuysen has nothing to disclose. Dr. Campigotto has nothing to disclose. Dr. Gubbay has nothing to disclose. Dr. Morris reports personal fees from GSK Canada, personal fees from Pfizer Canada, grants from Pfizer Canada, outside the submitted work.
Intermodal transportation as a quality improvement tool in tourism industry
International tourism, with its developed transport infrastructure, transforms formerly closed communities into open ones, where communication between representatives of different countries becomes an everyday routine. Georgia has a strategic geopolitical location at the junction of Europe and Asia and has every opportunity to develop different types of passenger transportation, which supports the further development of international tourism in the country. The authors examine the condition of the country's transport infrastructure and the organization of passenger transportation by different types of transport. Aiming to identify directions for improving the quality of tourists' transport service and raising the competitiveness of the tourism industry in Georgia, the authors propose an Intermodal Transportation Service Center as a tool for improving the quality of tourism services.
Introduction
Georgia, with its geopolitical location and its natural, historical and cultural resources, has practically unlimited potential to develop international tourism. When analyzing tourism development issues it is very important to define their interconnection with the transport industry, keeping in mind that the development of tourism and transport is an interconnected and interdependent process. In this regard, identifying the impact of transport infrastructure on the development of international tourism in Georgia is particularly relevant, as the role and importance of transport is acknowledged as one of the most important factors in international tourism development. This is natural and logical, because tourism is a relatively new social-economic phenomenon and is largely the result of transport development [1-5]. It became especially important after the significant qualitative and quantitative changes in tourism flow volumes, dynamics and structures at the national and international levels.
The aim of the given research is to perform a comprehensive analysis of the development of international tourism and of the problems of organizing passenger tourist transportation by all types of transport in Georgia and, based on this, to develop methodical and practical recommendations for raising the effectiveness of the organization of international tourism in the country. To reach this goal, the following objectives were set: to analyze the current state of international tourism in Georgia; to analyze the current state of the organization of passenger transportation by different types of transport; to identify problems in the organization of passenger transportation; and to develop scientific-methodic recommendations for improving passenger transportation in Georgia, aiming to raise the quality of transportation of international tourists by different types of passenger transport and the competitiveness of Georgia's tourism industry on the global market of tourism services.
Methodology
The theoretical and methodical basis of the research consists of the publications of Georgian and foreign researchers in the spheres of economics, transport and tourism management, as well as the legal acts and normative-legal documents regulating the legal and organizational issues of transport and tourism performance.
The informational basis of the research consists of the materials of the National Statistics Office of Georgia and its territorial bodies, the Ministry of Internal Affairs and the Ministry of Economy and Sustainable Development, as well as information from the Georgian National Tourism Administration and the Tourism and Resorts Department of the Autonomous Republic of Adjara. Methods of economic analysis, synthesis and statistical methods were used during the research. For the purposes of quantitative evaluation, the official materials of the National Statistics Office of Georgia were used to perform comparative analysis between analytical and statistical estimations. The practical importance of the developed scientific-methodical and practical recommendations lies in the argumentation for systematic measures to improve the organization and performance of Georgia's transport infrastructure, supporting greater effectiveness of international tourism development in the country. Interesting publications on the topic, especially on multimodal transportation and tourism, include those by Efthymiou and Papatheodorou [6], Lohmann and Pierce [7], Yang, Li and Li [8], Chang and Shieh [9] and Darmawan and Chen [10].
Tourism and transport in Georgia
In a market economy, the development of the tourism business requires the existence of recreational resources, suitable capital, technology and workforce. In contrast to other fields of the economy, the tourism resources of Georgia are highly diversified and include natural and anthropogenic geo-systems, as well as natural phenomena that have comfortable characteristics and consumer value for commercial activity, for use in the organization of leisure and recreation [1]. In recent years (all years except 2020, when the COVID-19 pandemic destructively affected international tourism, lowering its qualitative, quantitative and financial characteristics to a minimum; as a result, the article presents figures on international tourism development excluding 2020), Georgia has seen a sharp rise in the number of international tourists and in revenues from international tourism. In 2019 the number of international tourists reached 9,357,964 persons (a growth rate of 7.8% compared to 2018), and revenues from international tourism exceeded 3,268.7 million US dollars in 2019 (Table 1). Table 2 below shows the top 10 countries generating the highest tourist arrivals in Georgia. The fact that most border crossings by international tourists are made by car can be explained by the majority of travelers coming from the neighboring countries (Armenia, Azerbaijan, Turkey, Iran, etc.), who prefer to travel by car because of the territorial closeness of Georgia and the accessibility of the transport system. As for the low shares of railway and sea transport, it is worth mentioning that Georgia has railway connections only with Armenia and Azerbaijan, and that sea passenger transportation takes place from the sea ports of Batumi and Poti in the directions of Sochi (Russia) and Odessa (Ukraine), in small volumes.
Georgia's air travel market has shown considerable growth in recent years. In the pre-pandemic period there were 3 international and 2 domestic airports operating in Georgia, fully compliant with the standards of the International Civil Aviation Organization (ICAO). International flights mostly consist of flights from and to Tbilisi airport. The management of Tbilisi and Batumi airports is performed by the Turkish company TAV Airports Holding Co.
Airlines operating in Georgia in the pre-pandemic period performed low-budget flights at affordable prices: Wizz Air, Air Arabia, Pegasus, FlyDubai, Pobeda Airlines, Air Baltic, Buta Airways, Salam Air, Flynas, Ukraine International Airlines and Skyup Airlines.
The handling capacity of Tbilisi, Kutaisi and Batumi airports is 6.1 million, 600 thousand and 600 thousand passengers per year, respectively. After the completion of the expansion of the airports in Kutaisi and Batumi (planned for 2021), their handling capacity will rise to 2.5 million and 1.4 million passengers per year, respectively.
Transport is a key element of the tourism product, in the part that the tourist consumes outside the tourist destination, on the way to it. While researching tourism, it is crucially important to define its interdependence with the transport industry. Success on the tourist markets and adequate transport infrastructure are important conditions for the development of a tourist destination. On the other hand, demand for tourism is a strong stimulus for accelerating the development of the transport industry. Tourism depends on transport and on the safety, speed and comfort offered to tourists during the journey.
As can be concluded from the analysis above, as tourism becomes a mass phenomenon, a range of problems related to transport services appears. One of the most important is the imbalanced development of the transport system of Georgia as a whole. It contains three elements.
The first is the disproportion in the rates and scales of development of different types of transport; the best example of this is the considerably low rate of sea transport development against the high growth rate of automobile transport.
The second is the insufficient development of the existing transport infrastructure, reflected in the mismatch between the level of the automobile roads and the level of automobile transport development and demand for automobile services, and also in the existence of many problematic junctions where different types of transport cross. The third is the territorial inequality of the development of the transport infrastructure.
Intermodal transportation
Today the development of the transport system is becoming a necessary condition for implementing an innovation-driven model of economic growth in Georgia and for raising the quality of passenger service. Despite the positive tendencies in the dynamics of several types of transport, the transport system as a whole does not respond to the existing demands and perspectives of the country's development.
Each type of transport performing passenger service acts separately, according to its own interests, aiming to maximize its own profit without taking other types of transport into account. As a result, the transport means are not used effectively enough, the quality of passenger transportation services is low, and passengers' demand for transportation is not satisfied in a sufficient manner.
If the role of the transport infrastructure is not valued enough, this negatively affects the development of international tourism and the quality of passengers' transport service, and as a result the country could lose its image as an attractive tourism destination. To maintain this level it is necessary to improve the existing legislation and to pursue a goal-oriented policy in the sphere of the transport system.
Tourism and transport development is an interconnected and interdependent process. As appears from the scientific literature on the interconnections in the "tourism-transport" system, preference is given to transport, because its role and meaning are acknowledged as an important factor of tourism development. This is natural and logical, as tourism is a relatively new social-economic phenomenon and to a significant extent is the result of transport development [5].
It is worth mentioning that the majority of resorts in Georgia are located at a significant distance from the major transport nodes of the country, namely the cities of Tbilisi, Kutaisi and Batumi, and passengers travelling to the resort zones face a number of problems when changing transport types to reach their final destination. Therefore, to improve the level of satisfaction of passengers, especially transit ones, with the transportation process, there is a need for technical, technological, organizational and economic coordination of the transportation process across different types of passenger transport, especially at the key transport junctions of the country where the main mass of passengers, including tourists, changes transport types.
To meet the growing demands of tourists and passengers for transportation, it is desirable that the land transport agency, jointly with agencies of sea transport and civil aviation, under the management of Georgian ministry of economy and sustainable development to decide about creation joint web of service-center with the goal of elaborating and implementing the intermodal transport system on the main directions of passenger flows (Fig. 2).
Intermodal passenger transportation should be considered as the transportation of passengers, luggage and handbag from the point of origin to the final destination with the several types of transport by united transportation document, when the responsibility for total transportation process, including the connection in the conjunctions, takes particular operator or the third party operator (for example, tourist company).
Organization of intermodal passenger transportation represents the innovative and prospective directions of transport field development. This technology allows to combine the advantages of each type of transport and to make the transportation process most effective. In relation with the specifics of technology of organization of this type of passenger transportation, also taking into account the different requirements of passengers according to the types of transportation it is mostly preferable to use intermodal technology for the transportation of long haul passengers using such a types of transport as air, rail, car and sea transport. The key condition for such a technology is the presence of responsible person for successful change in conjunctions that means the solution of problems and avoiding of financial lost for passenger those could appear at the time of unsuccessful change, and the responsible person takes responsibility to solve the unforeseen situation and take the passenger to the final destination.
When introducing intermodal transportation, it is desirable to observe the following key principles: - a high level of integration among the operators managing different types of transport; - information support for passengers during the entire "door-to-door" journey; - a unified system; - interaction of the transport types on the basis of common rules and requirements; - maximum use of the advantages of each transport type participating in intermodal transportation; - a united information space; - a price benefit for the passenger compared to the total cost of separate transportation along the route.
Uniting the above-mentioned transport agencies into one network of service centers and establishing suitable communications between them makes it possible to create different tourist products according to formulae such as "train + bus + . . ."; "train + bus + hotel"; "train + bus + excursion"; "train + bus + football event"; "airplane + bus + sanatorium"; "airplane + bus + mountain resort", etc. Advertising of the created tourist products and their development could be performed by the above-mentioned service centers, tourist firms and agencies in close collaboration with sea, rail, road and air transport companies (Fig. 3).
The service center could improve the interaction of different transport companies, which means improving the quality of transportation of transit passengers on the basis of common transport schemes in which the schedules of intercity or suburban buses are coordinated with sea, rail or air transport at the transport junctions.
Specific attention should be paid to some problematic issues that arise in the services provided by tourist firms and transport organizations. The most important is that transport costs account for 40-50% of the total expenditure on tourist services. Such a price ratio limits a tourist firm's ability to pursue a flexible pricing policy when providing a full package of tourist services. Under these conditions, non-price methods of competition come to the fore, such as attracting consumers by improving the quality of the services offered along with additional services.
Conclusion
The problem of improving the effectiveness of transport infrastructure is one of the most important for the future development of the country's international tourism. It should be considered on a comprehensive basis, taking into account the regional conditions of functioning of all transport types and reflecting the territorial and other specifics of their operation. Improvement of the quality of services for transit tourists should be ensured by elaborating highly effective forms of organizing the transportation process and by improving the interaction of different transport companies.
|
2021-08-27T16:43:47.968Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "acb8dff48af0ff8a220169f993eb985ba8a55b75",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2021/08/matecconf_istsml2021_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "062d65ea891d19cb940549a72622eafc4b2fdaf6",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
244605052
|
pes2o/s2orc
|
v3-fos-license
|
Distributed Load Shedding considering the Multicriteria Decision-Making Based on the Application of the Analytic Hierarchy Process
This paper presents an analytic hierarchy process (AHP) algorithm-based approach to load shedding that coordinates the load importance factor (LIF), the reciprocal phase angle sensitivity (RPAS), and the voltage electrical distance (VED) to rank the load buses. This problem is important from a power system point of view, and the AHP method supports the decision-making process in a simple and intuitive way in a three-criterion environment, satisfying multicriteria decision-making that meets economic and technical aspects. The ranking and the shedding power distributed to each demand load bus are based on this combined weight. A smaller overall weight of a load bus indicates a less important load, a smaller reciprocal phase angle sensitivity, and a closer voltage electrical distance; such load buses therefore shed a larger amount of capacity, and vice versa. By considering the generator control, the load-shedding scheme incorporates the primary and secondary control features of the generators to minimize the load-shedding capacity and restore the system frequency to the allowable range. The efficiency of the suggested load-shedding scheme was verified by comparison with under-frequency load shedding (UFLS): the load-shedding power of the suggested approach is 22.64% lower than that of the UFLS method. Case studies on the IEEE 9-generator, 37-bus system have proven its effectiveness.
Introduction
In the load-shedding problem, ranking loads according to shedding priority is essential for restoring the power balance and the frequency while bringing economic and technical efficiency to customers. Therefore, it is necessary to determine which loads should be included in the list of loads to be shed and their priority order. The ranking of these loads should satisfy many aspects that require an analysis of the economic and technical consequences. However, such economic and technical analysis is very complicated, and most power companies in the world still rely on the evaluation of power system experts. It is, moreover, very difficult for experts to prioritize these loads, especially when a load needs to be considered under many different criteria.
Studies on optimizing load shedding under multiobjective constraints mainly aim to minimize the load-shedding power. A multiobjective optimization model considering load-shedding risk [1] is of interest to many researchers operating power systems. The multiobjective constraints are mostly technical, such as the power limits of generating sets, the power-carrying capacity of the lines, and the voltages at the nodes. However, load shedding must now meet many different goals, achieving both technical requirements and economic objectives, including restoring the frequency, accounting for the load importance factor, the damage caused by power shedding, and the priority level. The shedding power should be distributed among the demand load buses so that it is optimal and also reduces the damage to the power supplier and to customer power consumption. Solving this multicriteria load-shedding problem requires algorithms that support system experts.
The calculation of the load-shedding power is essential for returning the frequency to a value within the permissible range and preventing frequency degradation in the power system [2,3]. The load-shedding power is usually calculated based on the frequency degradation [4]; if the calculated amount is insufficient, it will not be possible to restore the frequency to the permissible value, and conversely an overestimate will cause excessive load shedding. Studies on load shedding mainly calculate it based on the rotor swing equation [5]. However, these methods do not consider actual operating conditions such as the primary and secondary controls of the generating sets. The techniques of load shedding fall into three fundamental areas of study [6]: conventional load shedding, adaptive load shedding, and intelligent load-shedding techniques. Conventional load shedding uses under-frequency load-shedding (UFLS) or under-voltage load-shedding (UVLS) relays. This is the most common method used for frequency control and voltage stabilization of the power grid. According to the IEEE standard, UFLS must be implemented quickly to prevent frequency attenuation and power system blackout [7]. Many works have used UFLS and UVLS [8][9][10][11]. These studies have the advantage of a low-cost, simple working principle. However, their main disadvantage is that they do not estimate the amount of unbalanced power in the system. This causes excessive load shedding, affects power quality, or leads to the discontinuation of electricity service for consumers [12]. In [13,14], high-priority loads were considered during load shedding, but secondary frequency control was not added. Reference [15] presented UFLS using decision trees to decide whether a load needs to be cut and the amount of capacity to be cut. The decision tree was built based on the frequency derivative, the load demand, and the system's reserve capacity. However, this method did not consider the importance factor of the loads in the power system. The adaptive load-shedding method uses the rotor swing equation to calculate the amount of load shedding [5]. A rate-of-change-of-frequency (ROCOF) relay is used to perform load shedding [16]. The method proposed in [12] used both frequency deviation and voltage parameters to improve the accuracy of frequency and voltage stability. In [17], a semi-adaptive multistage UFLS plan with a ROCOF element and the AHP method was proposed. That AHP method was based on two main criteria, the total amount of load shed and the minimum point of the frequency response, to rank the importance of load shedding. However, assessing the importance of loads with the AHP algorithm was not considered. In [18], an artificial neural network (ANN) and power flow tracing were used to evaluate the total active power imbalance. Load priority was considered in that study by introducing a 0-1 variable: if the load at bus k was allowed to be shed, the variable was set to 1; otherwise, it was set to 0. This shows that the baseload and the ranking of load-shedding priority were not considered in that situation.
The intelligent load-shedding methods include the application of intelligent algorithms such as artificial neural networks (ANN) [19][20][21], adaptive neuro-fuzzy inference systems (ANFIS) [22,23], fuzzy logic control (FLC) [24], genetic algorithms (GA) [25], and particle swarm optimization (PSO) [26,27] to calculate and select the loads to shed. These approaches can easily solve nonlinear, multiobjective problems in power systems that conventional methods cannot solve with the desired speed and acceptable accuracy [19,28]. For an ANN, the output is the total quantity of active power that needs to be cut. This output is not an actual control signal because it does not determine the number of loads and the capacity to be shed at each step. Multiobjective optimization methods using GA or PSO algorithms only include constraints on technical conditions. These methods do not combine multiple approaches, including economic and technical parameters, when ranking the loads. In [29], the weight coefficient in the load-shedding objective equation was adjusted to satisfy actual needs. Furthermore, reference [30] coordinated an optimization load-shedding method based on sensitivity analysis; the weighted sum of economic expense and an equilibrium index was taken as the objective function to establish the load-shedding optimization model. However, that model did not consider the importance factor of the loads and did not rank the loads in order of priority. This paper focuses on the coordination of various objectives during the load-ranking process. A new load-shedding method is presented based on the calculation of the primary and secondary control of the generators to determine the minimum amount of shedding power. It coordinates the criteria so that load shedding is treated as a multicriteria decision-making problem.
This satisfies the technical and economic factors and optimizes the distribution of the shedding power at each load bus.
There is an easier way for experts to approach the critical issues of load-shedding criteria. When giving opinions, they often rely on technological characteristics and operating realities and express themselves verbally. It is easy for experts to make pair-wise comparisons in common language, such as "Load 1 is more important than Load 2" or "Criterion 1 is more important than Criterion 2". In addition, the evaluation of the importance rank of a load in the frequency-control problem is considered under many criteria with different importance levels. The paper proposes an approach based on consulting experts whose judgments are expressed in words; each load is considered under many criteria. The efficiency of the suggested load-shedding technique was proven through tests on the 9-generator, 37-bus system. The calculations are compared with a traditional under-frequency load-shedding method.
The results have shown that the suggested approach requires a lower amount of load-shedding capacity than the UFLS method. Therefore, the proposed method can minimize the damage and inconvenience caused to electricity customers. The recovery time and rotor angle deviation remain within the permissible values, and power system stability is sustained. In addition, the proposed method demonstrates a combination of methods taking into account both technical and economic criteria, which previous studies have not carried out. Therefore, in large-disturbance situations such as the outage of a large generator, this proposed method can also be used to train operators and improve their skills.
Calculate the Overall Weights and Rank the Load Buses for Load Shedding Based on the AHP Algorithm
It is supposed that there are m loads that may be shed in the power system diagram. These loads need to be ranked for load shedding based on the coordination of three criteria: LIF, RPAS, and VED. The problem is that when a generator outage occurs and load shedding is required, ranking the loads and distributing the load-shedding power among them must satisfy multiple criteria simultaneously. Achieving this requires an analysis of the technical and economic consequences. However, these calculations and analyses are very complicated and time-consuming. Therefore, it is necessary to collect the assessments of power system experts. Experts can easily give verbal judgments when comparing each pair of criteria, using common language such as "Criterion 1 is more important than Criterion 2". In this section, the AHP method supports the decision-making process in a simple and intuitive way in a three-criterion environment in order to calculate the overall weights of the criteria and rank the load buses for load shedding. The ranking of the load buses with the AHP method involves three stages: establishing the hierarchical structure, determining the weights of the criteria, and calculating the overall weights.
Stage 1: Establishing the Hierarchical Structure.
This step decomposes the problem of ranking the load buses into a hierarchical structure [31][32][33][34]. Accordingly, a three-level hierarchical structure is proposed for calculating the overall weights, as shown in Figure 1. In this study, three criteria are proposed: LIF, RPAS, and VED. The three criteria are described in detail in the following sections.
(1) First Criterion: The Reciprocal Phase Angle Sensitivity (RPAS) from the Load Buses to the Outage Generator. The concept of the RPAS between two buses is defined as follows [35][36][37][38][39]. In the power system, the goal is to concentrate the priority of load shedding at locations near the outage generator. To do this, the idea of the RPAS between two buses is applied. Two buses close to each other always have an exceptionally small RPAS. The smaller the RPAS between a load bus and the outage generator, the closer that load bus is to the outage generator. Therefore, when a disturbance occurs in an area of the grid, adjusting the grid within the disturbance area achieves the best effect.
Thus, minimizing the control errors in the disturbance area has little effect on other areas of the system. Additionally, in load shedding, delineating a serious disturbance and shedding load around the disturbance area confines the impact of the disturbance to a smaller part of the system, making the load shedding more effective. The following steps show the calculation of the reciprocal phase angle sensitivity. Step 1: Extract the Jacobian matrix [J_Pθ]. Step 2: Invert the Jacobian matrix [J_Pθ] to obtain the elements of [J_Pθ^(-1)]. Step 3: Apply formula (1) to calculate D_P(i, j). The weight of a load bus based on the RPAS between the load bus and the outage generator is then calculated by formula (2), where W_DP(i, j) is the weight of the RPAS from the i-th load bus to the outage generator and D_P(i, j) is the RPAS from the i-th load bus to the outage generator.
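For illustration, the short sketch below (in Python) follows the three RPAS steps. Since formulas (1) and (2) are not reproduced in the extracted text, the exact definition of D_P(i, j) (taken here as the reciprocal of the angle sensitivity read from the inverted Jacobian) and the normalization used for W_DP(i, j) are assumptions, and the Jacobian matrix and bus indices are hypothetical.

```python
import numpy as np

def rpas_weights(J_Ptheta, load_buses, outage_bus):
    """Sketch of the RPAS steps: invert the P-theta Jacobian, read off the
    angle sensitivities between each load bus and the outage-generator bus,
    and turn them into normalized weights.

    Assumptions (formulas (1)-(2) are not reproduced in the text):
      - D_P(i, j) is taken as the reciprocal of |d(theta_i)/d(P_j)| from the
        inverted Jacobian.
      - W_DP(i, j) is D_P normalized over the candidate load buses, so a
        smaller weight corresponds to a bus electrically closer to the outage.
    """
    S = np.linalg.inv(J_Ptheta)            # Steps 1-2: sensitivities d(theta)/d(P)
    D_P = {i: 1.0 / abs(S[i, outage_bus]) for i in load_buses}   # Step 3
    total = sum(D_P.values())
    return {i: D_P[i] / total for i in load_buses}               # assumed form of (2)

# Hypothetical 3-bus example: a made-up reduced Jacobian; buses 0 and 1 are loads,
# bus 2 is the outage generator's bus.
J = np.array([[12.0, -4.0, -6.0],
              [-4.0,  9.0, -5.0],
              [-6.0, -5.0, 13.0]])
print(rpas_weights(J, load_buses=[0, 1], outage_bus=2))
```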
(2) Second Criterion: The Voltage Electrical Distance (VED) from the Load Buses to the Outage Generator. Step 2: Calculate [∂V/∂Q] at all buses. This value is the inverse of the Jacobian matrix and indicates the effect of a reactive power injection at a bus on the voltage variation at neighboring buses. Step 3: Calculate α_ij using the sensitivity matrix of Step 2; the voltage change at bus i due to a voltage change at bus j is expressed through α_ij as defined in formula (4). Step 4: Calculate the VED using (α_ij × α_ji), which is reflected as a symmetrical distance, where α_ji is defined analogously to α_ij with the roles of i and j interchanged.
After calculating the VED, the weight of a load bus based on the VED between the load bus and the outage generator is calculated by formula (6), where W_DV(i, j) is the weight of the VED from the i-th load bus to the outage generator and D_V(i, j) is the VED from the i-th load bus to the outage generator. The VED is a physical relationship between two buses in the power system. Formula (5) shows that the closer the distance, the smaller D_V or the larger α_ij. Formula (4), in turn, evaluates the voltage interaction between bus i and bus j: the bigger α_ij, the greater the voltage attenuation at bus i when a disturbance occurs at bus j. Thus, when a generator outage occurs, the amplitude of the voltage fluctuation near this generator is large, so the voltage attenuation at nodes with a small VED to the generator is also large. To ensure the voltage profile returns to its stability margin, the amount of load shedding at each bus can be calculated on the principle that the smaller the VED, the larger the load-shedding power, and vice versa. The relationship between the generator and the loads is shown in Figure 2. (3) Third Criterion: The Load Importance Factor (LIF). The parameters of the LIF weight, W_LIF, were calculated by the fuzzy AHP algorithm as suggested in [34]. The LIF shows how important the loads are relative to each other when the assessor considers mainly the economic aspect. In other words, the larger W_LIF is, the greater the damage caused by shedding that load.
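A corresponding sketch for Steps 2-4 of the VED is given below. Because formulas (3)-(6) are not reproduced in the extracted text, the customary electrical-distance forms α_ij = (∂V_i/∂Q_j)/(∂V_j/∂Q_j) and D_V(i, j) = −log(α_ij·α_ji) are assumed, together with the same normalization as for the RPAS; the numerical matrix is invented for illustration.

```python
import numpy as np

def ved_weights(J_QV, load_buses, outage_bus):
    """Sketch of the VED steps (Steps 2-4 above).

    Assumed definitions (formulas (3)-(6) are not reproduced in the text):
      alpha_ij = (dV_i/dQ_j) / (dV_j/dQ_j)
      D_V(i, j) = -log10(alpha_ij * alpha_ji)
      W_DV(i, j) = D_V(i, j) normalized over the candidate load buses.
    """
    S = np.linalg.inv(J_QV)                     # Step 2: sensitivity matrix dV/dQ

    def alpha(i, j):
        return S[i, j] / S[j, j]                # Step 3: voltage coupling of bus i to bus j

    D_V = {i: -np.log10(alpha(i, outage_bus) * alpha(outage_bus, i))
           for i in load_buses}                 # Step 4: symmetrical electrical distance
    total = sum(D_V.values())
    return {i: D_V[i] / total for i in load_buses}

# Hypothetical example with a made-up 3x3 [J_QV]; bus 2 hosts the outage generator.
J_QV = np.array([[15.0, -5.0, -7.0],
                 [-5.0, 12.0, -4.0],
                 [-7.0, -4.0, 16.0]])
print(ved_weights(J_QV, load_buses=[0, 1], outage_bus=2))
```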
Stage 2: Determining the Weights of Criteria.
Based on the established hierarchy structure, there are three steps to determine the weights of the criteria. Firstly, pair-wise comparison (judgment) matrices are formed to measure the relative importance of each pair of criteria.
The pair-wise comparison scale proposed by Saaty [44] is applied, as shown in Table 1. A pair-wise comparison matrix is described by the following equation; the value of p_ji is equal to the reciprocal of p_ij in the pair-wise comparison matrix.
where P is the pair-wise comparison matrix and p_ij is the importance of the i-th criterion relative to the j-th criterion. Secondly, the largest eigenvalue and the corresponding eigenvector of the pair-wise comparison matrix are calculated. The relation among the largest eigenvalue, the eigenvector, and the pair-wise comparison matrix is defined by the eigenvalue equation Pω = λ_max·ω. The eigenvector is then normalized to obtain the weight vector of the corresponding criteria.
where P is the pair-wise comparison matrix, λ_max is its largest eigenvalue, and ω is the corresponding eigenvector. Thirdly, the consistency index and consistency ratio of the pair-wise comparison matrix are evaluated, as inconsistency may arise from subjective expert judgment. They are defined by CI = (λ_max − n)/(n − 1) and CR = CI/RI. A pair-wise comparison matrix is acceptable if the stochastic consistency ratio satisfies CR < 0.10.
where CI is the consistency index of a pair-wise comparison matrix, CR is the consistency ratio of the matrix, RI is the random index of the matrix, λ max is the largest eigenvalue of the matrix, and n is the number of criteria in the matrix.
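The three steps of Stage 2 can be implemented directly, as in the sketch below: the principal eigenvector of the pair-wise comparison matrix is normalized to obtain the criteria weights, and CI and CR are checked against the 0.10 threshold. The example judgments for LIF, RPAS, and VED are invented for illustration and are not the expert judgments used in the paper.

```python
import numpy as np

# Saaty's random index RI for matrices of order n = 1..5
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(P):
    """Return (weights, lambda_max, CI, CR) for a pair-wise comparison matrix P."""
    n = P.shape[0]
    eigvals, eigvecs = np.linalg.eig(P)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                             # normalized weight vector
    CI = (lam_max - n) / (n - 1)                # consistency index
    CR = CI / RANDOM_INDEX[n]                   # consistency ratio
    return w, lam_max, CI, CR

# Illustrative (not the paper's) judgments for the three criteria LIF, RPAS, VED,
# e.g. LIF judged twice as important as RPAS and three times as important as VED.
P = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

w, lam_max, CI, CR = ahp_weights(P)
print("weights:", w.round(3), "lambda_max:", round(lam_max, 3), "CR:", round(CR, 4))
assert CR < 0.10, "judgment matrix is not sufficiently consistent"
```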
Stage 3: Calculating the Overall Scores.
The overall score of each load bus is calculated using equation (11). The higher the overall score of a load bus, the more important the load bus is. This means that the higher a load is ranked, the less shedding power is distributed to that load bus.
where μ_Ãi is the overall score of each load bus, W_i is the weight of the i-th criterion, and W_D,j are the values representing the weights of the load bus under each criterion.
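Equation (11) is not reproduced in the extracted text; assuming the natural weighted-sum reading of the description above, the overall score can be computed as in the following sketch, in which all numerical values are hypothetical.

```python
def overall_scores(criteria_weights, bus_weights):
    """Assumed weighted-sum form of equation (11).

    criteria_weights: {criterion: W_j}, e.g. {"LIF": 0.54, "RPAS": 0.30, "VED": 0.16}
    bus_weights:      {criterion: {bus: W_D,j for that bus under that criterion}}
    Returns {bus: overall score mu_i}; smaller scores shed more power.
    """
    buses = next(iter(bus_weights.values())).keys()
    return {b: sum(criteria_weights[c] * bus_weights[c][b] for c in criteria_weights)
            for b in buses}

# Hypothetical numbers for three load buses (illustration only):
scores = overall_scores(
    {"LIF": 0.54, "RPAS": 0.30, "VED": 0.16},
    {"LIF":  {"bus3": 0.50, "bus5": 0.30, "bus13": 0.20},
     "RPAS": {"bus3": 0.20, "bus5": 0.45, "bus13": 0.35},
     "VED":  {"bus3": 0.25, "bus5": 0.40, "bus13": 0.35}})
print(scores)   # e.g. {'bus3': 0.370, 'bus5': 0.361, 'bus13': 0.269}
```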
Calculate the Minimum Load-Shedding Power and Distribute Load-Shedding Power at the Load Buses.
After the overall weights are calculated for each load bus, the shedding power can be distributed to each demand load bus according to the flow chart in Figure 3.
Distributing the shedding power among the load buses requires two processes. In the first process, the overall weights are calculated from the grid configuration and the location of the outage generator with the support of the AHP algorithm; the results enter equation (11). In the second process, when a generator outage occurs and load shedding has to be implemented, the load-shedding power is calculated taking into account the primary and secondary frequency controls, which reduces the amount of shedding. This minimizes damage to customers due to power outages.
Primary and Secondary Frequency Controls in the Power System.
The process of frequency control when there is a disturbance in the power system consists of two stages: level 1 control, or primary frequency control, and level 2 control, or secondary frequency control [45]. If, after performing the level 2 control, the frequency has not returned to the allowable value, load-shedding control must be implemented to restore the frequency to the allowable value. The generator frequency-control process, including primary and secondary frequency controls, is described in [46,47]. This control process is shown in Figure 4.
In summary, in the case of a generator outage or a power imbalance between the load and generation, the power system implements primary and secondary frequency controls. If, after the secondary frequency-control adjustment, the system frequency has still not recovered to the permissible value, load shedding is implemented to restore the frequency.
This is the last mandatory measure to avoid grid blackout and power system collapse.
Establish the Minimum Load-Shedding Power.
The calculation of the minimum load-shedding power ensures that the minimum amount of power is shed while restoring the power system frequency to the permissible value, thereby minimizing damage to electricity users. The computation takes into account the primary and secondary control of the generator group in accordance with actual operation. The relationship between the load power variation and the frequency variation is determined by formula (12), where P_L is the active power of the load, ΔP_D is the change of load power with frequency, and D is the percentage change of load per percentage change of frequency; D typically ranges from 1% to 2% and is determined experimentally in power systems [48]. For example, a value of D = 2% means that a 1% change in frequency will cause a 2% change in load. In a power system with n generators and m loads, when a generator outage occurs, the primary frequency control of the (n − 1) remaining generators adjusts their power according to the following expression, where ΔP_Primary control,i is the primary control power of the i-th generator, P_Gn,i is the rated power of the i-th generator, Δf_1 = f_1 − f_0 is the frequency attenuation, and f_0 is the rated frequency of the power system. When a generator outage occurs, the difference between the generation and the load power P_L produces a frequency deviation; in particular, the frequency is attenuated. The frequency-dependent part of the load is reduced by the amount ΔP_D given in formula (12). The power-balance condition is presented in the following formulas. Setting ΔP_L = P_L − Σ_{i=1}^{n−1} P_{G,i} and β = P_L·D + Σ_{i=1}^{n−1} P_{Gn,i}/R_i, formula (17) yields the established frequency in formula (18). When the secondary control power is additionally considered to restore the frequency, the new power balance with the new frequency value f_2 turns equation (14) into a new form, where ΔP_Secondary control max is the maximum amount of secondary control power available in the power system. This amount of secondary control power is determined by equation (20): ΔP_Secondary control max = Σ_j (P_Gm,j − ΔP_Primary control,j), where P_Gm,j is the maximum generating power of the secondary frequency-control generator j and ΔP_Primary control,j is the primary control power of that generator.
If, after including the secondary control process, the system frequency has still not been restored to the allowable value, then load shedding is required to recover the frequency; the minimum amount of load-shedding power P_LSmin is calculated by the following equations, where Δf_allow = f_0 − f_allow is the allowable frequency attenuation.
Formula (22) can be abbreviated into a more compact expression for P_LSmin.
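To make this control bookkeeping concrete, the sketch below computes the frequency established after primary control and then the minimum shedding once the secondary reserve is exhausted. Because the bodies of formulas (12)-(22) are not reproduced in the extracted text, the per-unit droop and load-damping model used here (Δf/f_0 = −ΔP/β with β = P_L·D + Σ P_Gn,i/R_i) is an assumption consistent with the definitions above, and all numbers are hypothetical rather than the paper's system data.

```python
def established_frequency(f0, P_L, D, gen_ratings, droops, lost_power):
    """Frequency after primary control (assumed form of formula (18)).

    beta = P_L*D + sum(P_Gn_i / R_i) combines load damping and governor droop,
    in MW per per-unit frequency deviation.
    """
    beta = P_L * D + sum(p / r for p, r in zip(gen_ratings, droops))
    df_pu = -lost_power / beta            # per-unit frequency deviation
    return f0 * (1.0 + df_pu)

def minimum_load_shedding(f0, f_allow, P_L, D, gen_ratings, droops,
                          lost_power, secondary_reserve):
    """Assumed compact form of the P_LSmin calculation: shed just enough load so
    that, at the allowed frequency deviation, the remaining imbalance is covered
    by the primary response and the available secondary reserve."""
    beta = P_L * D + sum(p / r for p, r in zip(gen_ratings, droops))
    df_allow_pu = (f0 - f_allow) / f0
    P_LS = lost_power - secondary_reserve - beta * df_allow_pu
    return max(P_LS, 0.0)

# Hypothetical numbers (not the paper's system data):
f = established_frequency(f0=60.0, P_L=750.0, D=1.5,
                          gen_ratings=[150] * 8, droops=[0.05] * 8, lost_power=150.0)
P_LS = minimum_load_shedding(f0=60.0, f_allow=59.7, P_L=750.0, D=1.5,
                             gen_ratings=[150] * 8, droops=[0.05] * 8,
                             lost_power=150.0, secondary_reserve=10.7)
print(round(f, 2), "Hz; minimum shedding:", round(P_LS, 2), "MW")
```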
Distribute Load-Shedding Power at the Load Buses.
After calculating the overall weights and P_LSmin, the load-shedding power at each load bus can be distributed following the principle of load sharing in a parallel circuit, where P_LSi is the amount of load-shedding power at the i-th bus, μ_eq is the equivalent weight of all load buses, μ_Ãi is the overall weight of the i-th bus, and P_LSmin is the total minimum load-shedding power.
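The parallel-circuit analogy implies that each bus's share is inversely proportional to its overall weight, so that buses with small μ_Ãi shed more. Since the body of the distribution formula (referred to later as formula (25)) is not reproduced here, the form P_LSi = (μ_eq/μ_Ãi)·P_LSmin with 1/μ_eq = Σ_i 1/μ_Ãi used in the sketch below is an assumption; it reproduces this behavior and the shares sum exactly to P_LSmin.

```python
def distribute_shedding(P_LS_min, overall_weights):
    """Assumed inverse-weight sharing, analogous to current division among
    parallel resistors (small weight -> large share).

    overall_weights: {bus: mu_i} from equation (11).
    Returns {bus: P_LSi}, with sum(P_LSi) == P_LS_min.
    """
    mu_eq = 1.0 / sum(1.0 / mu for mu in overall_weights.values())
    return {bus: (mu_eq / mu) * P_LS_min for bus, mu in overall_weights.items()}

# Using the hypothetical overall scores computed in the earlier snippet:
shares = distribute_shedding(17.64, {"bus3": 0.370, "bus5": 0.361, "bus13": 0.269})
print({b: round(p, 2) for b, p in shares.items()})   # the smallest weight sheds the most
```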
Case Studies.
The efficiency of the suggested approach is tested on the IEEE 37-bus, 9-generator system [47,49], which is shown in Figure 5. All test cases are simulated using PowerWorld GSO 19 software. The calculations are compared with the traditional load-shedding method using an under-frequency load-shedding relay.
In the studied case, generator JO345 #1 (bus 28) experiences an outage and is disconnected from the grid. Using formula (18), the established frequency after the outage of generator JO345 #1 (bus 28) is calculated to be 59.6 Hz. This frequency is below the allowed value. Therefore, the generator control process must be executed to re-establish the frequency. The primary frequency control is performed automatically: the governor reacts immediately after generator JO345 #1 (bus 28) goes out of service. The primary control power values for each generator turbine are shown in Table 2.
Because the recovery frequency is less than the allowed value, the secondary frequency-control process must be implemented after the primary control. Standby generator power is mobilized to perform the secondary control. In the IEEE 37-bus, 9-generator power system diagram, SLACK 345 (the slack bus) was chosen as the secondary frequency-control generator. In this case, using equation (20), the calculated amount of secondary control power is 10.72 MW. A graphical simulation of the system frequency after the implementation of the secondary control is illustrated in Figure 6.
Thus, after carrying out the secondary control process, the recovered frequency is 59.66 Hz, which is still below the allowed value. Therefore, the ultimate solution is load shedding to restore the frequency to the allowable value. Formula (24) is applied to calculate the minimum amount of load-shedding power required to restore the frequency to the permissible value.
In a 60 Hz power system, the permissible frequency attenuation Δf_allow is 0.3 Hz.
In summary, the minimum load-shedding power P_LSmin is 17.64 MW.
We implemented the same calculation steps for several other case studies, computing the value of the system frequency, the amount of primary and secondary control power, and the load power to be reduced. The calculation results for these case studies are shown in Table 3. These results are the basis for distributing the load-shedding power among the load buses according to the overall weights of the criteria.
After computing the minimum amount of load-shedding power, the next step calculates the load importance factor (LIF), the reciprocal phase angle sensitivity (RPAS), and the voltage electrical distance (VED). The RPAS and VED weights are obtained using formulas (2) and (6). The parameters of the load importance factor (LIF) were calculated with the fuzzy AHP algorithm and published by the authors in [34]. The overall weights for coordinating the criteria are calculated using the theory of Stage 2 (determining the weights of criteria) and expert opinion, which yields the pair-wise comparison matrix P. The eigenvector is then calculated from the matrix P. Equations (8)-(10) are used to calculate the largest eigenvalue (λ_max), the consistency index (CI), and the stochastic consistency ratio (CR), respectively. The results show that the stochastic consistency ratio CR = 0.00775 < 0.1, so the proposed judgment matrix is consistent.
After obtaining the weights of the criteria, the next step applies formula (11) to determine the overall weight of each load bus and formula (25) to calculate the amount of power to be shed at each load bus; the values are shown in Table 4 and Figure 7. A smaller overall weight indicates that the load bus has a lower LIF, a smaller RPAS, and a smaller VED, and therefore that this load is shed first and with a larger amount of load-shedding power, and vice versa. The suggested approach is compared with an under-frequency load-shedding relay approach; these values are shown in Table 5 [34].
It can be seen that the proposed load-shedding method sheds less power (65.19 MW) than the UFLS method, thereby substantially reducing the damage caused by power outages while simultaneously satisfying the goal of combining a variety of economic and technical criteria.
Discussion
The AHP method is quite simple and intuitive in a three-criterion environment and readily supports the calculation of the overall weights of the criteria. In this study, the pair-wise comparison matrices are formed by one expert. If there are many experts, group decision-making methods can be adopted to aggregate the pair-wise comparison matrices determined by these experts [50]; for example, a weighted geometric mean can be used for this aggregation. In addition, if the values of the pair-wise comparisons are uncertain, the combined use of AHP with fuzzy methods is another possible solution [34].
For a very large power system, the failure of one generator has a negligible influence on the technical parameters of the entire grid. It becomes significant only when very serious incidents occur and the large power system splits into smaller systems or islands. In particular, in a larger power grid, areas far from the outage generator are not affected much; only the areas near and around the outage generator are. Therefore, this situation does not require taking the entire grid into consideration. In this case, the problem should only be considered within a range around the "observation area" affected by the outage generator. The determination of RPAS and VED supports determining this range of influence. However, the RPAS and VED values for each grid configuration need to be studied further. At this time, the opinion of the power system expert supports delimiting the "observation areas" and "inter-observation areas".
Conclusions
The calculation of the overall weights includes the following criteria: the Reciprocal Phase Angle Sensitivity (RPAS), the Voltage Electrical Distance (VED), and the Load Importance Factor (LIF). This ensures multicriteria decision-making that meets economic and technical factors. The analytic hierarchy process algorithm is applied to calculate the weights of the criteria and combine them into an overall weight. This weight is used to rank the load buses and distribute the shedding power among them. The computation of the amount of load shedding includes the generator control processes, which makes the load-shedding power smaller than with the UFLS method while restoring the frequency to the allowed value. Distributing the shedding power at each demand load bus based on the overall weights ensures coordination of economic and technical criteria and reduces technical and economic losses for power companies and customers. The efficiency of the suggested approach has been verified on the 37-bus, 9-generator system through case studies, and its performance is better than that of the traditional UFLS method.
The results prove that the suggested approach reduces the amount of shedding power while still meeting the technical and economic operating conditions of the network. In future work, the load-shedding scheme should additionally aim to minimize the economic and technical losses of both power companies and customers; to solve this multiobjective problem, algorithms such as the genetic algorithm and PSO will need to be applied. The feasibility of the proposed technique has been shown on the 37-bus system with 9 generators under various experiments.
Data Availability
The data of the expert opinions used in formulating the judgment matrix, and the parameter values and primary control power of the generators used to support the findings of this study, are included within the article. The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
All the authors contributed to the study's conception and design. Topical guidance was provided by Nghia T. Le. Material preparation, data collection, and analysis were performed by An T. Nguyen, Trang H. Thi, Vu Nguyen Hoang Minh, Anh H. Quyen, and Binh T. T. Phan. The first draft of the manuscript was written by Nghia T. Le, and all the authors commented on previous versions of the manuscript. All the authors read and approved the final manuscript.
|
2021-10-16T15:16:46.888Z
|
2021-10-14T00:00:00.000
|
{
"year": 2021,
"sha1": "d620f7bbca1b55b140ff327d0c19712eda7c85d3",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2021/6834501.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5b5691eeb125996609200ed80e6e2792656828a0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
3119123
|
pes2o/s2orc
|
v3-fos-license
|
Mixed methods study of engagement in behaviors to prevent type 2 diabetes among employees with pre-diabetes
Background Many employers use screenings to identify and recommend modification of employees' risk factors for type 2 diabetes, yet little is known about how often employees then engage in recommended behaviors and what factors influence engagement. We examined the frequency of, facilitators of, and barriers to engagement in recommended behaviors among employees found to have pre-diabetes during a workplace screening. Methods We surveyed 82 University of Michigan employees who were found to have pre-diabetes during a 2014 workplace screening and compared the characteristics of employees who 3 months later were and were not engaged in recommended behaviors. We interviewed 40 of these employees to identify the facilitators of and barriers to engagement in recommended behaviors. Results 3 months after screening, 54% of employees with pre-diabetes reported attempting to lose weight and getting recommended levels of physical activity, had asked their primary care provider about metformin for diabetes prevention, or had attended a Diabetes Prevention Program. These employees had higher median levels of motivation to prevent type 2 diabetes (9/10 vs 7/10, p<0.001) and lower median estimations of their risk for type 2 diabetes (40% vs 60%, p=0.02). Key facilitators of engagement were high motivation and social and external supports. Key barriers were lack of motivation and resources, and competing demands. Conclusions Most employees found to have pre-diabetes through a workplace screening were engaged in a recommended preventive behavior 3 months after the screening. This engagement could be enhanced by optimizing motivation and risk perception as well as leveraging social networks and external supports.
INTRODUCTION
Nearly 30% of US adults have pre-diabetes, 1 an asymptomatic condition associated with a threefold greater annual incidence of type 2 diabetes 2 and a 50% greater risk of developing cardiovascular disease. [3][4][5][6] The landmark Diabetes Prevention Program (DPP) trial demonstrated that patients with pre-diabetes can significantly reduce their risk of developing type 2 diabetes through weight loss and physical activity or use of metformin. 7 8 Owing to the effectiveness 7 and costeffectiveness 8 of these strategies, the Centers for Disease Control and Prevention (CDC) is now disseminating the DPP in communities across the USA, 9 and many insurers are now covering the DPP. 10 Identification of individuals who could benefit from these preventive strategies begins with screening tests 11 that are widely available in healthcare settings and are increasingly being conducted as part of workplace screenings which feature health risk appraisals (HRAs), blood draws or both. 12 13 Such screenings represent an opportunity to help at-risk individuals better understand their risk for type 2 diabetes and engage in behaviors that could help them to reduce this risk. These screenings are likely to accelerate in use due to the Prevent Diabetes STAT initiative, a joint effort by the American Medical Association (AMA) and CDC to identify more Americans with pre-diabetes and connect them with DPPs, 14 and the US Preventive Services Task Force's recent broadening of criteria for screening for type 2 diabetes. 15 Key messages ▪ Most employees found to have pre-diabetes through a workplace screening were engaged in a recommended strategy to prevent type 2 diabetes about 3 months after the screening. ▪ Self-directed efforts to lose weight and achieve recommended levels of physical activity were more common than participation in a Diabetes Prevention Program or use of pharmacotherapy. ▪ Key facilitators of engagement in behaviors to prevent type 2 diabetes were high motivation, assistance and encouragement from social networks, and use of external supports such as tracking devices. ▪ Important barriers to engagement in behaviors to prevent type 2 diabetes were low motivation, competing demands, and insufficient resources to support healthy behaviors.
Currently there are evidence-based strategies to help at-risk individuals prevent or delay their progression to type 2 diabetes, widely available ways to identify individuals who might benefit, and national initiatives to identify more patients with pre-diabetes. However, little is known about how people actually respond to being told they have pre-diabetes following a screening test. [16][17][18][19][20] This lack of evidence limits understanding of the effects of current practices and precludes the development of effective strategies to optimize engagement of at-risk individuals in preventive strategies. Accordingly, the objective of this mixed methods study was to describe the frequency of, facilitators of, and barriers to engagement in recommended behaviors to prevent type 2 diabetes among employees found to have pre-diabetes during a workplace screening.
Study sample
We conducted a mixed methods observational study of University of Michigan (U-M) employees who were found to have pre-diabetes during employer-sponsored screenings organized by MHealthy, U-M's campus-wide wellness program. The screenings were advertised through email and flyers and conducted across multiple days and times at numerous locations. Employees of U-M were offered a $100 incentive for completing the screening, which included an HRA and measurements of body mass index (BMI), blood pressure, blood glucose, and cholesterol. Participants were asked, but not required, to be fasting. Screenings were conducted between 1 January 2014 and 31 May 2014.
Screening test results were communicated in person by a registered nurse health coach from The StayWell Company, LLC ('StayWell'). Prior to the communication of fasting blood glucose (FBG) test results, the health coach verified that the employee had been fasting. When an employee had an FBG measurement of 100 to 125 mg/dL (inclusive), the health coach informed the employee that their FBG was in the pre-diabetes range and provided brief counseling about this condition using talking points based on American Diabetes Association 21 and National Diabetes Prevention Program 9 guidelines (see online supplementary figure S1). Employees with pre-diabetes were also provided with a one-page handout that summarized these points and listed contact information for local DPPs.
Recruitment
Employees identified as having pre-diabetes during screenings were invited by the StayWell health coaches to be contacted by our research team to learn more about a study 'designed to identify ways to help people prevent type 2 diabetes'. Employees who agreed to be contacted signed a form and provided their contact information. Research staff then contacted these individuals by phone or email. Those who were willing to participate were emailed a survey link to an online informed consent document.
Online surveys
Individuals who provided their free and informed consent were asked to complete two online surveys. A link to the first survey was emailed to participants as soon as possible after they provided informed consent. This first survey inquired about demographics and health literacy, 22 23 and served as an initial contact with participants to facilitate subsequent data collection in the second survey. The first survey was completed in a median of 18.5 days after screenings (IQR 9-34). The response rate for the first survey was 100%.
Approximately 2 months after completion of the first survey, participants were emailed a link to a second survey. This survey contained questions about engagement in our key behaviors of interest such as, stage of change related to weight loss, discussing pharmacotherapy, and participating in a DPP; 24 and physical activity in the past 7 days. 25 This survey also included questions about potential mediators of engagement in our key behaviors of interest, such as knowledge of whether type 2 diabetes is preventable; perceptions of risk for type 2 diabetes; 26 27 level and locus of motivation to prevent type 2 diabetes; 28 and perceived competence to (1) attempt weight loss, (2) increase physical activity, (3) discuss with one's primary care provider pharmacotherapy for prevention of type 2 diabetes, and (4) participate in a DPP. 29 We measured locus of motivation to prevent type 2 diabetes using the Treatment Self-Regulation Questionnaire (TSRQ). The TSRQ includes subscales that permit measurement of three different types of motivation: autonomous motivation (ie, motivation that comes from internal sources), controlled motivation (ie, motivation that comes from external sources), and amotivation (ie, the absence of motivation). 28 These three motivational constructs are important because autonomous motivation is more likely to produce long-term, sustained healthy behaviors relative to either controlled motivation or amotivation. 30 The second survey was completed in a median of 94 days after screenings (IQR 78-111). The response rate for the second survey was 96%.
For both surveys, participants received email reminders to encourage completion. After 10 attempts to contact, participants were categorized as being lost to follow-up. Participants received a $10 gift card for each survey completed. All surveys were completed between February and October 2014.
Semistructured telephone interviews
After the surveys, we conducted semistructured telephone interviews with participants to identify facilitators of and barriers to engagement in behaviors to prevent type 2 diabetes. For purposive sampling, we categorized participants based on a manual review of their second survey responses as either (1) attempting weight loss and having gotten at least 150 min of moderate physical activity in the past week, having discussed pharmacotherapy for prevention of type 2 diabetes with a primary care provider since the screening, or participating in a DPP; or (2) not having carried out any of these things recommended to them at the screening. We then randomly selected participants in each of these 2 groups and asked them via email to participate in a telephone interview.
Interviews consisted of open-ended questions which aimed to elicit participant-identified facilitators of and barriers to engagement in recommended behaviors (ie, factors that participants felt made behavior change easier and harder, respectively), as well as emotional reactions to receiving a pre-diabetes diagnosis, understanding of pre-diabetes, and reasons for any behavior changes.
Participants who were invited to participate in an interview received email reminders to set up an interview appointment, and after 10 unsuccessful attempts to contact were categorized as being lost to follow-up. Participants who completed an interview received a $10 gift card. In each of the two groups, we stopped conducting interviews once we reached thematic saturation. Eighty-three per cent of participants who were invited to be interviewed completed an interview. All interviews were conducted between August and November 2014.
Statistical analysis
We used data from both surveys to compare demographic characteristics and potential behavioral mediators of participants who either (1) were attempting weight loss and had gotten at least 150 min of moderate physical activity in the past week, had discussed pharmacotherapy for prevention of type 2 diabetes with a primary care provider since the screening, or were participating in a community DPP; or (2) had not carried out any of these things recommended to them at the screening. We compared continuous variables using Wilcoxon rank-sum tests. For categorical variables we used χ 2 tests or Fisher's exact tests. We used Stata V.13 (Stata Corp, College Station, Texas, USA) to conduct all analyses.
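For readers who wish to reproduce this style of comparison, the sketch below applies the same classes of tests (Wilcoxon rank-sum for continuous variables; chi-square or Fisher's exact tests for categorical ones) using scipy. The data shown are invented placeholders, not the study data, and the original analysis was performed in Stata V.13.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data standing in for the survey variables (not the study data):
motivation_engagers = rng.integers(6, 11, size=44)      # 1-10 motivation scale
motivation_nonengagers = rng.integers(4, 10, size=38)
contingency = np.array([[20, 24],    # rows: engager / non-engager
                        [10, 28]])   # cols: e.g. prior pre-diabetes diagnosis yes / no

# Continuous variable: Wilcoxon rank-sum (Mann-Whitney) test
u_stat, p_cont = stats.mannwhitneyu(motivation_engagers, motivation_nonengagers,
                                    alternative="two-sided")

# Categorical variable: chi-square test, or Fisher's exact test when counts are small
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)
odds_ratio, p_fisher = stats.fisher_exact(contingency)

print(f"rank-sum p = {p_cont:.3f}; chi2 p = {p_chi2:.3f}; Fisher p = {p_fisher:.3f}")
```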
Qualitative analysis
All interviews were audio recorded and transcribed verbatim. Four members of the research team independently reviewed a subset of transcripts using modified grounded theory to identify salient themes. 31 They then met to discuss the themes, refine them, and achieve consensus on codes and their definitions. Once the coding scheme was established, two members of the research team independently coded all transcripts in Dedoose software. They then met to discuss their coding and resolve differences by consensus. After all transcripts were double-coded, the research team met to discuss the code summaries and memos, and to identify key themes with a focus on facilitators of and barriers to engagement in recommended behaviors. 32 The study protocol was approved by the University of Michigan Medical School Institutional Review Board. Data were analyzed in 2015.
Sample characteristics
Of the 17 310 U-M employees who attended the screenings, 5279 provided an FBG and 523 had an FBG in the pre-diabetes range. Eighty-five of these 523 pre-diabetic individuals consented to study participation and 82 completed both surveys (see online supplementary figure S2). Most (72.0%) of the participants were women, white (73.2%), and had at least a college education (59.8%). The median age was 50.5 (IQR 40 to 56.5) and the median household income was $75 000 (IQR $55 000 to $125 000). Approximately two-thirds (65.9%) had never been told they have pre-diabetes (see online supplementary table S1). The age, gender, and race of the 82 employees who consented to and completed both surveys were not significantly different from the 523 employees who in the screening had an FBG of 100 to 125 mg/dL.
Quantitative analyses
Approximately 3 months after the screening, 53.7% of participants were engaged in at least one strategy to prevent type 2 diabetes that had been recommended to them by a health coach at the screening (figure 1). Most participants (79.3%) were trying to lose weight, and most (57.3%) reported they had gotten at least 150 min of at least moderate physical activity in the last week. Fewer than half (45.1%) reported they were trying to both lose weight and had gotten at least 150 min of at least moderate physical activity in the last week. Nearly 1 in 5 (18.3%) had talked with a primary care provider about whether metformin for prevention of type 2 diabetes would be right for them. Few (3.7%) reported having participated in a DPP in the past 30 days.
Compared with participants who were not engaged in at least one recommended strategy to prevent type 2 diabetes (ie, 'non-engagers'), there were no differences in the demographic characteristics of participants who were engaged in at least one recommended strategy (ie, 'engagers') (table 1). Compared with non-engagers, engagers reported higher median levels of motivation to prevent type 2 diabetes (9 vs 7 (on a scale of 1-10), p<0.001), higher median levels of personal importance of avoiding type 2 diabetes (10 vs 9 (on a scale of 1-10), p=0.01), and lower median estimations of their risk for developing type 2 diabetes in the next 3 years (40% vs 60%, p=0.02). Engagers also had higher median levels of autonomous motivation. There were no differences between engagers and non-engagers in other potential mediators of engagement in behaviors to prevent type 2 diabetes, such as a previous diagnosis of pre-diabetes, knowledge that type 2 diabetes is preventable, or perceived competence to engage in each recommended preventive behavior.
Qualitative analyses
We conducted semistructured telephone interviews with 22 engagers and 18 non-engagers. From these interviews, key themes emerged about participant-identified facilitators of engagement in behaviors to prevent type 2 diabetes (box 1) and barriers to engagement (box 2). Among the 22 engagers, the two predominant facilitators of engagement in behaviors to prevent type 2 diabetes, each cited by 14 participants, were the support from a member of their social network (eg, family or friends) and use of external resources or tools, such as rewards and activity trackers. For example, one participant who cited the support of social network members said, "It was helpful to me that I have family members who have experienced the same thing… having both family members that I could talk to that have gone through this…" A participant who used an activity tracker said, "I have a Fitbit that makes it easier cause I like to challenge myself to make sure I get my steps in every day. So, lots of times, I'll get home in the evening and I'll see them at 9000 steps and I'll like, go out and walk up and down the driveway." Motivation to avoid getting type 2 diabetes was cited by 13 engagers, one of whom noted, "If it hadn't been for the prediabetes I probably would have left it where I was at. So that definitely was a big motivator." In addition, engagers described barriers they faced while engaging in behaviors to prevent type 2 diabetes: 17 cited competing demands, 11 mentioned periods of low motivation, and 11 identified people in their social network who impeded their efforts.
Among the 18 non-engagers, the predominant barrier to engagement in behaviors to prevent type 2 diabetes, cited by 16 participants, was competing demands such as work or family responsibilities. One participant explained, "The kids get busy in school and activities and…stuff for myself seems to be the first thing that gets cut out." The next most common barrier, cited by 12 participants, was insufficient motivation to engage in behavior change. One participant who faced this challenge said, "The answer is I need to put myself on a program and stick to it. If I do that, it'll work. I just need to be motivated and take the time and effort…the only problem is me." Ten participants described lack of resources, such as affordable healthy foods or exercise facilities to support healthy choices, as a key barrier. For example, one participant stated, "I don't understand why it just seems so much harder to buy the healthier foods, the salads…I incorporate it in my diet every day but, you know after a while you have to stretch your money and… sometimes all you can get is lunch meat and bread." Non-engagers also described facilitators of attempts to engage in behaviors to prevent type 2 diabetes: eight cited external resources or tools (eg, rewards and activity trackers), seven identified supportive people in their social network, and seven mentioned access to key resources like exercise facilities or healthy foods.
Box 1 Main facilitators and illustrative quotes from interviews with employees engaged in recommended behaviors
Support from social network ▸ "Having the family involved in and supportive…of those changes helps because if…we all don't change diet or…make time to be more active then it's hard for me to do it on my own." ▸ "I have made the choice that 3 days a week I'm going to… attend a class and work out…As a family we've chosen to be active…not only just for me but for the whole family." Use of external supports ▸ "That is a huge motivator for me, to have to go log into the computer…I really thought about it a lot more…I have to think that it's the…emails you know that we get reminding us to record our time and all of the other little links to personal stories and recipes and those things seem to plant a seed in my head." ▸ "I have some reminders on my phone…I have a weight checking app that I got just recently…I do not expect them to maintain my attention for very long but while I am using them they keep it more frame of mind." High motivation ▸ "I just kind of want to get myself back in shape and…healthy you know because I am getting older so I want to get…healthier now." ▸ "I have access to working out and everything, and I just think I'm more motivated because I don't want it ( pre-diabetes) to progress."
Box 2 Main barriers and illustrative quotes from interviews with employees not engaged in recommended behaviors
Competing demands ▸ "I definitely didn't participate in anything…mainly because of time, and I knew I was stretched pretty thin as it was, and it would just feel like one more obligation that I couldn't follow through on." ▸ "There is always more work than time and so when I walk away from my desk to get on the treadmill, I am letting something sit that could be done and that is the way we all feel I think." Low motivation ▸ "Mentally I know what I need to do…I don't know if I want to call it the leap, but I need to make the commitment and then hold myself accountable for it, and that is where I struggle." ▸ "I know that (exercise) would work for me too. It would. I think it would make a significant difference, but I just can't get myself to that place where I'm motivated." Lack of resources to support healthy choices ▸ "I'm traveling a lot and…when I'm traveling, it's just really hard to…find a gym, and…find a healthy place to eat." ▸ "I just live in a smaller community and nothing like that (a DPP) is really around…where I live, so there's not really a whole lot of access to some of that stuff."
DISCUSSION
In this study of employees found in a workplace screening to have pre-diabetes, most had engaged in at least one recommended behavior to prevent type 2 diabetes in the 3-month period after the screening. Using both quantitative and qualitative data, we also identified key facilitators of and barriers to this engagement. While other studies have examined the relationship between awareness of a pre-diabetes diagnosis and risk-reducing behaviors, 1 20 33 our study is one of the first to describe the frequency of engagement in risk-reducing behaviors to prevent type 2 diabetes among employees found to have pre-diabetes through a workplace screening and to identify key opportunities to optimize their engagement in preventive strategies. Many more employees we surveyed ∼3 months after they were screened reported engaging in self-directed efforts to lose weight and achieve recommended levels of physical activity than in a community DPP. While the main DPP clinical trial identified specific weight loss and physical activity targets that individuals can aim for to prevent or delay the onset of type 2 diabetes, 7 it is unclear whether such individually-directed efforts can be as effective in preventing or delaying the onset of type 2 diabetes as formal programs such as DPPs that offer structured, ongoing support. 11 34 Although the DPP continues to be disseminated in communities across the USA, 9 including southeast Michigan where many of the study participants lived, our interviews revealed that competing demands such as work and family responsibilities often impeded engagement in structured programs inside or outside of the workplace. An alternative approach that could help some busy employees engage in behavior change while still receiving ongoing support would be to encourage them to engage in online versions of the DPP that can be accessed on demand. 35 Nearly one in five employees we surveyed had discussed metformin with a primary care provider since the biometric screening. Although we did not ask participants whether they were actually taking metformin for the prevention of type 2 diabetes at follow-up, our results suggest that prompting discussion of metformin with a primary care provider could be a way to ensure that people with pre-diabetes are considered for this preventive therapy.
As more Americans with pre-diabetes are likely to be diagnosed through the AMA-CDC Prevent Diabetes STAT initiative 14 and the US Preventive Services Task Force's recent broadening of criteria for screening for type 2 diabetes, 15 our findings point to several potential opportunities to optimize engagement of these individuals in efforts to reduce their risk for type 2 diabetes. For example, in our surveys we found that engagers had more modest perceptions of their risk for type 2 diabetes than non-engagers. It is possible that more modest risk perceptions-in which there is perhaps a recognition that risk for type 2 diabetes is elevated yet the development of type 2 diabetes is not felt to be a foregone conclusion-could yield less anxiety and thus leave individuals better poised to take preventive actions. 18 26 27 36 Alternatively, perceived risk could have been lower among individuals who were engaged in behaviors to prevent type 2 diabetes because they were taking action to reduce their risk. Either way, this finding suggests that different levels and types of perceptions of risk for type 2 diabetes may be closely related to behaviors to prevent type 2 diabetes and should be closely examined in future research.
We also found that engagers had higher levels of motivation-including both greater autonomous and controlled motivation-to prevent type 2 diabetes. Further, the importance of motivation was voiced repeatedly among interviewed respondents. This finding reinforces the critical importance of enhancing motivation to prevent type 2 diabetes as another key ingredient for engagement in evidence-based preventive strategies. Future research should focus on testing promising strategies to bolster levels of motivation to prevent type 2 diabetes such as motivational interviewing, 37 tailored messaging, 38 peer support, 39 financial incentives, and different combinations of these approaches. 40 Since autonomous motivation is more likely to produce long-term, sustained behavior change than controlled motivation, 30 it will be important to track the degree to which such intervention strategies affect different types of motivation to prevent type 2 diabetes and whether those with higher levels of autonomous motivation indeed sustain healthy behaviors to a greater degree than those with controlled motivation.
Another important difference between engagers and non-engagers was their level of patient activation, which refers to individuals' overall understanding of their roles in healthcare processes, as well as having the knowledge, skills, and confidence to manage their own health. 41 In this case, individuals who were more activated prior to the screening may have been better able to translate information about pre-diabetes into engagement in preventive strategies. Alternatively, information about prediabetes may have preferentially boosted activation in some individuals, thus leading them to engage in preventive strategies. Though we are unable to determine which of these dynamics occurred in our study, both potential explanations suggest patient activation may play an important role in facilitating engagement in recommended behaviors to prevent type 2 diabetes and thus should be examined in future research.
Key facilitators of engagement that emerged in our interviews included assistance and encouragement from social networks 42 as well as use of external supports such as tracking devices. Important barriers to engagement included competing demands and insufficient resources for healthy behaviors. These factors could be leveraged either alone or in combination in the design of approaches to promote engagement in behaviors to prevent type 2 diabetes. Some examples of approaches suggested by our findings include sharing information about pre-diabetes with key members of an individual's social network, 43 providing ready access to devices to track weight, food intake and physical activity, 44 building competence and self-efficacy to integrate preventive behaviors into busy schedules, 45 and/or enhancing access to affordable, nutritious foods and exercise opportunities. 46 Additionally, both engagers and non-engagers identified social support and external tools as key facilitators of engagement, and competing demands and low motivation as important barriers to engagement. More research is needed to understand the factors that enable engagers to successfully capitalize on these shared facilitators and overcome these shared barriers so that these factors can be taken into account in the design of interventions to promote engagement in behaviors to prevent type 2 diabetes.
Limitations
Our data rely on participant self-report and focus only on engagement in preventive strategies ∼3 months after a workplace screening. Our sample may not be representative of other populations, particularly those with lower incomes or less education. Although the StayWell health coaches invited study participation from all employees found to have pre-diabetes, and key demographic characteristics of study participants and all employees found to have pre-diabetes in the screening were similar, employees who were already engaged in behaviors to prevent type 2 diabetes could have been more likely to participate in the study. Further, because of our study design we were unable to determine whether post-screening engagement in recommended behaviors was a direct result of the screening. We did not inquire about participants' BMI and were unable to measure how successful individuals who were trying to lose weight had been (ie, how much progress they had made towards losing 7% of their body weight as recommended). We measured physical activity through a widely used survey scale, which may be less valid and reliable than objective measures of physical activity. Finally, we were unable to link our survey and interview data to other biometric screening and HRA data that had been collected during the workplace screening.
In conclusion, most employees with pre-diabetes who we surveyed had engaged in at least one recommended strategy to prevent type 2 diabetes ∼3 months after they had been found to have pre-diabetes during a workplace screening. Further, we identified key facilitators of and barriers to engagement in recommended preventive behaviors. More research is needed to understand employees' reactions to and understanding of a prediabetes diagnosis, measure longer-term engagement in preventive behaviors among employees with pre-diabetes, and test promising strategies to optimize their ongoing engagement in strategies to delay or prevent the onset of type 2 diabetes.
Automated machine vision enabled detection of movement disorders from hand drawn spirals
A widely used test for the diagnosis of Parkinson's disease (PD) and Essential Tremor (ET) is hand-drawn shapes, where the analysis is observationally performed by the examining neurologist. This method is subjective and is prone to bias amongst different physicians. Due to the similarities in the symptoms of the two diseases, they are often misdiagnosed. Studies which attempt to automate the process typically use digitized input, where the tablet or specialized equipment are not affordable in many clinical settings. This study uses a dataset of scanned pen and paper drawings and a convolutional neural network (CNN) to perform classification between PD, ET and control subjects. The discrimination accuracy of PD from controls was 98.2%. The discrimination accuracy of PD from ET and from controls was 92%. An ablation study was conducted and indicated that correct hyper-parameter optimization can increase the accuracy by up to 4.33%. Finally, the study indicates the viability of using a CNN-enabled machine vision system to provide robust and accurate detection of movement disorders from hand-drawn spirals.
I. INTRODUCTION
The usage of machine learning for automated disease diagnosis in an efficient and accurate manner has the potential to reduce labor and cost in healthcare, and improve patient care. Deep learning has been widely studied for automated and quantitative disease assessment particularly for medical imaging, where convolutional neural networks (CNNs) were employed [1], [2]. Other medical applications are typically less studied, mainly since datasets in these other contexts are not as readily available as imaging datasets.
An area in medicine where this problem is prevalent is the quantitative assessment of motor disorders such as Parkinsons disease (PD) and Essential Tremor (ET). Both these conditions are debilitating and have increasing prevalence [3]. Both PD and ET typically manifest in hand tremors which severely impacts the patients' quality of life. Distinguishing between the different types of tremors (PD vs ET) is critical for correct treatment and long term management of the disease [4]. The Static Spiral Test (SST) is a widely used test in tremor diagnosis [5].
This simple and short test requires the patient to retrace Archimedean spirals on paper using a pen. An expert neurologist performs an observational assessment of the subject as they carry out the task, as well as a visual analysis of the drawn spiral. Healthcare professionals and patients could greatly benefit from an automation of the SST since observation-based data are qualitative and subjective, and hence prone to bias and inaccuracy, which may lead to incorrect treatment. To the best of our knowledge there is no known human baseline for PD, ET and Control discrimination. Additionally, an automated analysis of the SST would allow patients in clinics without expert neurologists to be triaged.
Attempts to automate the SST typically use digitized tablets or instrumented pens embedded with sensors, such as accelerometers and/or pressure sensors, that capture information as the patient draws the spiral [7]- [9]. Although these methods have a potential to capture the relevant motor information from the SST and can easily record quantitative data for analysis, practical clinical considerations often limit their usage. In particular the cost of additional and expensive high-end hardware is often a limitation.
Additionally, most analyses with these devices focus on a binary classification between PD vs controls or ET vs Controls, while movement disorder clinics require a discrimination between PD, ET and controls, since early stage symptoms of PD and ET are mild and may resemble control subjects [6].
This paper therefore makes the following contributions to address these aforementioned limitations: • Discrimination between PD, ET and Controls using an end-to-end deep learning solution. • Automated analysis of the SST based on conventional pen and paper tests. • An ablation study to highlight the performance improvement which can be obtained through correct hyperparameter optimization of deep neural networks.
II. METHOD
In this work we present a deep-learning based solution applied to the discrimination of Parkinson's disease (PD) patients from controls, and PD patients from Essential Tremor (ET) patients and from controls. We propose an end-to-end system illustrated in Figure 1, making use of a convolutional neural network (CNN).
A. Dataset
The dataset consists of camera-captured images of hand-drawn Archimedean spirals which were acquired from subjects performing the Static Spiral Test (SST) using a pen and paper. The image dimensions are 300 × 300 × 3. Examples of the images of the spirals in the three groups studied, Parkinson's disease (PD), Essential Tremor (ET) and controls, are illustrated in Figure 2.
The data was acquired during routine neurological assessments in a tertiary hospital and labeled by the examining neurologists. The re-use of the dataset was approved by the hospital as well as the university ethics committee.
The dataset consists of spirals of the following categories: • 370 Parkinson's disease subjects. • 669 Essential Tremor subjects. • 357 control subjects.
B. End-to-End Deep Learning System
The proposed end-to-end deep learning system precludes the need for manual feature engineering, with the network learning the underlying feature representations needed for discrimination. Three sub-modules are important to the overall system and are described below.
1) Data Augmentation: Deep neural networks typically require large quantities of data for training. Even in cases where transfer learning is applied, an increased number of samples is beneficial to training. Since changes in orientation and contrast do not affect the image class or the overall perceptual task associated with the SST, we apply the following data augmentations to the dataset (a minimal implementation sketch follows the list).
• Random application of a horizontal flip to the image with a probability of 0.5. • Random change of image contrast with a probability of 0.1.
• Random zoom and crop on the image with a probability 0.75.
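The paper does not name a software framework for these augmentations; a minimal sketch of how they could be expressed with PyTorch/torchvision is shown below. The flip, contrast, and zoom/crop probabilities follow the list above, while the jitter strength, crop scale, and output size are illustrative assumptions rather than values from the paper.

```python
# Minimal augmentation sketch (assumed PyTorch/torchvision implementation).
# Probabilities follow the list above; jitter strength and crop scale are illustrative.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                                        # horizontal flip, p = 0.5
    T.RandomApply([T.ColorJitter(contrast=0.4)], p=0.1),                  # contrast change, p = 0.1
    T.RandomApply([T.RandomResizedCrop(224, scale=(0.8, 1.0))], p=0.75),  # zoom and crop, p = 0.75
    T.Resize((224, 224)),                                                 # match the ResNet input size
    T.ToTensor(),
])
```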
2) Convolutional Neural Network (CNN) Approach: We aim to demonstrate the value of a transfer learning approach applied in a biomedical computer vision application, where a pre-trained CNN is used as a base network and is fine-tuned for the specific task. The hypothesis is that the generic features (i.e. edges, gradients) in the earlier CNN layers are still useful representations for the specific SST task, whilst the later layers of the network which learn specific feature representations are re-trained to be representative of the SST classification task at hand. Furthermore, we aim to demonstrate the value of transfer learning on small datasets (such as the SST dataset), and that it reduces the time to train the CNN and the compute resources required compared to training the network from scratch, while still being a viable method to attain strong discriminative performance.
Consequently, we make use of a ResNet-32 CNN architecture [10] with pre-trained ImageNet weights, which is one of the state-of-the-art networks trained on ImageNet [10]. Whilst other base architectures could be utilized, the ResNet (Deep Residual Network) architecture was chosen because its deeper and thinner representation has been shown to provide better generalization, which should allow for better translation to the specific task [10]. Moreover, the smoother loss surfaces in ResNets allow for easier forward and backward propagation, leading to easier optimization when fine-tuning on the task [10].
Finally, the fine-tuned architecture built on top of the pre-trained ResNet consists of a fully connected dense layer in which the number of softmax outputs equals the number of classes. Specifics of training the network are discussed under the experimental analysis.
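A minimal sketch of this head replacement, assuming a PyTorch/torchvision implementation, is shown below. torchvision does not ship a ResNet-32, so the closely sized resnet34 is used as a stand-in, and `build_model` is a hypothetical helper name; the softmax itself is applied implicitly by the cross-entropy loss during training.

```python
# Sketch of the fine-tuning head: replace the final fully connected layer of an
# ImageNet-pretrained ResNet with a dense layer sized to the number of classes.
# torchvision's resnet34 stands in for the paper's ResNet-32.
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int) -> nn.Module:
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head
    return model

pd_vs_control = build_model(num_classes=2)      # experiment 1: PD vs controls
pd_et_control = build_model(num_classes=3)      # experiment 2: PD vs ET vs controls
```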
3) CNN Hyper-parameter optimization: Neural networks are also highly sensitive to the specific hyper-parameters on which the network is trained. Therefore, we aim to demonstrate the value of hyper-parameter optimization in ensuring optimal classification performance.
In particular, we optimize the learning rate, as it directly influences how well the network trains. We apply two techniques that demonstrate the importance of this optimization. Cyclical learning rate: As per the work of [11], the aim of a cyclical learning rate policy is to allow the traversal of saddle points and local minima in the loss landscape. This is done by varying the learning rate over an epoch between a lower and an upper threshold, where the periodically higher learning rate assists in traversing saddle points and local minima. Furthermore, this not only requires fewer experiments (and hence less computation) to find optimal learning rates, but also results in superior accuracy compared to a single learning rate with decay.
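A sketch of such a policy using PyTorch's built-in CyclicLR scheduler is shown below; the lower and upper bounds, cycle length, optimizer, and data loader are illustrative assumptions rather than the paper's settings, and `model` and `loader` are taken to be defined as in the earlier sketches.

```python
# Sketch of a cyclical learning rate schedule (assumed PyTorch implementation).
import torch

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,        # lower threshold of the cycle (illustrative)
    max_lr=1e-2,         # upper threshold of the cycle (illustrative)
    step_size_up=2000,   # iterations spent rising from base_lr to max_lr
    mode="triangular",
)

for images, labels in loader:        # one pass over the training data
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                 # the learning rate varies within the epoch
```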
Discriminative learning rate:
As per the work of [12], [13], the network is divided into groups of layers from earlier to later layers. Earlier groups of layers are trained with a lower learning rate, as the weights represent generic features that do not need to be adapted to the task, whilst later layers are trained at a higher learning rate, as the weights need to be adapted specifically to the classification task. Hence, we divide our network into three weight groups: early, middle, and late. We apply an adaptive policy of learning rates to these three weight groups as follows: Early: 10⁻⁶, Middle: 10⁻⁴, Late: 10⁻².
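In PyTorch this policy can be expressed with optimizer parameter groups, as sketched below. The learning rates are those quoted above; the way the network is split into three groups (thirds of the ResNet's child modules) is an illustrative choice, not the paper's exact grouping.

```python
# Sketch of discriminative learning rates via optimizer parameter groups.
# `model` is the fine-tuned ResNet from the earlier sketch.
import torch
import torch.nn as nn

children = list(model.children())
third = len(children) // 3
early  = nn.Sequential(*children[:third])          # generic early layers
middle = nn.Sequential(*children[third:2 * third])
late   = nn.Sequential(*children[2 * third:])      # task-specific late layers

optimizer = torch.optim.SGD(
    [
        {"params": early.parameters(),  "lr": 1e-6},
        {"params": middle.parameters(), "lr": 1e-4},
        {"params": late.parameters(),   "lr": 1e-2},
    ],
    momentum=0.9,
)
```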
C. Technical pipeline
Our technical evaluation pipeline follows this protocol:
1) Perform 5-fold cross-validation, such that during each fold 80% of the data is used as training data, whilst 20% is held out as an unseen test dataset.
2) Apply data augmentation as described in Section 2.2 to the training dataset.
3) Use a ResNet-32 CNN pre-trained on ImageNet as the base network. Since the pre-trained ResNet-32 requires an input image size of 224 × 224 × 3, all images are resized using nearest-neighbor interpolation.
4) Remove the final fully connected layer from the pre-trained network.
5) Add a dense fully connected layer (multi-layer perceptron) with the number of softmax outputs matching the number of classification classes (two for experiment 1 and three for experiment 2).
6) Freeze the pre-trained CNN's weights and train the dense layers with a high learning rate (10⁻²) for 5 epochs.
7) Unfreeze the entire network's weights and fine-tune for 3 epochs at low learning rates using the three weight groups: Early: 10⁻⁶, Middle: 10⁻⁴, Late: 10⁻².
8) Repeat the cross-validation process (steps 2-7) to ensure numerical stability/robustness and compute the mean and standard deviation of the 5-fold cross-validation.
A condensed code sketch of this protocol is given below.
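The sketch below condenses steps 1, 6, and 7, assuming PyTorch and scikit-learn. `train_epochs` and `evaluate` are hypothetical helpers standing in for a standard training loop and held-out evaluation, `images`/`labels` stand for the augmented dataset, and `build_model` is the helper from the earlier sketch.

```python
# Condensed sketch of the 5-fold evaluation protocol with freeze/unfreeze fine-tuning.
# train_epochs() and evaluate() are hypothetical helpers, not functions from the paper.
import numpy as np
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_accuracies = []

for train_idx, test_idx in skf.split(images, labels):          # step 1: 80/20 folds
    model = build_model(num_classes=3)

    for p in model.parameters():                                # step 6: freeze the backbone...
        p.requires_grad = False
    for p in model.fc.parameters():                             # ...but train the new dense head
        p.requires_grad = True
    train_epochs(model, train_idx, lr=1e-2, epochs=5)

    for p in model.parameters():                                # step 7: unfreeze everything and
        p.requires_grad = True                                  # fine-tune with the three LR groups
    train_epochs(model, train_idx, lr=(1e-6, 1e-4, 1e-2), epochs=3)

    fold_accuracies.append(evaluate(model, test_idx))           # held-out accuracy for this fold

print(np.mean(fold_accuracies), np.std(fold_accuracies))        # step 8: mean and SD over folds
```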
III. EXPERIMENTAL ANALYSIS
Two sets of experiments are carried out on the hand-drawn SST data. The first experiment carries out discrimination between PD and control subjects and the second experiment carries out discrimination between PD, ET and control subjects.
Both experiments make use of the experimental protocol outlined in Section II (c). An ablation study is carried out for steps 6-7, where steps 6 and 7 are carried out with and without the learning rate hyper-parameter optimization techniques described in Section 2.2. The aim is to demonstrate the performance benefit obtained by making use of these optimizations.
A. Discrimination between Parkinson's disease and control subjects
This experiment aims to classify between PD subjects and controls based on the SST test images. The results shown in Table 1 are obtained from the 5-fold cross-validation.
B. Discrimination between Parkinson's disease, Essential Tremor and controls
This experiment aims to classify between PD subjects, ET subjects and controls based on the SST test images. The results shown in Table 2 are obtained from the 5-fold cross-validation. The confusion matrix shown in Figure 3 represents the averaged and normalized 5-fold cross-validation results obtained for the three-class discrimination (PD vs ET vs Control), making use of the hyper-parameter optimizations.
IV. DISCUSSION
This study presents an automated machine learning discrimination of PD, ET and Controls using images of the hand written SST. This is a preliminary work, which aimed to validate the techniques within this application domain.
The results convey that an end-to-end deep learning solution using a CNN is both robust and accurate at detecting and discriminating between these three classes in a reliable and autonomous manner, whilst still fitting in with current clinical practices.
The proposed solution, in which the SST is conducted using off-the-shelf writing equipment and paper, eliminates the need for additional and expensive hardware such as digitized tablets or specialized instrumented writing instruments. This would allow the test to be performed easily, even in busy clinical settings.
The results imply that the optimal configuration for discrimination, both of PD vs Controls and between PD, ET and controls, is a ResNet-32 CNN with the pre-trained ImageNet weights fine-tuned for the task. The mean 5-fold cross-validation accuracy for the PD vs Control discrimination was 98.2%, whilst the mean accuracy for PD, ET and Control discrimination was 92%.
In particular, the confusion matrix for the PD, ET and Control discrimination shows that misclassification is typically between ET and PD. This is understandable as both PD and ET are movement disorders that result in tremor, which would manifest in the spirals of the subjects. The strength of this result is that even when PD or ET is misclassified it is still misclassified as a movement disorder rather than as a control, which is beneficial in a triaging and referral scenario.
The result of the ablation study demonstrates the value of learning rate hyper-parameter optimization techniques. The cyclical learning rate and discriminative learning rate policies combined to increase discrimination accuracy by 4.33% with no change to the network architecture, thereby highlighting the value of correct hyper-parameter tuning in order to maximize performance of neural networks.
Finally, the value of transfer learning was demonstrated by the overall high discriminative accuracy of the networks. This highlights the value of using pre-trained networks even in biomedical applications, and shows that the high-level feature representations from pre-trained networks are useful even on tasks significantly different from the original ImageNet task. Moreover, the benefit is that fewer epochs are required to train the networks, which reduces the computational requirements as well as training time.
The combined ease of use and machine learning discrimination proposed in this study has the potential to assist healthcare professionals in motor disorder evaluation using the SST, while easily fitting into the clinical environment by making use of the current pen and paper SST. The method could allow healthcare practitioners to easily and quantitatively diagnose motor disorder subjects using standard tools.
Future work could expand this study and aim at finer-grained classification, seeking to classify the severity or stage of the motor diseases.
Structural integrity of the PCI domain of eIF3a/TIF32 is required for mRNA recruitment to the 43S pre-initiation complexes
Transfer of genetic information from genes into proteins is mediated by messenger RNA (mRNA) that must be first recruited to ribosomal pre-initiation complexes (PICs) by a mechanism that is still poorly understood. Recent studies showed that besides eIF4F and poly(A)-binding protein, eIF3 also plays a critical role in this process, yet the molecular mechanism of its action is unknown. We showed previously that the PCI domain of the eIF3c/NIP1 subunit of yeast eIF3 is involved in RNA binding. To assess the role of the second PCI domain of eIF3 present in eIF3a/TIF32, we performed its mutational analysis and identified a 10-Ala-substitution (Box37) that severely reduces amounts of model mRNA in the 43–48S PICs in vivo as the major, if not the only, detectable defect. Crystal structure analysis of the a/TIF32-PCI domain at 2.65-Å resolution showed that it is required for integrity of the eIF3 core and, similarly to the c/NIP1-PCI, is capable of RNA binding. The putative RNA-binding surface defined by positively charged areas contains two Box37 residues, R363 and K364. Their substitutions with alanines severely impair the mRNA recruitment step in vivo suggesting that a/TIF32-PCI represents one of the key domains ensuring stable and efficient mRNA delivery to the PICs.
Crystallization of the a/TIF32 276-494 domain and crystal structure determination
X-ray diffraction images were collected at 100 K at beamline 14.1 (BESSY, Berlin, Germany; (3)) equipped with a MAR Mosaic 225mm CCD detector (Norderstedt, Germany). The oscillation images were indexed, integrated, and merged using the XDS package (4,5) to the final resolution of 2.65 Å for the native and to 2.79, 2.81 and 2.97 Å for the Se-Met derivative crystal (peak, inflection and remote datasets, respectively). The crystal structure of a/TIF32 was solved by means of SAD using the Se-Met dataset at the peak wavelength in SHARP/autoSHARP (6). Within autoSHARP the heavy atom search was performed by SHELXD (7) and resulted in localizing four heavy atom positions that were further refined using SHARP, followed by density modification in Solomon (8) and automatic model building in Arp/wARP (9). The resulting protein model comprising 172 amino acids has been refined using torsion angle dynamics in CNS (10) and manually rebuilt and verified in Coot (11) against Simulated Annealing (SA) omit maps. The model comprising 217 residues (from 6 to 223, including three methionines) and belonging to one a/TIF32 monomer has been refined using PHENIX (12) at a resolution of 2.65 Å to R and Rfree factors of 29.63% and 33.85%, respectively. Unusually high R-factors in concert with the solvent content of 72% and one additional (the fourth) Se peak found during the heavy atom search suggested the presence of an additional protein molecule or its fragment in the asymmetric unit. The presence of a second full-length a/TIF32 molecule in the asymmetric unit was unlikely due to the moderate resolution and the resulting low solvent content of 42%. MS analysis could not confirm any proteolytic digestion of protein samples obtained from dissolved crystals (data not shown). Difference electron density maps calculated with PHENIX (2mFo-DFc and mFo-DFc contoured at 1 and 3 sigma, respectively) did not indicate any missing protein fragments, as a few small separated blobs of density could not be interpreted as even a small part of a polypeptide chain. However, the difference electron density maps (3mFo-2DFc, mFo-DFc) calculated with CNS revealed the presence of most probably some polypeptide fragments in the solvent channels, although the maps were highly diffused and not interpretable (no secondary structure elements could be recognized). In order to test if fragments of the mostly α-helical a/TIF32 monomer could be the source of the diffused electron density maps observed in the solvent channels, we decided to carry out several molecular replacement (MR) searches with short a/TIF32 fragments as search models using PHASER (13), keeping the already refined model as the fixed partial solution. About 650 short polypeptide fragments differing in length (25, 30, 35, 40, 55, 60 aa) have been generated with an offset of two and five amino acids, covering the complete a/TIF32 monomer structure. The individual MR searches resulted in several well-scoring solutions (TFZ scores between 9 to 11) which, when displayed simultaneously, formed an ensemble of overlapping fragments building a fragment of the second a/TIF32 molecule covering the amino acid range from 102 to 205. In order to localize the missing N-terminal fragment of the second a/TIF32 molecule, the search was repeated, this time using the refined a/TIF32 monomer and the formerly found 103-amino-acid-long fragment of the second a/TIF32 molecule as the fixed partial model.
Well-scoring solutions could only be identified after increasing the value of RMSD from 0.35 Å (used for the first runs) to 0.5 Å. The necessity of increasing the RMSD implied a higher level of positional disorder of the missing N-terminal part of the second a/TIF32 molecule in comparison to the already localized 103-residue-long fragment of it. Solutions with the highest TFZ scores (8 to 11) formed an ensemble comprising residues 4 to 65 of the a/TIF32 monomer. Based on these results, two a/TIF32 fragments comprising residues 4 to 65 and 102 to 205, respectively, have been subjected to an additional molecular replacement search, which was successful only when using RMSD values of 0.9 Å and 0.6 Å for fragments one and two, respectively. The presence of the second a/TIF32 molecule was in addition confirmed by calculating the self-rotation function using the GLRF (14) program, giving one clear solution at the 10 sigma level. The self-rotation axis corresponds to the rotation between the two a/TIF32 monomers present in the asymmetric unit (the difference in kappa angle is 7 degrees). Refinement of the structure, comprising the complete a/TIF32 monomer and two additional a/TIF32 fragments (residues 8 to 64 and 104 to 205), resulted in a decrease of the R and Rfree factors to 24.85% and 29.56%, respectively. Due to the high level of disorder of the second a/TIF32 molecule (average B-factor of 165 Å²), reference model restraints generated from the complete a/TIF32 molecule, as implemented in PHENIX, have been used during the refinement. As a consequence, no conformational differences can be observed between the two a/TIF32 molecules occupying the asymmetric unit, of which only the complete a/TIF32 molecule has been refined independently.
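For illustration only, the residue windows used as short search fragments (lengths of 25-60 amino acids, offsets of two or five residues over the refined monomer) could be enumerated as in the sketch below; the actual fragment models and MR searches were of course prepared and run with the crystallographic software cited above, and the helper shown here is purely hypothetical.

```python
# Hypothetical sketch: enumerate overlapping residue ranges of the kind used as
# short MR search fragments (fragment lengths 25-60 aa, offsets of 2 or 5 residues).
def fragment_windows(first_res, last_res, lengths=(25, 30, 35, 40, 55, 60), offsets=(2, 5)):
    windows = set()
    for length in lengths:
        for offset in offsets:
            for start in range(first_res, last_res - length + 2, offset):
                windows.add((start, start + length - 1))   # inclusive residue range
    return sorted(windows)

ranges = fragment_windows(6, 223)   # refined a/TIF32 monomer spans residues 6 to 223
print(len(ranges), ranges[:3])
```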
RNA synthesis
A template for DAD4 RNA synthesis was prepared by PCR amplification from yeast genomic DNA using the following primers: GAAATTAATACGACTCACTATAAGCAGATAGGGAGGAAAAGAAGTGAGTTTA and ATGCGTATATAGAAAATTGGTGAATTAAA(T) 20 . In the forward primer the sequence of the T7-promoter was included. A stretch of 20 T was added to the 5' end of the reverse primer to mimic the presence of a poly-A in the resulting RNA. The PCR product was precipitated with ethanol. To remove potential RNAse contaminations, Proteinase K was added and subsequently denatured by heating to 95°C. 1 -1.5 mg DNA template was employed in an in vitro transcription approach containing T7 Polymerase and each 40 mM rNTPs in 1× HT buffer (30 mM HEPES pH 8.0, 25 mM MgCl 2 , 10 mM DTT, 2 mM spermidine, 0.01% Triton X-100). After incubation at 37 °C for 3 h, the transcript was ethanol precipitated. The resulting pellet was dissolved in RNase-free water.
Other biochemical methods
β-galactosidase assays were conducted as described previously (15). Polysome profile analysis, 2% HCHO cross-linking, WCE preparation and fractionation of extracts for analysis of pre-initiation complexes were carried out as described by (16).
Analysis of the 48S PICs was done as described by (1) with the following exceptions. Total RNA was isolated from 0.5 ml of gradient fractions by hot-phenol extraction, and resuspended in 26 µl of diethyl pyrocarbonate (DEPC)-treated H2O. Isolated total RNA was treated with 0.7 µl of DNaseI (NEB) in the total volume of 30 µl. 3 µl of RNA were subjected to reverse transcription with SuperScript III reverse transcriptase (Invitrogen) in the total volume of 20 µl. Aliquots of cDNA were diluted 3-fold or 12-fold for measuring mRNA or 18S rRNA levels, respectively (this way the cDNA was diluted 20 or 80-fold in total, respectively, in comparison to non-diluted RNA). qPCR amplifications were performed on 2 µl of diluted cDNA in 10-µl reaction mixtures prepared with the Brilliant II SYBR green qPCR Master Mix (Stratagene) and primers for RPL41A mRNA (0.3 µM), DAD4 mRNA (0.3 µM), SME1 mRNA (0.3 µM) or 18S rRNA (0.4 µM) using the Mx3000P system (Stratagene). For each round of qPCR, each fraction was measured in triplicate together with a no-RT control. The experiment with each strain was performed at least three times for RPL41A mRNA and two times for DAD4 and SME1 mRNAs with similar results.
TABLES, FIGURES AND FIGURE LEGENDS
Supplementary Figure S1. Solubility test of different fragments of recombinant a/TIF32. Cell lysate was clarified by centrifugation, resulting in the soluble protein (S) and insoluble protein in the pellet (P). In each case, the total cell lysate prior to centrifugation is loaded on the gel (T). The best soluble fragment was a/TIF32 276-494, which is indicated by a black arrow.
Supplementary Figure S2. Multiple sequence alignment of eIF3a/TIF32 from different organisms. Sequence alignment was done using ClustalW (17). Espript (18) was used for graphical presentation of the results. The crystallized fragment is marked by a box; Box37 is indicated by a grey bar. Green bars and orange arrows represent helices and strands, respectively.
Supplementary Figure S3. The tif32-Box37 substitution eliminates association of three reporter mRNAs with 43S PICs in vivo. (A-C) The isogenic rpl11b∆ strains carrying either wt or mutant a/TIF32 were heat shocked at 36˚C for 4 hours and processed for mRNA binding analysis as described in Figure 6. The amounts of 18S rRNA and (A) RPL41A, (B) DAD4, and (C) SME1 mRNAs were measured by real-time quantitative PCR (qPCR). (D) The relative amounts ± SDs of all three mRNAs in the tif32-Box37 mutant versus wt in 18S rRNA-containing fractions were calculated.
Finite Element Analysis of Aluminum Honeycombs Subjected to Dynamic Indentation and Compression Loads
The mechanical behavior of aluminum hexagonal honeycombs subjected to out-of-plane dynamic indentation and compression loads has been investigated numerically using ANSYS/LS-DYNA in this paper. The finite element (FE) models have been verified by previous experimental results in terms of deformation pattern, stress-strain curve, and energy dissipation. The verified FE models have then been used in comprehensive numerical analysis of different aluminum honeycombs. Plateau stress, σpl, and dissipated energy (EI for indentation and EC for compression) have been calculated at different strain rates ranging from 10² to 10⁴ s⁻¹. The effects of strain rate and t/l ratio on the plateau stress, dissipated energy, and tearing energy have been discussed. An empirical formula is proposed to describe the relationship between the tearing energy per unit fracture area, relative density, and strain rate for honeycombs. Moreover, it has been found that a generic formula can be used to describe the relationship between tearing energy per unit fracture area and relative density for both aluminum honeycombs and foams.
Introduction
Over the last few decades, man-made honeycombs have been widely used in many industries due to their properties such as high strength to weight ratio and good energy absorption capabilities. Honeycombs are manufactured from materials such as aluminum, nomex, polymer, and ceramic. Aluminum honeycombs can be used as industrial products as well as core materials in sandwich panels in various fields of engineering such as aerospace, aircraft, automotive, and naval engineering [1,2].
A number of studies have been conducted on the out-of-plane compression of aluminum honeycombs at low and intermediate strain rates [3][4][5][6][7][8]. Zhou and Mayer [3], Wu and Jiang [4] and Baker et al. [5] conducted compression tests on aluminum honeycombs at different strain rates in the out-of-plane direction and found that the plateau stress, σpl, increased with strain rate, ε̇. Both Xu et al. [6] and Ashab et al. [7] found that with the increase of t/l ratio (cell wall thickness to edge length ratio) and strain rate, plateau stress, σpl, increased. Vijayasimha Reddy et al. [8] concluded that energy absorption capacity of aluminum honeycombs increased with the impact velocity under out-of-plane compression load. Alavi and Sadeghi [9] conducted experiments on foam-filled aluminum hexagonal honeycombs under the out-of-plane compression loads. They observed that the crushing strength of bare honeycombs and foam-filled honeycombs increased with strain rate and bare honeycombs were more sensitive to strain rate than foam-filled honeycombs. Mozafari et al. [10] employed ABAQUS software and observed that the mean crushing strength and energy absorption of foam-filled honeycomb were greater than the sum of those of bare honeycomb and foam.
Along with the experimental investigation, finite element analysis (FEA) has also been conducted by various researchers [11][12][13][14] to study the mechanical behavior of aluminum honeycombs. Guo and Gibson [11] conducted numerical analysis of intact and damaged honeycomb properties in the in-plane direction and reported that modulus and strength decreased due to the effect of single and isolated defects of various sizes. They also investigated the separation distance between two defects and its effect on the plastic collapse strength and Young's modulus. Ruan et al. [12] employed ABAQUS to investigate the effects of t/l ratio and impact velocity on the in-plane deformation mode and plateau stress. They derived an empirical formula to describe the relationship between the plateau stress, t/l ratio and velocity. Hu et al. [13,14] conducted experiments as well as finite element analysis to study in-plane crushing of aluminum honeycombs. They proposed a dynamic sensitivity index to describe crushing strength and energy absorption.
Deqiang et al. [15] used ANSYS/LS-DYNA [16] to study the out-of-plane dynamic properties of aluminum hexagonal honeycomb cores in compression. They found that the out-of-plane dynamic plateau stresses of honeycombs were related to the impact velocity, t/l ratio, and expanding angle θ of honeycombs by power laws. Yamashita and Gotoh [17] conducted both experimental and numerical analyses on the compression of aluminum honeycombs. The crushing strength was related to the t/l ratio of honeycombs by a power law with the exponent of 5/3, which was the same as the theoretical equation derived by Wierzbicki [18]. The computer simulation carried out by Xu et al. [19] also found a power law relationship between the out-of-plane compressive strength of aluminum honeycombs and the strain rate and t/l ratio.
A limited number of experiments were conducted on aluminum honeycombs subjected to indentation [3,7] at very low and intermediate strain rates. Zhou and Mayer [3] conducted quasi-static indentation tests on aluminum honeycombs to study the influence of specimen size on the force versus displacement curve. They found flatter and lower indentation force for the larger specimen. This was because larger specimens had a larger amount of surrounding cells, which provided stiffer support and resulted in fewer cells to be involved in tearing. The four primary deformation mechanisms were shear, tearing initiation, tearing, and compression. Zhou and Mayer also used different indenters, such as square, rectangular, and circular, to study the effect of indenter shape. Ashab et al. [7] conducted indentation tests on three types of aluminum honeycombs at strain rates from 10 to 10² s⁻¹ and found that the tearing energy increased with the t/l ratio of honeycomb and strain rate. However, due to the limited honeycombs and testing machines available, previous studies were not able to draw quantitative conclusions on the effects of t/l ratio and strain rate on the tearing energy of honeycombs.
In the present paper, numerical simulation is performed using ANSYS/LS-DYNA [16] to study the dynamic out-of-plane properties of aluminum hexagonal honeycombs with various t/l ratios subjected to indentation. Compression of honeycombs is also simulated in order to calculate the tearing energy in indentation. Full-scale FE models of honeycombs are verified by the previous experimental results. The verified FE models are then used to investigate the effects of t/l ratio and strain rate on the plateau stress and tearing energy of honeycombs subjected to indentation. Empirical equations are proposed.
Finite Element (FE) Modeling
In the present paper, numerical analysis of aluminum honeycombs was carried out using ANSYS/LS-DYNA [16]. Two types of honeycombs, differing in cell size and cell wall thickness, were simulated. The honeycombs are named H31 and H42 for honeycombs 3.1-3/16-5052-.001N and 4.2-3/8-5052-.003N, respectively. The specifications of the honeycombs, provided by the manufacturer, are listed in Table 1. The dimensions of each honeycomb model are the same as those of the actual specimens used in the previous experiments [7]. The height of all honeycombs, h, was 50 mm. The in-plane dimensions of all honeycomb specimens were 180 mm × 180 mm in the indentation simulation (Figure 1a) and 90 mm × 90 mm in the compression simulation (Figure 1b). Aluminum honeycomb walls were simulated using a bilinear kinematic hardening material model. The corresponding material properties are listed in Table 2. Belytschko-Tsay Shell 163 elements with five integration points were employed to simulate the honeycomb cell walls for high computational efficiency [19]. In each honeycomb cell, single wall thickness was employed for the four oblique walls and double wall thickness was employed for the two vertical walls. To identify the optimum element size, a convergence test was carried out. Five different element sizes (2.1 mm, 1.4 mm, 0.7 mm, 0.3 mm, and 0.15 mm) were used to simulate compression of honeycombs at 5 ms⁻¹. No significant difference (less than 7%) was observed between the results for element sizes 0.7 mm and 0.15 mm. Therefore, in this FE analysis of aluminum honeycombs an element size of 0.7 mm was employed. Since tearing of cell walls happened in honeycombs under indentation, the MAT_ADD_EROSION failure criterion with a maximum effective strain of 0.3 [21] was used in the indentation models. All degrees of freedom of one node at a corner of the honeycomb were fixed to keep the honeycomb in place (i.e., no rigid body movement). In physical experiments, honeycomb specimens were placed on a fixed lower plate and crushed by an upper plate (in compression) or indenter (in indentation). In FE models, the lower plate was simulated by a rigid plate while the upper plate and indenter were simulated by rigid bodies. The lower plate was 1 mm (thickness) × 200 mm × 200 mm. The upper plate and indenter were cuboids with dimensions of 50 mm (height) × 90 mm × 90 mm, the same as in the previous experimental study [7]. The material properties used for the plates and indenter are listed in Table 3.
Table 3. Material properties used in the FE model of rigid plate and bodies [19].
Mass density (ρ): 7830 kg/m³; Young's modulus (E): 207 GPa; Poisson's ratio (ν): 0.34.
For the lower plate, all degrees of freedom were fixed. For the upper plate (in compression) and indenter (in indentation), all three rotational movements and the two translational movements in the X and Z directions were fixed. The upper plate or indenter could move in the negative Y direction at a constant velocity to compress or indent the honeycombs.
A tiny gap (0.1 mm) between the fixed lower plate and the honeycomb was employed to avoid the initial penetration at the beginning of the simulation. For the same reason, an initial gap of 5 mm was also introduced between the upper plate or the indenter and the honeycomb. SURFACE_TO_SURFACE contacts were employed between the plates or indenter and honeycomb. Typical finite element models of indentation and compression of honeycombs in the out-of-plane direction are shown in Figure 1. Figure 2 shows a comparison between the experimental and simulated deformation of honeycomb H31 in compression at 5 ms⁻¹. Identical deformation mode in both the experiments [6,7] and FEA was observed: when the honeycomb was compressed in the out-of-plane (T) direction, buckling of cell walls was initiated from both the top and bottom ends and propagated to the middle of the honeycomb (Figure 2a,b). Figure 2c,d show the deformed honeycomb H31 after crushing in the experiment and FEA, respectively. Almost identical deformation patterns were found in the experimental and FEA results. Due to the stronger lateral constraints in the central part of the honeycomb, the honeycomb deformed in a much more regular pattern in the central part. However, along the four edges of the indenter, honeycomb cell walls deformed in an irregular pattern. Similar deformation patterns were observed for honeycomb H42 in compression. Figure 3 shows a comparison between experimental and FEA deformation patterns of honeycomb H42 subjected to out-of-plane indentation at a velocity of 5 ms⁻¹. Similar irregular tearing patterns were observed in both the experiment and FEA. The FEA results of another type of honeycomb, H31, also showed a similar deformation pattern to that observed in the previous experiments [7].
Stress-Strain Curves
FEA and experimental stress-strain curves of two types of honeycombs are shown in Figure 4. Similar general trends in the stress-strain curves were found for both honeycombs in indentation and compression. The plateau stress is defined as the average stress between displacements from 5 to 38 mm. The total dissipated energy is the area under the force-displacement curves up to 38 mm, which is denoted by EC in compression and EI in indentation. Tearing of the cell walls along the four edges of the square indenter occurred simultaneously during the indentation. Tearing energy, Et, was calculated using the following energy conservation equation:
Et = EI − EC (1)
where Et is the energy dissipated in tearing, EI is the energy dissipated in indentation, and EC is the energy dissipated in compression.
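As an illustration of how these quantities could be extracted from the simulated force-displacement curves, a short sketch is given below; the displacement and force arrays are placeholders standing in for the ANSYS/LS-DYNA output, and the cross-sectional area argument depends on the specimen or indenter considered.

```python
# Sketch of plateau stress, dissipated energy and tearing energy (Equation (1))
# from force-displacement data; d (m) and f_* (N) are placeholder arrays.
import numpy as np

def plateau_stress(d, f, area, d1=0.005, d2=0.038):
    """Average stress (Pa) between displacements d1 and d2."""
    mask = (d >= d1) & (d <= d2)
    return np.mean(f[mask] / area)

def dissipated_energy(d, f, d_max=0.038):
    """Area under the force-displacement curve up to d_max (J)."""
    mask = d <= d_max
    return np.trapz(f[mask], d[mask])

E_I = dissipated_energy(d, f_indentation)    # energy dissipated in indentation
E_C = dissipated_energy(d, f_compression)    # energy dissipated in compression
E_t = E_I - E_C                              # Equation (1): tearing energy
```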
Comparisons between the FEA and experimental results in terms of plateau stress and dissipated energy are listed in Table 4. For two different types of honeycombs (H31 and H42), the simulated plateau stresses and total dissipated energies were found to be slightly lower than the corresponding experimental values in both indentation and compression. The differences were between 4.71% and 11.62%, which was acceptable.
The Effect of t/l Ratio
The effect of t/l ratio on the mechanical properties of honeycombs is discussed in this section. Firstly, the thickness of the honeycomb cell walls was fixed as 0.0254 mm and five different cell sizes (3.175 mm, 3.969 mm, 4.763 mm, 6.35 mm, and 9.525 mm) were employed. A constant strain rate of 1 × 10³ s⁻¹ was used in the simulation. The FEA results are listed in Table 5. Both in indentation and compression, it was found that the plateau stress decreased with the increase of cell size for a constant cell wall thickness. Similar to the plateau stress, dissipated energy and tearing energy also decreased with the increase of cell size. Secondly, the t/l ratio was varied over a wider range, from 0.00647 to 0.05542. The simulation results at a constant strain rate of 1 × 10³ s⁻¹ are listed in Table 6. The plateau stresses of honeycombs subjected to out-of-plane compression and indentation were found to increase with t/l ratio (Figure 5) by power laws, with exponents of 1.47 for compression (Equation (2a)) and 1.36 for indentation (Equation (2b)). Xu et al. [6,19] also found a similar power-law relation between plateau stress and t/l ratio, with an exponent of 1.49. The tearing energies were calculated using Equation (1) and are shown in Tables 5 and 6. Tearing energy was also found to increase with the t/l ratio. The fracture area, At, was calculated as the product of the circumferential length of the square indenter (90 mm × 4) and the displacement (38 mm) of the indenter [3]. The relationship between the tearing energy per unit fracture area and the relative density, ρ/ρ₀, is shown in Figure 6: with the increase of t/l ratio or relative density, the tearing energy per unit fracture area increased.
Figure 6. The relationship between the tearing energy per unit fracture area and relative density of honeycomb at a strain rate of 1 × 10³ s⁻¹.
Previously, Zhou and Mayer [3] and Ashab et al. [7] conducted quasi-static indentation tests on different honeycombs. Moreover, other researchers [22][23][24] conducted quasi-static indentation tests on aluminum foams. Shi et al. [22] proposed a theoretical formula and an empirical formula between tearing energy per unit fracture area and relative density. In order to compare these two types of cellular materials (honeycomb and foam, which are made from different aluminum alloys), the tearing energy per unit fracture area was normalized by the yield stress of the parent aluminum alloy for both honeycombs and foams. The relationship between the normalized tearing energy per unit fracture area and relative density is shown in Figure 7. Using a yield stress σys = 150 MPa for the foams in Shi et al. [22], Olurin et al. [23], and Olurin et al. [24], the equation proposed by Shi et al. [22], γ = 119.4ρ̄, can be rewritten as γ = 0.79σys ρ̄, where γ, σys, and ρ̄ are the tearing energy per unit area, the yield stress of aluminum, and the relative density of the foam, respectively. The normalized tearing energy per unit fracture area for both aluminum foams and honeycombs is plotted against relative density in Figure 7, and the equation of the best-fitted line is very similar to that for the foams.
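A sketch of the kind of fit behind Figure 7, with the tearing energy per unit fracture area normalized by the parent-alloy yield stress and fitted to a line through the origin (the form of the foam relation γ = 0.79σysρ̄), is shown below; the numerical inputs are illustrative placeholders, not the paper's data.

```python
# Sketch of the Figure 7 style fit: normalized tearing energy per unit fracture
# area versus relative density. All numerical inputs are illustrative placeholders.
import numpy as np

A_t = (4 * 0.090) * 0.038                       # fracture area: indenter perimeter x displacement (m^2)
sigma_ys = 290e6                                # parent alloy yield stress (Pa), illustrative
rho_bar = np.array([0.01, 0.02, 0.04, 0.06])    # relative densities (placeholders)
E_t = np.array([30.0, 70.0, 160.0, 250.0])      # tearing energies in J (placeholders)

y = (E_t / A_t) / sigma_ys                      # normalized tearing energy per unit area
k = np.sum(rho_bar * y) / np.sum(rho_bar ** 2)  # least-squares slope of y = k * rho_bar
print(f"gamma ≈ {k:.2f} * sigma_ys * rho_bar")
```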
Plateau Stress
In the previous experimental study [7], honeycombs were crushed at low and intermediate strain rates (1ˆ10´3 to 1ˆ10 2 s´1). FEA was conducted on honeycombs at high strain rates (1ˆ10 2 to 1ˆ10 4 s´1). Both experimental and FEA results are shown in Figure 8, which demonstrates the influence of strain rate on the plateau stress of two different honeycombs subjected to out-of-plane indentation and compression loadings, respectively. For both types of honeycombs, the plateau stress increased with strain rate in both indentation and compression. Due to the higher t/l ratio, the plateau stress is larger for honeycomb H42 than that for honeycomb H31.
Experiments and FEA of the compression of aluminum honeycombs have been conducted by various researchers [7,15,25-30]. In the previous experimental study [7], an enhancement in the plateau stress was observed at low and intermediate loading velocities. Wang et al. [26] reported a remarkable enhancement of the plateau stress at high impact velocities (20-80 m/s). Goldsmith and Sackman [27] found a 50% enhancement in plateau stress at dynamic velocities up to 35 m/s. Zhao and Gary [28] observed a significant enhancement (approximately 40%) in the plateau stress when the loading velocity increased from quasi-static to dynamic (2-28 m/s). A similar enhancement of the plateau stress with loading velocity was also discussed by Hou et al. [29] and Zhao et al. [30]. In order to compare these results with the current FEA, the plateau stresses of honeycombs were normalized as (σpl/σys)/(t/l)^1.5 and plotted in Figure 9 in terms of strain rate. The current FEA results show a significant enhancement of plateau stress at high impact velocities, which agrees very well with the FEA results of Deqiang et al. [15].
Figure 10 shows the effect of strain rate on the dissipated energy of the two types of honeycombs under indentation and compression loadings, respectively. Similar to the plateau stress, for both types of honeycombs the dissipated energy increased with strain rate in both indentation and compression. For honeycomb H42, the dissipated energies in both indentation and compression were larger than those of honeycomb H31 due to the higher t/l ratio.
Tearing energy, which is the difference between the total energies dissipated in indentation and compression, is plotted in Figure 11. Due to the higher t/l ratio, the magnitude of the tearing energy is larger for honeycomb H42 than for honeycomb H31 at the same strain rate. For both honeycombs, the tearing energy increases with strain rate. The fitted curve for the tearing energy per unit fracture area of honeycombs at different strain rates is shown in Figure 12.
The relation between the tearing energy per unit fracture area and the relative density and strain rate is described by the following equation:
Deformation Pattern of Aluminum Honeycombs Subjected to Compression and Indentation
Figure 13 shows the enlarged isometric and front (sectional plane) views of honeycomb H31 under out-of-plane indentation and compression loads. Three images of the deformation were taken at displacements of 0 mm, 20 mm, and 40 mm, respectively, from the animation of the FEA using the LS-PrePost software [31]. In Figure 13a it can be seen that progressive buckling of the cell walls occurs from both ends of the honeycomb simultaneously and propagates towards the middle region of the honeycomb, which is similar to that observed in the previous experimental study [7]. The deformation mode is found to be independent of strain rate. Xu et al. [6] also observed a negligible effect of strain rate on the buckling of honeycomb cells in out-of-plane compression.
In the previous experimental study, it was impossible to observe the deformation of the honeycomb under the indenter. In the current FEA, the deformation of the honeycomb in indentation is observed from the front sectional plane view, as shown in Figure 13b. It is found that the progressive buckling of the cell walls initiates from the top end of the honeycomb, immediately beneath the indenter, and propagates in the same manner until densification. Progressive buckling takes place in the middle portion of the honeycomb model underneath the indenter, which is associated with the tearing of cell walls along the four edges of the indenter. No significant difference is observed in the buckling pattern at different strain rates.
Conclusions
In this finite element analysis, different honeycomb models have been developed using ANSYS/LS-DYNA to study the mechanical behavior of honeycombs under out-of-plane indentation and compression loads over a wide range of high strain rates, from 1 × 10^2 to 1 × 10^4 s^-1. The FE models have been validated against the previous experimental results (compression and indentation) in terms of deformation, stress-strain curves, plateau stress, and dissipated energy. A reasonable agreement between the FEA and experimental results has been found for both honeycombs H31 and H42.
It is found that the plateau stress, dissipated energy, and tearing energy increase with the t/l ratio. For a constant strain rate of 1 × 10^3 s^-1, the plateau stresses increase with the t/l ratio by power laws with exponents of 1.47 and 1.36 for compression and indentation, respectively.
Moreover, the plateau stress, dissipated energy, and tearing energy increase gradually for low and intermediate strain rates. Significant enhancement in the plateau stress, dissipated energy, and tearing energy is observed at high strain rates for honeycombs subjected to either compression or indentation loads. An empirical formula is proposed for the tearing energy per unit fracture area in terms of strain rate and relative density of honeycombs.
The current FEA reveals that, at a velocity of 5 m/s under indentation, plastic buckling of the honeycomb cell walls occurs from the end adjacent to the indenter, while under compression the buckling of the cell walls occurs from both ends of the honeycomb.
It is found that under quasi-static indentation, the empirical formula proposed by Shi et al. for foam can be used for honeycombs as well.
Performance Analysis of NoSQL and Relational Databases with CouchDB and MySQL for Application’s Data Storage
In the current context of the emergence of several types of database systems (relational and non-relational), choosing the type of database system for storing large amounts of data in today's big data applications has become an important challenge. In this paper, we aimed to provide a comparative evaluation of two popular open-source database management systems (DBMSs): MySQL, as a relational DBMS and, more recently, also a non-relational (document-based) DBMS, and CouchDB as a non-relational DBMS. This comparison was based on a performance evaluation of CRUD (CREATE, READ, UPDATE, DELETE) operations for different amounts of data, to show how these two databases could be modeled and used in an application and to highlight the differences in response time and complexity. The main objective of the paper was to make a comparative analysis of the impact that each specific DBMS has on application performance when carrying out CRUD requests. To perform the analysis and to ensure the consistency of the tests, two similar applications were developed in Java, one using MySQL and the other using a CouchDB database; these applications were further used to evaluate the response times of each database technology for the same CRUD operations on the database. Finally, a comprehensive discussion based on the results of the analysis was carried out, centered on the results obtained, and several conclusions were revealed. Advantages and drawbacks of each DBMS are outlined to support the decision of choosing a specific type of DBMS for use in a big data application.
Introduction
Due to the occurrence and expansion of web applications, the requirements for quick data storage and processing increased drastically. Until recently, the relational model has been the most widely used approach for managing data; many of the most popular database management systems (DBMS) implemented the relational model. However, the relational model has several limitations that can be problematic in certain use cases [1,2].
The main issue is that relational databases are not effective when handling large volumes of data. Due to the need for high performance, a new type of database has emerged: NoSQL (Not Only SQL (Structured Query Language)). NoSQL is a generic name for database management systems that are not aligned to the relational model and is widely used by the industry.
This study of performance metrics included latency and database size. The comparison was based on the performance evaluation of inserting and retrieving a huge amount of IoT (Internet of Things) data, while assessing the performance of these two types of databases to work on resources with different specifications in cloud computing.
The challenge of finding a balance between the characteristics of classical relational database management systems and the opportunities offered by NoSQL database management systems (MongoDB) was investigated in [11,12], which propose an integration approach to support a hybrid database architecture (MySQL, MongoDB, and Redis).
In [13], the authors presented a comparative study between MongoDB as a non-relational database and MySQL as a relational database, describing the advantages of using a non-relational database compared to a relational one when integrated into a web-based application that needs to manipulate large amounts of data.
The authors of [3] conducted a performance evaluation of CRUD operations for several non-relational and relational databases (MongoDB, CouchDB, Couchbase, Microsoft SQL Server, MySQL, and PostgreSQL); the performance was evaluated based on the CRUD operations in terms of query times and data size, with and without replication; however, in that case, the analysis was oriented only toward simple queries over a simple database structure. In [14], a comparison between Elasticsearch and MySQL was made by searching for values in large key-value datasets.
In this context, our paper makes a detailed analysis of all CRUD operations and shows how the performance of the application can be influenced by increasing the complexity of the queries and the amount of data on which those operations are applied. Besides the well-known MySQL database, the paper also approaches CouchDB, a non-relational database that has been less studied in the literature. The analysis considers a complex database with multiple joins and different data structure approaches: two structures for MySQL, one for the relational model and one for the document support added more recently in MySQL, and two structures for CouchDB.
Method and Testing Architecture
The testing architecture involves the implementation of three applications in Java, using IntelliJ IDEA [15]: one for CouchDB, one for relational MySQL, and one for document-based MySQL. Even though all the applications contain similar information, structured in different forms, two different applications were needed for the MySQL case due to the widely different structure of the application in these two approaches. In the case of the relational database, the application is built out of entities, repositories, and services besides the main class, according to the relational database structure from Figure 1. Queries in MySQL were executed from classes that have the Repository annotation, and each method has a @Query annotation where the command is placed [16].
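As an illustration of this setup, a minimal repository sketch is shown below, assuming Spring Data JPA; the entity, field, and method names are illustrative and not taken from the paper's own listings:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    // Native query attached to the method through @Query, as described in the text.
    @Query(value = "select c.* from country c where c.continent_id = :continentId", nativeQuery = true)
    List<Country> findByContinentId(@Param("continentId") Long continentId);
}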
For the non-relational databases (CouchDB and document-based MySQL), each application contains the main class where the methods are implemented, and the objects are described by the structures from Figures 2 and 3. For the MySQL non-relational approach, the commands were predefined by the MySQL Connector Java version 8.0.16 library [17]. For the CouchDB approach, the commands were either predefined by the Ektorp library [18] or issued as requests. For the relational approach, MySQL version 8.0.21 was used. As shown in Figure 1, a complex database structure with multiple joins was used in order to highlight the possible differences between the two types of databases in the case of a large and complex application and different query complexity. The deleted field was used to mark an element as soft deleted, so that queries can either ignore these elements or not, depending on the context. The created_at and updated_at fields were useful to see whether an item has been updated: if the two dates are different, we know that the item has been changed. For the non-relational databases, Apache CouchDB version 3.1.0 was used. To transpose the structure of the similar database from MySQL into CouchDB, we considered two possible structures (first and second).
CouchDB First Structure
CouchDB's first structure is presented in Figure 2. By using this structure, the duplication of data is eliminated: each document that must contain a reference to another saves the id of the document on which it depends. This type of structure reduces the amount of data saved for each document; however, when deleting a document that is referred to, care should be taken to also delete all the documents that refer to it, otherwise the database could end up with inconsistent data.
By default, documents that do not contain the _deleted field are not considered deleted; therefore, this field was not added from the beginning and is added only when this action is intended to take place. The _rev field is entered automatically when an element is inserted, so we do not need to enter it. Using this structure, the documents are dependent on each other.
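As a small sketch of what such a referencing document could look like, built with the same map-based Ektorp approach used later in the paper (field names and values are illustrative):

Map<String, Object> country = new HashMap<String, Object>();
country.put("_id", "20");
country.put("name", "Italy");
country.put("currency", "EUR");
country.put("continent_id", "10");   // reference to the continent document by id, instead of embedding it
db.create(country);                  // db is the Ektorp CouchDbConnector used throughout the paper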
CouchDB Second Structure
Each document must contain all the data for every entity; so, for example, to enter a city, the document must contain all the data of the continent, the country, and then of the city. Documents can contain a single country, a single city, a single hotel, or a single restaurant. A document that contains information about a hotel, represented using CouchDB's second structure, is presented in Figure 3.
Starting with version 8.0, MySQL also provides support for non-relational, document-based databases. The structure used for the document-based MySQL approach is identical to CouchDB's second structure presented in Figure 3. For all structures, first, we added the continents, then the countries, cities, restaurants, and hotels, and finally the restaurants that have a hotel.
Performance Tests
The database comparison involved testing performance time for all the CRUD (CREATE, READ, UPDATE, DELETE) operations over the four versions of databases: relational MySQL, document-based MySQL, CouchDB first structure, and CouchDB second structure.
In order to have the most conclusive and relevant results, each operation was applied to a different number of elements, from 1000 to 1,000,000. The elements were generated randomly, but we took into account the fact that, when using the first structure in CouchDB, many requests are made for selections, so if the volume of data obtained from the filter had been very big, the response times would have been too long to be compared with the other approaches. For each number of elements, each operation was repeated five times, and the results listed in the tables represent their average.

When using documents in MySQL, the connection for the document-based MySQL approach is made as follows:

SessionFactory sFact = new SessionFactory();
Session session = sFact.getSession("mysqlx://name:password@localhost:33060");
Schema schema = session.createSchema("demo", true);
Collection collection = schema.createCollection("master", true);

The responses to the requests made for the NoSQL database come in the form of a JSON. For CouchDB, each request was made through OkHttpClient, which is an efficient HTTP and HTTP/2 client for Java applications [19], according to the following model:

OkHttpClient client = new OkHttpClient().newBuilder().build();
MediaType mediaType = MediaType.parse("application/json");
String credential = Credentials.basic("username", "password");
RequestBody body = RequestBody.create("requestBody", mediaType);

When looking for items:

Request request = new Request.Builder().url("http://127.0.0.1:5984/master/_find")

When we want to insert or update items, the URL "http://127.0.0.1:5984/master/_bulk_docs" is used instead. The request is then completed and executed:

        .method("POST", body)
        .addHeader("Content-Type", "application/json")
        .addHeader("Authorization", credential)
        .build();
Response response = client.newCall(request).execute();

All the tests presented further were conducted on a computer with the following configuration: Windows 10 Pro 64-bit, an Intel Core i5-8250U CPU @ 1.60 GHz, 16 GB RAM, and a 512 GB SSD.
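Continuing from the response obtained above, a minimal sketch of how such a JSON answer could be consumed is shown below; the ResponseDTO class referenced throughout the tests is not listed in the paper, so its getDocs() accessor is assumed from the usage shown later:

String json = response.body().string();                                        // raw JSON answer from CouchDB
ResponseDTO responseDTO = new ObjectMapper().readValue(json, ResponseDTO.class);
List<Continent> docs = responseDTO.getDocs();                                  // documents returned by the _find endpoint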
INSERT Operation
For each database structure, the insert operations are presented in Table 1.
COUCHDB SECOND STRUCTURE: Insert operation
Map<String, Object> continent = new HashMap<String, Object>();
Map<String, Object> country = new HashMap<String, Object>();
country.put("name", "Italy");
country.put("population", 12345);
country.put("surface", 123455);
country.put("currency", "EUR");
country.put("capital", "Roma");
continent.put("_id", 10);
continent.put("name", "Europe");
continent.put("population", 123456789);
continent.put("surface", 1234567);
continent.put("created_at", date);
continent.put("updated_at", date);
continent.put("country", country);
db.create(continent);

In the case of data insertion into the CouchDB database, we have to create a key-value map with the fields used and then insert it through a predefined command. In the scenario of using document-based MySQL, we created the object to be inserted and, with the predefined add command, we added it after mapping it as a string. The difference between the first structure in CouchDB and relational MySQL on the one hand, and the second structure in CouchDB and document-based MySQL on the other, is that for the first two we have to verify that the continent for which we want to add the country exists, while for the other two this is not necessary; this verification automatically takes longer. Figure 4 presents the average execution time of the INSERT operation. It shows that the NoSQL approach has the best performance for this operation when the number of elements increases to 1,000,000; however, when the number of elements is decreased, the times are relatively close to the ones obtained for the relational approach.
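For completeness, the "add the object after mapping it as a string" insert mentioned above for document-based MySQL could look roughly like the following sketch; the JSON content is illustrative and the collection is the one created in the connection snippet earlier:

String json = "{\"_id\": \"10\", \"name\": \"Europe\", \"population\": 123456789, "
        + "\"country\": {\"name\": \"Italy\", \"currency\": \"EUR\"}}";
collection.add(json).execute();   // Collection.add() accepts a JSON string in the X DevAPI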
UPDATE Operation
A simple, single update can be made on an indexed field, in which we change the name of the country with id x; for each database structure, the update operations are presented in Table 2.
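For CouchDB's first structure, this single update relies on Ektorp's predefined commands; a minimal sketch is shown below (the document id and the new value are illustrative, and Ektorp's map-based document handling used elsewhere in the paper is assumed):

Map<String, Object> doc = db.find(Map.class, "x");   // fetch the country document by its id
doc.put("name", "Romania");                          // change the indexed field
db.update(doc);                                      // save it back; CouchDB assigns a new _rev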
COUCHDB SECOND STRUCTURE: Single update operation
Send a request with requestBody: "{\"selector\": {\"_id\": \"2\"}}"
Parse the response using the second structure of ResponseDTO:
List<Continent> updatedList = responseDTO.getDocs().stream().peek(doc -> {
    doc.setName("Romania");
}).collect(toList());
UpdateDTO updateDTO = new UpdateDTO();
updateDTO.setDocs(updatedList);
String asString = new ObjectMapper().writeValueAsString(updateDTO);
Send a second request with requestBody: asString

As shown in Figure 5, the relatively big time difference between the first and the second structure in CouchDB arises because, in the case of the first one, we made the selection and the update through predefined commands, while in the second case the update was made through a request, which automatically takes longer. When using MySQL, behind the scenes the update first looks for that item and then changes it. If we had defined indexes on the table, the time required for the update would have been much shorter, because indexes improve the speed of operations in the database. For document-based MySQL, we searched for the document to be modified and, with the replace command, we modified the desired field. A more complex, multiple update operation, which updates one hundred elements by changing the currency of all the countries on the European continent and setting it to the value "test", is presented in Table 3.
Table 3. Multiple update operations.
COUCHDB FIRST STRUCTURE: Multiple update operation
Send a request to get the continent id using requestBody: "{\"selector\":{\"name\":\"Europe\"}}"
Parse the response using the first structure of ResponseDTO.
Send a second request to get all documents with this continent id, requestBody: "{\"selector\":{\"continent_id\":" + continentId + "}}"
Parse the response using the first structure of ResponseDTO and get all documents.
COUCHDB SECOND STRUCTURE: Multiple update operation
Send a request to get all documents with continent name Europe using requestBody: "{\"selector\":{\"name\":\"Europe\"}}"
Parse the response using the second structure of ResponseDTO:
List<Continent> updatedList = responseDTO.getDocs().stream().peek(doc -> {
    doc.getCountry().setCurrency("test");
}).collect(toList());
UpdateDTO updateDTO = new UpdateDTO();
updateDTO.setDocs(updatedList);
String asString = new ObjectMapper().writeValueAsString(updateDTO);
Send a second request with requestBody: asString

It can be noticed that, in the body of the request for CouchDB's second structure, we have to send each document with all its fields, even if we have modified only one field. If only the modified field were sent, all documents would be saved in that form, namely, all other fields would disappear. In this form, documents can be entered into the database.
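The ResponseDTO and UpdateDTO helpers used above are not listed in the paper; a minimal Jackson-compatible sketch consistent with how they are used could be:

@JsonIgnoreProperties(ignoreUnknown = true)   // the _find answer also carries fields such as "bookmark" and "warning"
class ResponseDTO {
    private List<Continent> docs;             // matched documents returned under "docs"
    public List<Continent> getDocs() { return docs; }
    public void setDocs(List<Continent> docs) { this.docs = docs; }
}

class UpdateDTO {
    private List<Continent> docs;             // _bulk_docs expects the documents under "docs"
    public List<Continent> getDocs() { return docs; }
    public void setDocs(List<Continent> docs) { this.docs = docs; }
}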
As can be seen in Figure 6, the time difference between the first structure and the second one is explained by the way the update of the elements takes place.
In the case of the first structure, two requests have to be made to get all the countries, followed by iterating through the obtained list and updating each element: first, a new modified statement is created for the elements that satisfy the search condition, and then the desired field is updated with the new value.
However, MySQL performance is the best when a small number of elements are involved because in CouchDB both obtaining and saving the modified elements is made through requests, which automatically takes longer and depends on the response times to these requests. In the case of the second structure, which takes significantly less time, the list is obtained through only one request, followed by updating the elements of the list locally and afterwards, through another request, by saving the new data in the database.
In MySQL, the join between tables decreases performance because the element is first looked up in the two tables and then updated. Using document-based MySQL, the update is done through the predefined modify command, with which the search is made for the elements to be modified, and the set command, through which we modify the value of the desired key. However, when the number of elements increases, the performance of MySQL decreases, with the times for traversing the elements increasing compared to CouchDB's second structure.
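A sketch of such a modify/set call in the X DevAPI is shown below; the search condition and the document path are illustrative:

collection.modify("name = 'Europe'")
          .set("country.currency", "test")
          .execute();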
SELECT Operation
Several types of selections were made to better observe the response differences between the different types of databases. The queries used were called by a method in which the required parameters were sent. In the case of CouchDB, complex selections were made in the form of a post request in which we can pass conditions, filters, fields, and limitations.
Simple SELECT
A simple select that returns the country with id 222 is considered; for CouchDB, using both the first and the second structure, the search is made as presented in Table 4. In the case of this simple selection, the search in the CouchDB database is much faster than in relational MySQL but almost 50% slower than in document-based MySQL. When using relational MySQL, the response times are larger because it scans the entire table until it finds the item.
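The Table 4 commands themselves are not shown here; based on the APIs used elsewhere in the paper, plausible forms could be the following sketch (the id is illustrative):

Map<String, Object> result = db.find(Map.class, "222");       // CouchDB: direct lookup by document id via Ektorp
DocResult res = collection.find("_id = '222'").execute();     // document-based MySQL: find by id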
Simple SELECT with Single Inner Join
A simple select with a single inner join that returns all countries from Europe is presented in Table 5. Table 5. Select with single inner join operations.
DOCUMENT-BASED MYSQL: Select with single inner join operation
DocResult res = collection.find("name = 'Europe' and country.name is not null and country.city.name is null").execute();
COUCHDB SECOND STRUCTURE: Select with single inner join operation
Send a request to get the continent id using requestBody: "{\"selector\":{\"name\":\"Europe\"}}"
Parse the response using the first structure of ResponseDTO.
Send a second request to get all the documents with this continent id, requestBody: "{\"selector\":{\"continent_id\":" + continentId + "}}"
ResponseDTO responseDTO = new ObjectMapper().readValue(string, ResponseDTO.class);
List<Country> countries = responseDTO.getDocs().stream().map(Continent::getCountry).filter(doc -> doc.getCity() == null).collect(toList());

Using the first structure, the search is done by the fields contained in a document. In this case, the reference to another document is made by id, thus we cannot search by the fields of that document, only by its id. Therefore, we first search for the document that has the given name and then look for all the countries that have this continent_id. Consequently, the response times were automatically longer.
Using the second structure, the selection is made through a single request in which all the documents from the continent of Europe are obtained. The next step is to interpret the answer and extract them. Since only the countries were needed, all the documents that do not have a city were taken and, finally, a list of all the countries was obtained. The command used for document-based MySQL is find, to which the search condition is given as a parameter. The resulting response times differ greatly between MySQL and CouchDB, as shown in Figure 8, but there may be situations where the request fails for various reasons (e.g., TimeoutException, Connection refused), and if this case is handled by retrying it several times, the times for CouchDB will increase automatically.
In this case, when a small number of elements was involved, relational MySQL had better or similar performance (up to 10,000); the chosen structure for CouchDB influences performance only when large numbers of elements are involved, but document-based MySQL has the best response times regardless of the number of elements.
Complex Select with Two Inner Joins
A complex select using two inner joins that returns all cities from Asia is presented in Table 6. In the case of the first structure, a request is executed to get a list of all the existing countries on this continent, and for each country, another request is called in which it searches all the documents with the specified country_id. When using the second structure, it can be seen that everything has been simplified to a single request, identical to the one above, the difference being made in the code, when we filter the data we need. Table 6. Complex select with two inner join operations.
DOCUMENT-BASED MYSQL: Select with two inner join operation
DocResult res = collection.find("name='Asia' and country.city.name is not null and (country.city.restaurant.name is null or country.city.hotel.name is null)").execute();
CouchDB SECOND STRUCTURE: Select with two inner join operation
Send a request to get all documents with continent name Asia using requestBody: "{\"selector\":{\"name\":\"Asia\"}}"
Parse the response using the second structure of ResponseDTO:
List<City> cities = responseDTO.getDocs().stream().map(doc -> doc.getCountry().getCity())
        .filter(doc -> (doc.getRestaurant() == null && doc.getHotel() == null)).collect(toList());

As shown in Figure 9, the first structure in the case of CouchDB has the longest times because many requests are made, and if one fails, it has to be repeated. For a large number of elements, the best time was obtained with the second structure used in CouchDB, because only one request was made; however, overall, document-based MySQL always had better results. Once again, relational MySQL exhibits better or similar performance when a small number of elements is involved (up to 10,000). For CouchDB's first structure, the performance decreases as the number of elements increases.
Complex Select with Three Inner Joins
Another complex select operation, with three inner joins, that returns all hotels on the Asian continent is presented in Table 7.
Table 7. Complex select with three inner join operations.
RELATIONAL MYSQL: Select with three inner join operation
@Query(value = "select h.* from continent con inner join country ctr on con.id=ctr.continent_id inner join city c on ctr.id=c.country_id " + "inner join hotel h on c.id=h.city_id where con.name =:name", nativeQuery = true)
DOCUMENT-BASED MYSQL: Select with three inner join operation
DocResult res = collection.find("name = 'Asia' and country.city.hotel.name is not null").execute();
CouchDB SECOND STRUCTURE: Select with three inner join operation
Send a request to get all documents with continent name Asia using requestBody: "{\"selector\":{\"name\":\"Asia\"}}"
Parse the response using the second structure of ResponseDTO:
List<Hotel> hotels = new ArrayList<>();
responseDTO.getDocs().stream().map(doc -> doc.getCountry().getCity().getHotel());

Using the CouchDB first structure, first we have to execute the query passed in point 3 to get the list of cities, and then, for each city, we look for the documents that contain this city_id. If no other conditions were added, the list of both hotels and restaurants in that city is obtained. Afterwards, all the documents with a number_of_rooms greater than 0 were filtered so that only the hotel-type documents were obtained.
Using the second structure, the same request was executed and, at the end, after interpreting the answer, all the hotels were extracted, each being checked so that it is added only once to the final list. From Figure 10, it can be seen that, again, the first structure used in CouchDB requires the longest response time, due to the large number of requests. Overall, document-based MySQL has the best performance, for a large number of elements still being comparable with CouchDB's second structure. Also, it can be noticed that the relational MySQL approach exhibits good results for a small number of elements.
Complex Select with Four Inner Joins
Another complex select operation, with four inner joins, that returns all restaurants that have a hotel with more than 12 rooms is presented in Table 8.
Table 8. Complex select with four inner join operations.
CouchDB SECOND STRUCTURE: Select with four inner join operation
Send a request using requestBody: "{\"selector\": {\"name\": \"Asia\", \"country.city.restaurant.hotel.number_of_rooms\": {\"$gt\": 12}}}"
Parse the response using the second structure of ResponseDTO:
List<Restaurant> restaurants = responseDTO.getDocs().stream().map(doc -> doc.getCountry().getCity().getRestaurant())
        .collect(toList());

Using the CouchDB first structure, we filtered in code: we assumed that we had already executed the above query, through which we obtained all the documents that have a city_id, and then we filtered in the code all the documents that contain the number_of_rooms field, after which all the documents whose hotel_id field is present and not empty. In this way, all the restaurants that also have a hotel were obtained, and in the end a filtering according to these two obtained lists was made. Using the second structure, the request is much simpler, the condition that the restaurant has a hotel and that its number of rooms is greater than 12 being passed directly in the body of the request.
As shown in Figure 11, document-based MySQL has the best response times in this case as well. The time required when using relational MySQL increases due to the joins between the tables: the first table is compared with the second based on the join condition and, if the condition is met, a new row containing the data from both tables is created.
In the case of the first structure, the times increase due to the three filters considered, in which all the elements are traversed. Using the second structure, the selection was simpler and faster, since we passed all the conditions in the request body. After all the results from the select operations were analyzed, we noticed that the response times for the second CouchDB structure and for document-based MySQL have relatively close values because, in the CouchDB case, the difference is made in the code when mapping the document and extracting the necessary data, and in the MySQL case only the condition given to the find method differs.
DELETE Operation
Generally, two types of deletion operations can be applied to each element: soft delete, when the item is marked as deleted (deleted = true), and hard delete, when the item is completely deleted. The soft delete approach, in which one hundred elements were deleted, is presented in Table 9. Table 9. Soft delete operations.
CouchDB SECOND STRUCTURE: Soft delete operation
Send a request with requestBody: "{\"selector\": {\"name\": \"Asia\"}}"
Parse the response using the second structure of ResponseDTO:
List<Continent> updatedList = responseDTO.getDocs().stream().peek(doc -> doc.getCountry().setDeleted(true)).collect(toList());
UpdateDTO updateDTO = new UpdateDTO();
updateDTO.setDocs(updatedList);
String asString = new ObjectMapper().writeValueAsString(updateDTO);
Send a second request with requestBody: asString

In MySQL, for the relational as well as for the non-relational approach, the elements were marked as deleted by an update in which the deleted field was set to true. In CouchDB, by default, all documents that do not have the _deleted field are not considered deleted, so there is no need to add a new field from the start; in order to mark a document as deleted, this field has to be added with an update.
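For the relational approach, the update that marks elements as deleted could be issued from a repository method along these lines; this is a sketch only, the method and column names are illustrative, and update queries in Spring Data additionally require the @Modifying annotation:

@Modifying
@Query(value = "update country set deleted = true where continent_id = :continentId", nativeQuery = true)
void softDeleteByContinent(@Param("continentId") Long continentId);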
In the case of CouchDB's first structure, a select is used after which an update request with the _deleted field added was made. In this case, all the countries on continent x were marked as deleted. Using the second structure, the steps were similar, only when the list of cities was taken, the field _deleted was set as true and a new request with these documents was called.
Because the soft delete operation was made as an update, by which only the concerned elements are marked as deleted, in this case, the second structure in CouchDB becomes more efficient as the number of elements increases, as shown in Figure 12.
The first structure in CouchDB is inefficient due to the large number of requests, whereas MySQL is efficient only for a small number of items, becoming very inefficient for large numbers. To permanently delete an item, the commands are presented in Table 10. If there is any reference to this object, when using relational MySQL an exception will appear, because this is not a cascading delete but a simple delete of an element.
To permanently delete an item, the commands were presented in Table 10. If there is any reference to this object when using relational MySQL an exception will appear because this is not a cascading delete, is a simple delete of an element. In the case of relational MySQL, before deleting an element, the element is searched and checked to verify if it is not used as a foreign key by another table and then is deleted. There is no check in CouchDB or in document-based MySQL.
DOCUMENT-BASED MYSQL: Hard delete operation
collection.removeOne("444");
CouchDB SECOND STRUCTURE: Hard delete operation
Map<String, Object> resultMap = db.find(Map.class, "444");
if (resultMap != null) {
    db.delete(resultMap);
}
The actual deletion of an element is made using the same command in CouchDB for both structures; thus, they have similar performance. As shown in Figure 13, CouchDB is more efficient than relational MySQL for a large number of elements because the latter checks the foreign key constraint; however, the actual deletion in CouchDB is slightly less efficient than in document-based MySQL because, in order to delete a document, we must first look for it so that it can be given as a parameter to the delete method (this method accepts as parameters either a document or the id and rev fields, both assuming a search first), while in document-based MySQL we have the removeOne method directly, to which we can give the document id as a parameter.
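The id-and-revision variant of the Ektorp delete mentioned above could be sketched as follows (the document id is illustrative):

String rev = (String) db.find(Map.class, "444").get("_rev");   // the current revision still has to be looked up first
db.delete("444", rev);                                         // delete by id and revision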
Discussion
The results obtained showed that when we increased the volume of data in case of using relational MySQL, this leads to a considerable loss of performance that is greater than in the case of using CouchDB for insert, select and delete operations; however, the document-based MySQL has its best response time for overall operations.
Response times are more favorable for CouchDB and document-based MySQL in the case of an insert operation because, for relational MySQL, when an element is inserted, all the constraints applied on the tables are checked, while in CouchDB or document-based MySQL only the id is checked. For the insert operation, the command used is identical for the two types of CouchDB structures; however, if several documents are to be inserted simultaneously, a request similar to the update one should be used.
A simple update is much faster in document-based MySQL than in relational MySQL or CouchDB, regardless of the number of elements involved. For CouchDB's second structure, regardless of whether a simple update or multiple updates are carried out, a request is made, which automatically depends on other factors (such as the network; exceptions such as TimeoutException or Connection refused may occur). The most common exception was TimeoutException, which can occur when the network connection is slow: it takes too long for the CouchDB server to respond and, if the request times out, it is retried a certain number of times, which is the reason why the execution times are longer than for MySQL and the first structure of CouchDB.
For complex updates, when fewer elements are involved, document-based MySQL can be much faster than relational MySQL. However, as the number of elements increases, the second structure in CouchDB becomes the fastest. The performance of CouchDB's first structure degrades significantly when complex updates are involved and as the number of elements increases because, after obtaining the elements through a request, they are iterated over in a loop and, for each one, a simple update command is called.
For simple select operations, regardless of the number of elements, the response times are favorable for document-based MySQL and CouchDB, but with increasing query complexity the times start to vary. The increasing complexity of the selection query has a more significant impact on the MySQL approach than on CouchDB's second structure or document-based MySQL as the number of elements increases; therefore, for a small number of elements, regardless of the complexity of the select, MySQL responds faster, while CouchDB keeps a roughly constant average as the number of elements grows. The main reason why the second structure in CouchDB has a longer response time for a small number of elements, and does not show a major increase with the number of elements, is the request itself, which is the same and differs only in how the answer is interpreted. In the case of these complex selections, the first structure in CouchDB is completely inefficient due to the multiple requests that were executed.
Regarding soft and hard deletion, the response times are longer for relational MySQL compared to CouchDB or document-based MySQL as the number of elements increases; however, the differences in performance times between CouchDB's first and second structure were not as big as for the other types of operations.
Using the first type of structure reduces the amount of data saved for each document and eliminates the duplication of data. In this case, each document must contain a reference to another; for this, the id of the document on which it depends is saved. Using the second structure, a lot of duplicate data is obtained, but each document is independent, and thus every operation was performed more efficiently in terms of time when large amounts of data were involved.
Among the advantages of the first structure in CouchDB is the way of organizing the data, data are not duplicated, which automatically decreases the storage space, while the second structure contains a lot of duplicated data. For the second structure in CouchDB, in the case of many complex documents, the necessary storage space may increase considerably, depending on their complexity.
Conclusions
In this paper, a comparison between CouchDB and MySQL was carried out in order to evaluate the performance of CRUD operations for different amounts of data and different query complexities. The paper also presents how the two databases can be modeled and used in an application.
Two data structures for the non-relational CouchDB approach were introduced: the first structure, in which each document must contain a reference to another, and the second structure, in which each document contains all the data of each entity.
The first structure in CouchDB proves to be less efficient than the second structure regardless of the number of elements; moreover, the first structure also tends to be, in many cases, more complex in terms of command syntax. Relational MySQL is efficient in some cases for a small number of elements, taking into account that no indexes were defined on any table; had indexes been defined, response times would have been much shorter, because indexes improve the speed of database operations. Document-based MySQL is the most efficient option in most cases when a large number of elements is involved, making it a very good alternative for applications with large amounts of data.
The results show that CouchDB's second structure achieves better performance than relational MySQL for the insert, select, and delete operations, especially for large amounts of data, while still being less efficient than document-based MySQL. The main difference between document-based MySQL and CouchDB's second structure is that most CouchDB operations are performed through HTTP requests, whereas MySQL operations rely on methods already defined by the client library and therefore do not involve external factors.
However, relational MySQL could lead to good performance, sometimes comparable with CouchDB in certain circumstances. The question that arises is related to the effective performance
|
2020-12-03T09:07:36.666Z
|
2020-11-28T00:00:00.000
|
{
"year": 2020,
"sha1": "157bebc7216d5b8138fbda21a5e08cbd0600555e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/23/8524/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a775f37accb120166002d58120a910e3f3999863",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
227247769
|
pes2o/s2orc
|
v3-fos-license
|
Reconstruction Theorem for Germs of Distributions on Smooth Manifolds
The reconstruction theorem is a cornerstone of the theory of regularity structures [Hai14]. In [CZ20] the authors formulate and prove this result in the language of distributions theory on the Euclidean space $\mathbb{R}^d$, without any reference to the original framework. In this paper we generalize their constructions to the case of distributions over a generic $d$-dimensional smooth manifold $M$, proving the reconstruction theorem in this setting. This is done having in mind the extension of the theory of regularity structures to smooth manifolds.
Introduction
The Reconstruction Theorem is one of the cornerstones of the Theory of Regularity Structures [Hai14], the framework in which this theorem was first formulated. This theory provides a milestone in the analysis of stochastic partial differential equations on the Euclidean space R d , which is the original motivation for this theory, since it allows one to apply fixed-point techniques to such equations. Stochastic partial differential equations are also closely related to quantum field theory, in particular through stochastic quantization [PW81], which links stochastic PDEs with the path integral formulation of Euclidean field theory. The idea at the heart of stochastic quantization is to construct the path integral measure of a Euclidean interacting field theory as the invariant measure of a stochastic process whose dynamics is ruled by a parabolic non-linear stochastic PDE. Recently, again in the interplay between stochastic PDEs and quantum field theory, there have also been some efforts to apply the techniques proper of the latter to problems of the former, in particular for renormalization [DDRZ20].
Nonetheless, on account of general relativity, a natural and more general setting for quantum field theory is represented by curved spacetimes. As a consequence, from the point of view of quantum field theory on curved spacetimes [BFDY,BF09,FR16,DDR20], in order to extend this fruitful interaction with stochastic PDEs also to this framework, it would be desirable to have a formulation of the theory of regularity structures on a smooth manifold and a first step in this direction should be the formulation of the reconstruction theorem on a smooth manifold. This is the aim of the present paper. There are already some efforts to this end [DDK19], where the authors consider the Riemannian case.
Recently, in [CZ20], the authors proved that this result can be formulated as a result of distributions theory on the Euclidean space R d , i.e., in D ′ (R d ), without any reference to the theory of regularity structures. In particular, the problem is formulated in the following way. If for any x ∈ R d we are given a distribution F x ∈ D ′ (R d ), one may wonder whether there exists a distribution f ∈ D ′ (R d ) which is locally approximated, in a suitable sense, by F x in a neighbourhood of x ∈ R d , for any x ∈ R d . In their paper, the authors proved that this is actually the case under a further hypothesis, dubbed coherence, providing a bound for the difference F x − F y for y and x sufficiently close. This condition, which is closely related to the generalized Hölder condition, is inspired, in the language of regularity structures, by the notion of model.
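Schematically, and with constants and technical conditions suppressed (this is a paraphrase, not the precise formulation of [CZ20]), the coherence hypothesis and the conclusion of the reconstruction theorem take the following form: for every compact set $K \subset \mathbb{R}^d$ there exists an exponent $\alpha_K \le \min\{0,\gamma\}$ such that
$$\bigl|(F_x - F_y)(\varphi^{\lambda}_{y})\bigr| \;\lesssim\; \lambda^{\alpha_K}\,\bigl(|x - y| + \lambda\bigr)^{\gamma - \alpha_K}, \qquad x, y \in K,\ \lambda \in (0,1],$$
and, for $\gamma > 0$, the reconstruction $f$ is then uniquely characterized by
$$\bigl|(f - F_x)(\varphi^{\lambda}_{x})\bigr| \;\lesssim\; \lambda^{\gamma}, \qquad x \in K,\ \lambda \in (0,1],$$
for suitable rescaled and recentred test functions $\varphi^{\lambda}_{x}$.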
From the viewpoint of the extension to the manifold setting, the main advantage of the formulation of [CZ20] is that it is stated in a purely distributional language. Since distributions have an intrinsic local nature, they can be considered in a very natural way also on a generic smooth manifold M [Hör03,BGP07]. This argument makes this version of the reconstruction theorem the most convenient for the extension to the smooth manifold setting.
To this end we translate the notion of germ of distributions and the key notion of coherence to the case of a smooth manifold, yielding a more local definition of these notions. Nonetheless, in Proposition 13 we prove that the notion of coherence we give is actually independent of the atlas, in agreement with the locality of the whole construction. As a consequence we have a geometric notion of coherence.
The main result of this paper is the reconstruction theorem, see Theorem 18, for γ-coherent germs of distributions, with γ > 0, see Section 2 for details. In particular, with the notation of Definition 4, the main result is the following.
Theorem 1: Let M be a d-dimensional smooth manifold and let A = {(U j , φ j )} j be an atlas over M . Let γ > 0 and let F = (F p ) p∈M be a γ-coherent germ of distributions on (M, A). There exists a unique distribution RF ∈ D ′ (M ) such that, for any local chart (U, φ) ∈ A, φ * (RF ) ∈ D ′ (φ(U )) satisfies the reconstruction bound, for any compact set K ⊂ U and for any h ∈ D(φ(U )), uniformly for p ∈ K and for λ ∈ (0, 1]. This result is proven as a consequence of a localized (on an open set) version of the reconstruction theorem of [CZ20] and of the very characterization of the notion of distribution on a smooth manifold, see Appendix A and [Hör03]. A further advantage of our result is that it shows that the reconstruction theorem holds true already at the level of smooth manifolds, without calling for further structures, such as Riemannian ones.
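By analogy with the Euclidean statement of [CZ20], the bound in Theorem 1 can plausibly be written in the schematic form
$$\bigl|\bigl(\phi_*(RF) - \phi_* F_p\bigr)\bigl(h^{\lambda}_{\phi(p)}\bigr)\bigr| \;\lesssim\; \lambda^{\gamma}, \qquad p \in K,\ \lambda \in (0,1],$$
where the germ $F_p$ is read through the chart $(U, \phi)$; this is a schematic reading of the statement, not its precise formulation.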
Eventually, we discuss in detail the dependence of the reconstruction on the atlas for γ-coherent germs of distributions with γ > 0. In particular, in Theorem 19 we prove that, in such a scenario, the reconstruction is independent of the atlas.
In the Euclidean space R d setting, if the coherence parameter γ is negative, one has existence of the reconstruction, yet without uniqueness. This result can be achieved also on the smooth manifold setting, as we discuss in Theorem 20, where we prove existence without uniqueness of the global reconstructed distributions. We underline that, in addition to being non-unique, these global reconstructed distributions depend on the atlas and on the partition of unity used to construct them.
Outline of the Paper The paper is organized as follows: in Section 2 we introduce the notion of germ of distributions on a smooth manifold M and the notion of coherence, which is the key to the reconstruction. In this section we also discuss enhanced coherence. Moreover, in this section we prove that coherence does not depend on the atlas. In Section 3 we state and prove the reconstruction theorem for a γ-coherent germ of distributions on a smooth manifold, with γ > 0. In the same section we also discuss the independence of the reconstruction from the atlas for γ > 0. Eventually, we state and prove the reconstruction theorem for γ-coherent germs of distributions with γ ≤ 0. Finally, on the one hand, in Appendix A we shall recall some notions of distributions theory on smooth manifolds in order for the paper to be self-contained. On the other hand, in Appendix B we discuss coherence and enhanced coherence on an open set of the Euclidean space R d , which is a propedeutical case study to the case of a smooth manifold.
Notation In the following, we will denote with M a d-dimensional connected smooth manifold such that ∂M = ∅. Moreover, we will endow the manifold M with the Borel σ-algebra. We denote with (U, φ) a generic local chart of M : i.e., U ⊂ M is an open set and φ : U → φ(U ) ⊂ R d is a diffeomorphism, representing a coordinate system on U . Given a generic function f : M → N , with M and N smooth manifolds, we shall denote with $f^*$ and $f_*$ the pull-back and the push-forward, respectively, via this map.
We denote with D(M ) the space of smooth and compactly supported functions over M , endowed with the usual locally convex topology, and with D ′ (M ) the space of distributions over M , see Appendix A for further details. With B(0, 1) ⊂ R d we denote the unit ball centred at the origin. Given U ⊂ R d and a function f ∈ D(U ), we introduce the rescaled version $f^{\lambda}_{x}(y) := \lambda^{-d} f\bigl((y - x)/\lambda\bigr)$ of this function, for x ∈ R d and λ ∈ (0, 1]. We shall also adopt the convention $f^{\lambda} \equiv f^{\lambda}_{0}$. In the following, we shall integrate test-functions f ∈ D(R d ) with respect to the Lebesgue measure dx on R d . This is just for convenience; a priori we could consider any measure on R d which is absolutely continuous with respect to the Lebesgue measure. Eventually, with $\lesssim$ we shall denote an inequality that holds up to a multiplicative finite constant.
Main Definitions
In this section we shall define the main tools we are going to use in the paper. We shall also prove some of their properties.
Following [CZ20], we start by introducing the notion of germ of distributions. We shall now define the notion of coherent germ of distributions on a manifold, which is the key for the reconstruction theorem.
Remark 5: At first sight, the above definition depends on the atlas A. Nonetheless in Proposition 13 we shall prove that the above definition is actually independent of the atlas.
Remark 6: In the previous definition, we adopted the constraint λ ∈ (0, 1]. Nonetheless, this can be replaced by λ ∈ (0, η], for any η > 0. Indeed, all bounds are given up to a multiplicative constant. We shall use this fact in the following when discussing enhanced coherence in Appendix B. Moreover, in the following we shall be interested in the behaviour of all structures for λ → 0 + . These are not influenced by the choice of η > 0. First of all, we can refine the dependence on the atlas of the notion of coherence. In particular, it is independent of the coordinates. Proposition 7: With the notation of Definition 4, let F be a γ-coherent germ on (M, A) and let (U, φ) ∈ A. Let (U, ψ) be a second chart on the same open set U ⊂ M . Then the γ-coherence condition holds true also with respect to the chart (U, ψ). Moreover, also the α U parameters are independent of the coordinates.
Proof. First of all, coherence on (M, A) entails the existence of a test-function f ∈ D(φ(U )) for which the coherence bound holds uniformly for p, q ∈ K and for λ ∈ (0, 1]. We prove that there exists a test-function $\tilde g$ ∈ D(ψ(U )) such that the coherence condition with respect to the chart (U, ψ) is satisfied. For any g ∈ D(ψ(U )), one has a bound in terms of the supremum norm ∥ · ∥∞ and of Jac(ψ ∘ φ −1 ), the Jacobian of the change of coordinates. We can now choose $\tilde g$ ∈ D(ψ(U )) such that (ψ ∘ φ −1 ) * $\tilde g$ = f , with f ∈ D(φ(U )) as above. As a consequence, we get the desired estimate, where in the last inequality we exploited the uniform bound on the Jacobian.

Remark 8: The notion of coherence of a germ of distributions can be stated in an equivalent form by splitting the two cases |φ(p) − φ(q)| ≤ λ and |φ(p) − φ(q)| > λ; in particular, Equation (2.1) can be rewritten accordingly.

Enhanced Coherence

In this paragraph we shall refine the notion of coherence on a smooth manifold. This leads to the notion of enhanced coherence. In the same spirit of [CZ20], the idea is to drop the dependence on the particular test function f ∈ D(φ(U )). To this end, we resort to the same argument as for the case of an open subset of R d . Indeed, on account of Definition 4, given a γ-coherent germ F p on a smooth manifold (M, A) and a local chart, one obtains a germ which is coherent on an open subset of R d in the sense of Definition 24. As a consequence, we can apply locally Proposition 29 to get the following definition of coherence on a smooth manifold, which is equivalent to Definition 4, cf. Appendix B.
Remark 10: Although Definition 4 and Definition 9 are equivalent, the latter is more advantageous. Indeed, it establishes a bound which is independent of the test function. As a by-product, it also endows the space of γ-coherent germs of distributions with a vector space structure. At the same time, Definition 4 is preferable from a computational point of view, since it allows one to establish coherence by only checking the defining property for a single test-function.
Remark 11: Observe that the equivalence between Definition 4 and Definition 9 entails that also the notion of enhanced coherence is independent of the coordinates -see Proposition 7. Alternatively, this independence can be proven directly following an approach similar to that of Proposition 7 and exploiting the boundedness property of the Jacobian of the change of coordinates.
Remark 12: On account of Proposition 31, also in the case of a smooth manifold, the notion of coherence is stable under restriction of an open set. More precisely, adopting the same notation of Definition 9, if we consider V ⊂ U , then F p is a γ-coherent germ of distributions in the sense of Definition 9 also with respect to any local chart (V, φ).
We are now in position to prove that coherence is independent of the atlas.
Notice that by independence of the coordinate, cf. Proposition 7, and by stability of coherence under restriction of the open set, cf. Remark 12, F p satisfies the bound of γ-coherence, given by Equation (2.3), on all the open sets U ′ i , for i ∈ I. Moreover, being all these sets contained in U ′ , we can set on all of them a unique coordinate, which we call φ. In order to prove the thesis, we first prove the following claim: the coherence bound of Equation In this scenario, we can split the compact set as K = K j ∪ K ℓ , with K j ⊂ U ′ j and K ℓ ⊂ U ′ ℓ compact sets and K j ∩ K ℓ = ∅. In the next step, we shall prove the coherence bound, Equation (2.3), uniformly on p, q ∈ K. Notice that whether this two points were both contained in one of the two compact set K j or K ℓ , then the proof is already complete since these two compact sets are contained in U ′ j and U ′ ℓ respectively. As a consequence, it only remains to discuss the case with p ∈ K j \ U ′ ℓ and q ∈ K ℓ \ U ′ j . For any u ∈ D(B(0, 1)), by triangular inequality, We separately estimate |A| and |B|. First, on account of the choice of r and of a, we have, by Equation uniformly for a, q ∈ K ℓ and for λ ∈ (0, 1]. Moreover, notice that as a consequence of the estimate The estimate for |A| requires some more steps: first of all, we notice that in |A| the test-function is centred at φ(q). Nonetheless, we can center it at the point φ(a) by exploiting the argument used in the proof of [CZ20, Prop. 6.2]. This is achieved by noticing that On account of this and of the coherence on U ′ j , we get By definition ofũ, Hence, we get where, in the last inequality we set α With a bound analogous to Equation (2.5), we conclude Finally, on account of Equations (2.4), (2.6) and (2.7), setting α uniformly on p, q ∈ K. This concludes the proof of the claim. In order to conclude the proof of the proposition, we distinguish two cases. On the one hand, if the open set U ′ is bounded, then the proof is complete since U ′ can be covered by a finite number of open sets U i ∈ A and it suffices to iterate the above procedure for a finite number of times. On the other hand, if U ′ is unbounded, it suffices to notice that for any compact set K ⊂ U ′ , there exists a finite subset J ⊂ I of indices such that K ⊂ ∪ j∈J U ′ j . As a consequence, we can get a coherence parameter α U ′ K for K by iterating the above claim a finite number of times.
We give two simple examples of coherent germs on a smooth manifold.
Example 14: A simple example of coherent germ of distributions on a smooth manifold M is the following. Consider a distribution t ∈ D ′ (M ) and set F p := t for any p ∈ M . Since F p − F q = 0 for any p, q ∈ M , we conclude that, on any U ⊂ M , {F p } p∈M is coherent with any parameters (α U , γ).
Example 15: Notice that our construction is a generalization of the one of [CZ20]: indeed, we recover their construction if we consider the case M = R d endowed of the trivial atlas (R d , Id). As a consequence, all examples discussed in [CZ20], such as Taylor polynomials, are coherent germs with respect to this atlas.
Eventually, we introduce a homogeneity parameter for coherent germs of distributions.
Theorem 18: Let M be a d-dimensional smooth manifold and let A = {(U j , φ j )} j be an atlas over M . Let γ > 0 and let F = (F p ) p∈M be a γ-coherent germ of distributions on (M, A). There exists a unique distribution RF ∈ D ′ (M ) such that, for any (U, φ) ∈ A, φ * (RF ) ∈ D ′ (φ(U )) and it satisfies, for any compact set K ⊂ U and for any h ∈ D(φ(U )), To this end, we fix a compact K ⊂ U ∩ V and we notice that for any g ∈ D(φ(U ∩ V )), By construction, on the one hand |A| λ γ uniformly on p ∈ K and λ ∈ (0, 1]. On the other hand, again uniformly on p ∈ K and for λ ∈ (0, 1], where in the first inequality we exploited Jac(φ • ψ −1 ) ∞ 1, whereas in the last inequality we used the defining inequality, Equation (3.2), of (RF ) ψ(V ) together with (φ • ψ −1 ) * g ∈ D(ψ(V )). Summarizing, uniformly on p ∈ K and λ ∈ (0, 1]. Finally, being γ > 0, Applying Lemma 23 to the distribution T : This concludes the proof.
In the next theorem we investigate in detail the dependence of the reconstructed distribution on the atlas. In particular, we shall prove that given a germ of distributions which is γ-coherent, with γ > 0, then the reconstruction is independent of the atlas.
Theorem 19: Let M be a d-dimensional smooth manifold and let A and A ′ be two atlases over M . Let γ > 0 and let F = {F p } p∈M be a γ-coherent germ of distributions. Denote with R A F ∈ D ′ (M ) and R A ′ F ∈ D ′ (M ) the reconstructed distributions associated with the germ F with respect to A and A ′ , as per Theorem 18. Then R A F = R A ′ F , i.e., the reconstruction is independent of the atlas.
Proof. Exploiting Theorem 18, we can associate with the germ F and with the atlas (M, A) the global distribution R A F ∈ D ′ (M ), which is identified by means of Theorem 22 by the family ). Moreover, let (U ′ , φ ′ ) ∈ A ′ be a local chart in the atlas A ′ . On the one hand, we can introduce the distribution φ ′ * (R A F ) ∈ D ′ (φ ′ (U ′ )) while, on the other hand, as a consequence of Theorem 18 applied with reference to the atlas A ′ , it descends that the distribution Recalling the notion of distribution on a manifold, cf. Appendix A, to conclude the proof of this theorem it suffices to show that To this end, there exists a family of local charts {(U i , φ i )} i∈I ⊂ A, for some index set I, such that We consider the restriction of Equation (3.4) to a subset U ′ i . Recalling that coherence is stable with respect to restrictions, entailing by uniqueness Via a partition of unity argument, this yields Equation (3.4). For any compact set K ⊂ U ′ i and for any h ∈ D(φ ′ (U ′ i )), it holds, uniformly on p ∈ K and λ ∈ (0, 1], where in the first inequality we performed a change of coordinates whereas in the last inequality we exploited that R A F reconstructs F with respect to the atlas A. Finally, we recall that on account of Theorem 18, being γ > 0, ( is the unique distribution satisfying Equation (3.6). Hence, Equation (3.5) holds true by uniqueness. This concludes the proof.
Eventually, we discuss the reconstruction theorem for the case of γ-coherent germs of distributions with γ ≤ 0.
Theorem 20: Let M be a d-dimensional smooth manifold and let A = {(U j , φ j )} j∈J be an atlas over M , with J index set. Let γ ≤ 0 and let F = (F p ) p∈M be a γ-coherent germ of distributions on (M, A). There exists a distribution RF ∈ D ′ (M ) such that, for any (U, φ) ∈ A, φ * (RF ) ∈ D ′ (φ(U )) and it satisfies, for any compact set K ⊂ U and for any h ∈ D(φ(U )), uniformly for p ∈ K and λ ∈ (0, 1]. This distribution RF ∈ D ′ (M ) is non-unique.
Proof. The proof of this theorem is similar in spirit to that of Theorem 18 and it uses a localized version of the same result of [CZ20]. As a consequence, we only sketch the proof. Moreover, we only discuss the case γ < 0, the proof of the case γ = 0 being analogous. As a consequence of [CZ20,Thm. 4.4], for any (U, φ) ∈ A there exists a distribution (RF ) φ(U) ∈ D ′ (φ(U )) such that, for any compact set K ⊂ U and for any h ∈ D(φ(U )) uniformly for p ∈ K and for λ ∈ (0, 1]. Notice that, being γ < 0, this distribution is non-unique. Nonetheless, we can choose for any (U j , φ j ) ∈ A a reconstructed local distribution (RF ) φj (Uj ) ∈ D ′ (φ j (U j )). We can now introduce a partition of unity {ρ j } j∈J subordinated to the covering {U j } j∈J of the manifold M . In this way, similarly to [CZ20, Sect. 11], one can construct a global reconstructed distribution R A,ρ ∈ D ′ (M ) satisfying the bound of Equation (3.7). We underline that this distribution is non-unique. Indeed, it depends on the the choice of the local reconstructed distributions (RF ) φj (Uj ) ∈ D ′ (φ j (U j )), on the atlas A and on the partition of unity {ρ j } j∈J . The dependence on the partition of unity is a consequence of the lack of the overlapping condition, cf. Equation
A Distributions on Smooth Manifolds
In this appendix we shall recall some basic notions and results regarding distribution theory on smooth manifolds, in order to keep the paper self-contained. In particular, adopting [Hör03] as a reference, we shall recall the definition of distribution on a smooth manifold M and we recall also a very useful characterization of this concept, which we use extensively in the main body of the paper, in particular in the proof of the reconstruction theorem.
Definition 21: Let M be a d-dimensional smooth manifold. For any local chart (U, φ) on M , let t φ(U) ∈ D ′ (φ(U )) be a distribution satisfying the overlapping condition; we call the family t φ(U) a distribution t on the manifold M , denoting t ∈ D ′ (M ). The next theorem is very useful since it allows one to verify the overlapping condition, Equation (3.3), only on one atlas in order to construct a global distribution in D ′ (M ), instead of considering all possible local charts over M .
Theorem 22: [Hör03,Theor. 6.3.4] Let M be a smooth d-dimensional manifold and let A = {(U j , φ j )} j∈J be an atlas for M . Assume moreover that for any local chart (U, φ) ∈ A there exists a distribution t φ(U) ∈ D ′ (φ(U )) such that the overlapping condition holds true for any pair of local charts (U, φ), (U ′ , φ ′ ) ∈ A. Then there exists one and only one distribution t ∈ D ′ (M ) whose local representatives coincide with the given family. Now we prove a standard result of distribution theory which we use in the main body of the paper.
Lemma 23: Let U ⊂ R d be an open set and let T ∈ D ′ (U ) be a distribution such that, for any K ⊂ U compact and any h ∈ D(K) with $\int \mathrm{d}x\, h(x) = 1$, |T (h λ x )| → 0 for λ → 0 + , uniformly for x ∈ K. Then, for any f ∈ D(U ), T (f ) = 0.
Proof. Let K ⊂ U be a compact set. On account of the hypotheses, for any test-functions u, h ∈ D(K) with $\int \mathrm{d}x\, h(x) = 1$, u * h λ → u in D for λ → 0 + , where * denotes the convolution. It follows, by sequential continuity of T , that T (u * h λ ) → T (u) for λ → 0 + . Furthermore, $T(u * h^{\lambda}) = \int \mathrm{d}x\, u(x)\, T(h^{\lambda}_{x})$, so that $|T(u * h^{\lambda})| \le \|u\|_{L^{1}} \sup_{x \in K} |T(h^{\lambda}_{x})|$. Finally, on account of the hypothesis, we have sup x∈K |T (h λ x )| → 0 for λ → 0 + , implying T (u) = 0. Since this argument holds true for any K ⊂ U compact, this proves the thesis.
B Coherence on an Open Set
In this appendix we discuss the notion both of coherent germ of distributions on an open set U ⊂ R d and of enhanced coherence on an open set. This "local discussion" will be useful to prove enhanced coherence on a smooth manifold.
We start by introducing the notion of coherence on an open set U ⊂ R d .
Definition 24: Let U ⊂ R d be an open set and let γ ∈ R. We say that a germ of distributions F = {F x } x∈U is γ-coherent on U if the coherence bound holds uniformly for z, y ∈ K and for ε ∈ (0, D K /4], for every compact set K ⊂ U , where D K := dist(∂U, K). Here ∂U denotes the boundary of U .
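By analogy with Definition 30 below and with the coherence condition of [CZ20], the bound in question plausibly reads: there exists $\alpha_K \le \min\{\gamma, 0\}$ such that, for a fixed test function $\varphi \in \mathcal{D}(B(0,1))$ with non-vanishing integral,
$$\bigl|(F_z - F_y)(\varphi^{\varepsilon}_{y})\bigr| \;\lesssim\; \varepsilon^{\alpha_K}\,\bigl(|z - y| + \varepsilon\bigr)^{\gamma - \alpha_K}, \qquad z, y \in K,\ \varepsilon \in (0, D_K/4];$$
this schematic form suppresses the precise choice of centring and constants.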
Remark 25: Since ∂U is a closed subset and K is a compact subset with ∂U ∩ K = ∅, then D K := dist(K, ∂U ) > 0.
Remark 26: In the previous definition, with respect to the case of a smooth manifold, cf. Definition 4, we exploited the argument of Remark 6 for the supremum among the possible values taken by the scaling parameter ε, i.e., ε ∈ (0, D K /4]. In particular, this choice of the supremum is convenient for the following discussion.
Remark 27: Henceforth, we shall use the following notation. Given a compact set H ⊂ R d and ε > 0, we denote with H ε the ε-enlargement of H, which is the set H ε := {z ∈ R d : |z − x| ≤ ε for some x ∈ H}. Notice that H ε is compact.
The idea at the base of enhanced coherence is that of removing from the notion of coherence the dependence on the test-function ϕ. This can be achieved working in the same spirit of [CZ20], i.e., extending the class of test-functions at the price of suitably modifying some coherence parameters. As a premise, we state the following proposition. Then, for any compact set K ⊂ U and any r > −α K , the enhanced coherence bound holds uniformly for z, y ∈ K, ε ∈ (0, D K /4] and ψ ∈ D(B(0, 1)), where B(0, 1) ⊂ R d denotes the unit ball centred at the origin.
Proof. The proof follows the same lines of [CZ20, Prop. 13.1]. As above, we do not report it. We highlight only the main difference. This lies in the fact that when we consider the enlargement of the compact set K we need to make sure that this is still contained in U . This is guaranteed by the definition of D K . This, together with the notion of coherence on U , guarantees the existence of the coherence parameters associated with the D K /2-enlargement of K. Proposition 29 shows that coherence implies enhanced coherence. The inverse implication holds true trivially. As a consequence, we have the following equivalent definition of coherence on an open subset U ⊂ R d .
Definition 30: Let U ⊂ R d be an open subset and let γ ∈ R. We say that a germ of distributions F = {F x } x∈U is γ-coherent on U if for any compact set K ⊂ U there exists a real number α K ≤ min{γ, 0} such that, for any r > −α K , the enhanced coherence bound holds uniformly for z, y ∈ K and for ε ∈ (0, D K /4], where D K := dist(∂U, K), for any ψ ∈ D(B(0, 1)).
|
2020-12-03T02:47:47.183Z
|
2020-12-02T00:00:00.000
|
{
"year": 2021,
"sha1": "eafc70aa4cf987ecf23a2adbfbc621b17301a2a5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2012.01261",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fa2275a3b3f96f15437248f42da3bc2f12b7d737",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
255644800
|
pes2o/s2orc
|
v3-fos-license
|
Study on Adsorption Characteristics of Deep Coking Coal Based on Molecular Simulation and Experiments
To study the effect of high temperature and high pressure on the adsorption characteristics of coking coal, Liulin coking coal and Pingdingshan coking coal were selected as the research objects, and isotherm adsorption curves at different temperatures and pressures were obtained by combining isotherm adsorption experiments and molecular dynamics methods. The effect of high temperature and high pressure on the adsorption characteristics of coking coal was analyzed, and an isothermal adsorption model suitable for high-temperature and high-pressure conditions was studied. The results show that the adsorption characteristics of deep coking coal can be well characterized by the molecular dynamics method. Under a supercritical condition, the excess adsorption capacity of methane decreases with the increase of temperature. With the increase of pressure, the excess adsorption capacity rapidly increases in the early stage, temporarily stabilizes in the middle stage, and decreases in the later stage. Based on the classical adsorption model, the adsorption capacity of coking coal under high-temperature and high-pressure environments is fitted. The fitting degree ranges from good to poor. The order is D–R > D–A > L–F >BET > Langmuir, and combined with temperature gradient, pressure gradient, and the D–R adsorption model, it can be seen that after 800 m deep in Liulin Mine and 400 m deep in Pingdingshan Mine, the adsorption capacity of coking coal to methane decreases with the increase of depth.
INTRODUCTION
The development and utilization of coking coal plays a vital role in the development of China's coal industry and is an indispensable resource for China's economic construction. 1 In recent years, with the increase in mining depth, the environmental conditions of high ground temperature and high gas pressure have appeared in the storage of coal. 2,3 Coking coal mines that originally belonged to the low-gas category have been upgraded to high-gas or even outburst mines. For a long time, the focus of gas control has mainly been on highly metamorphic coal represented by anthracite, and there are few studies on coking coal. In particular, lack of research on the gas adsorption characteristics of coking coal under high-temperature and high-pressure conditions seriously restricts the development of coalbed methane, gas disaster control, and safe mining in coking coal mining areas. 4−6 Therefore, it is necessary to study the adsorption characteristics of coking coal on methane under high-temperature and high-pressure environments.
Many scholars have performed much research on the methane adsorption characteristics of coal through laboratory tests, theoretical analyses, and numerical simulations. In these experimental tests, the main focus has been on the effect of constant temperature and constant pressure, especially singlefactor temperature and pressure, on the gas adsorption characteristics. 7, 8 Levy, Bustin, Sakurovs et al. 9−11 found that the adsorption amount of gas on the coal surface increased with increasing pressure and decreased with increasing temperature. Zhaofeng et al. 12 studied the adsorption/ desorption characteristics of anthracite from 244.15 to 304.15 K and found that the lower the temperature, the greater the gas adsorption capacity of coal. When the temperature is constant, the methane adsorption capacity increases with increasing gas pressure, but there is a limit to the gas adsorption capacity for a certain quality coal sample. Liu Gaofeng 13 tested the gas adsorption capacity of anthracite at a pressure of 0−10 MPa and found that the rate of change of the adsorption capacity showed a rapid decrease in the early stage, a slow decrease in the middle stage, and a stable later stage as the pressure increased. Zhang 14 conducted adsorption experiments on dry anthracite coal samples with different particle sizes and anthracite coal samples with equilibrium water and found that with increasing pressure, the adsorption capacity did not always increase, but when the pressure reached a certain value, the adsorption capacity decreased instead. With an increase in the coal seam mining depth, the temperature and pressure of the coal seam will increase significantly, which together affect the gas adsorption characteristics of coal, thus restricting the development of deep coalbed methane, gas disaster control, and safe mining. Therefore, it is of great significance to study the combined effect of temperature and pressure on the adsorption of methane by coking coal.
Coal is a complex porous medium, and the adsorption of gas on the pore surface is mainly physical adsorption. 15 Many scholars have conducted research on this issue and proposed a series of adsorption models. The commonly used adsorption models include the Langmuir adsorption model based on single-molecule adsorption theory, the Langmuir−Freundlich (L−F) adsorption model and the BET adsorption model based on bilayer adsorption theory, and the Dubin−Radushkevich (D−R) and Dubin−Astakhov (D−A) adsorption models based on adsorption potential theory. 16 Pan et al. 17 studied the adsorption relationship between coal and methane with different degrees of metamorphism at different temperatures and pressures and found that coal with different degrees of metamorphism showed different adsorption capacities and adsorption isotherms. Divo-Matos 18 studied the isotherm adsorption model of high-pressure gas and found that the adsorption isotherm model derived from the Redlich−Kwong equation can better characterize the adsorption behavior under a high-pressure environment. Xie 19 studied the methane adsorption characteristics in the range of 253.15−293.15 K and found that the adsorption model based on the adsorption potential theory could well characterize the methane adsorption behavior; Lu et al. 20 studied the methane adsorption characteristics of tectonic coal in the Huaibei coalfield and found that coals with different metamorphic degrees have different suitabilities for the same adsorption model; Dengand others 21 studied the methane adsorption heat of coal and found that the D−A adsorption model has the best fitting effect in the low-pressure range; Hou et al. 22 studied the change of the adsorption force of high-temperature and highpressure gas on the coal surface and found that under the action of high temperature and high pressure, the Langmuir equation can still be used to accurately describe the gas adsorption process of coal. Whether the traditional model or the deduced adsorption model can well characterize the adsorption characteristics of coking coal under the joint influence of high temperature and high pressure is still controversial and has considerable room for development. Therefore, it is of great significance to find an adsorption model considering both temperature and pressure.
The most fundamental reason for the change in the macroscopic properties of coal is the change in its microstructure. Therefore, many scholars have explored the adsorption characteristics of coal for methane from a microscopic perspective. 23−25 Hu 26 studied the adsorption and diffusion of methane and other small molecular gases on coal by molecular simulation and found that the adsorption isotherm obtained by molecular simulation was similar to the experimental results. Khaddour 27 studied the adsorption of pure methane in activated carbon by combining classical canonical integrated Monte Carlo molecular simulation and gravimetric-based isotherm adsorption experiments and found that the combination of the two can completely characterize the activated carbon substrate and its methane storage capacity. Song 28 simulated the adsorption of methane by coal molecules and compared it with the adsorption of methane on graphene and found that the adsorption of methane by macromolecular vitrinites mainly depends on the adsorption site, adsorption site orientation, and adsorption orientation. Both the data and the adsorption data are in good agreement with the Langmuir and DA isotherm adsorption models. The above research shows that molecular simulation has an important theoretical value for gas adsorption research, and the combination of experiments and molecular simulation can better characterize the adsorption behavior of coal.
In summary, to study the adsorption characteristics of coking coal under high-temperature and high-pressure conditions, a typical coal macromolecular model was established and optimized by molecular simulation software, and the optimized model with the lowest energy was selected to simulate adsorption isotherms under different temperature conditions. Methane adsorption was measured and corrected by isothermal experiments. The isothermal adsorption model suitable for high temperature and high pressure was obtained by comparing the commonly used adsorption models. Through the combination of molecular simulation and experimental methods, the methane adsorption characteristics of coking coal are quantitatively analyzed, the influence of high temperature and high pressure on the methane adsorption of coking coal is revealed, and the methane adsorption capacity of coal seams at different depths is predicted, which provides a theoretical basis for the safe mining of deep coal seams in coking coal mining areas and the prevention and control of gas disasters.
Collection and Preparation of Coal Samples.
North China, East China, and Central South China are important coking coal production and reserve bases in China. The coking coal in Liulin, Shanxi, has low ash content and excellent quality and is a rare type of conventional coking coal. The Xingwu Coal Mine is located 6 km east of Liulin County, Shanxi Province, and the average thickness of the coal seam is 0.89 m. The Pingdingshan mining area of Henan Province is the main coking coal production base in Central South China, with rich coking coal reserves. The Pingdingshan No. 12 Mine is located in the middle and west of the Pingdingshan coalfield of Henan Province and is a severely outburst-prone mine. Based on adsorption isotherm experiments and molecular simulations, this paper explores the gas adsorption characteristics of deep coking coal in the above two main coking coal producing areas (Figure 1). The collected coal samples were pulverized and sieved to a particle size of less than 0.2 mm for determination of the basic parameters. The measurement results of the basic parameters of the experimental coal samples are given in Table 1, which lists the TRD and ARD (the true density and apparent density) of the coal samples. The true density is obtained by dividing the powder mass (W) by the true volume V t , i.e., the volume excluding the voids inside and outside the particles. The mass per unit apparent volume of an object is called the apparent density.
Φ represents porosity. Porosity is the ratio of the pore volume of coal to the total volume of coal and can also be expressed by the pore volume (cm 3 /g) contained in the unit mass of coal. Porosity is equal to (true density − apparent density)/(true density) × 100%.
M ad is the moisture in coal; A ad is the ash content in coal; and V ad is the volatile matter in coal. These constitute the fundamental basis for understanding the nature of coal. In the simulations, the temperature ranges from 303.15 to 363.15 K, the gas pressure is in the range of 0−20 MPa, the adsorbent is the optimized coking coal stereomolecular model, and the adsorbate is methane gas molecules. A total of eight sets of simulation experiments were carried out. The parameter settings of the coking coal molecular simulation are shown in Table 2. When the number of simulated configurations reaches a certain value and the system enters an equilibrium state, the simulation ends.
Experimental Scheme of Isothermal Adsorption.
The experiments use the Hsorb-2600 high-temperature and high-pressure gas adsorption instrument to carry out the isothermal adsorption experiments on coking coal under high-temperature and high-pressure conditions. The experimental equipment is shown in Figure 2.
The instrument can work in the temperature range of 77.15−873.15 K and the pressure range of 0.04−20 MPa, and the temperature and pressure for the coal−methane adsorption/desorption measurements can be chosen freely. Compared with common gas adsorption instruments in terms of temperature, pressure, accuracy, and other parameters, this instrument covers a wider testing temperature range and can analyze more types of samples. According to the survey data, 29,30 the average geothermal gradient in the Liulin mining area is 278.05 K/100 m, and the average reservoir pressure gradient is 0.76 MPa/100 m; the average geothermal gradient in the Pingdingshan mining area is 281.95 K/100 m, and the average reservoir pressure gradient is 0.86 MPa/100 m. Combined with the purpose of the research and the capabilities of the experimental device, the maximum test pressure in this paper is 11 MPa, and the test temperatures are 303.15, 323.15, 343.15, and 363.15 K. In the experiment, coking coal with a particle size of 3−6 mm was selected; the coal sample was dried and placed into a sample tube, and the sample tube was installed in the sample test area of the instrument. The maximum adsorption equilibrium pressure was set to 11 MPa, and the experimental temperature was 303.15 K. The instrument automatically records the adsorption amount when adsorption equilibrium is reached, thereby providing the adsorption data at each adsorption equilibrium pressure point, and the experiment ends. The above experimental steps were repeated to carry out coking coal adsorption experiments at temperatures of 303.15, 323.15, 343.15, and 363.15 K.

The molecular structural models of the two coking coals are shown in Figure 3. The model structures of the two are mainly composed of aliphatic side chains, aliphatic bridge bonds, and aromatic skeleton structures. The correct combination of the types of aromatic carbon atoms and the distribution of the aliphatic carbon structure can reasonably characterize the molecular structure of coking coal. Therefore, this paper directly cites these molecular structures of coking coal for further study.
Coal Molecular Optimization and Boundary Determination.
Geometric optimization and annealing optimization are performed on the initial model using the Geometric Optimization and Anneal items in Materials Studio. The precision is set to Fine, the force field is set to COMPASS, and the charge assignment is set to Use Current. The geometric optimization is usually conducted 5−10 times, and the number of annealing cycles is set to 10. After each cycle, a molecular configuration is output, and the molecular structure is optimized to obtain the geometrically most stable energy configuration and the annealed most stable energy configuration. 33 The lowest-energy structure of the coking coal molecule after geometry optimization (Figure 4a) and annealing (Figure 4b) is shown in Figure 4. The optimized structural model reaches the lowest energy and the most stable state, with twisting and deformation, a good three-dimensional structure, and all aromatic layered structures existing in parallel, overlapping form. To clarify how the coking coal molecular structure model changes upon simulated annealing, the potential energy parameters of the model before and after annealing were compared. The results are shown in Tables 3 and 4.

Figure 5. Relationship between total potential energy and calculated density.
Both the bonding and nonbonding energies of the annealed coking coal molecular structure were significantly reduced. In terms of the nonbonding energy, the van der Waals energy is smaller than the electrostatic energy, which indicates that the reduction in the van der Waals energy determines the reduction in the nonbonding energy. The reduction in the bonding energy is due to large reductions in the bond-stretching energy, bond-angle energy, torsion energy, and inversion energy, because the molecular structure model of coking coal shows obvious twisting and deformation after annealing optimization.
Best Density Choice.
Density is one of the most basic physical properties of coal and rock, and it is an important basis for evaluating whether the coal rock structure model is reasonable. 34 The periodic boundary conditions of coking coal macromolecules are established by the Amorphous Cell calculation module in Materials Studio. 35,36 The simulated density was set in the range of 0.5−1.5 g/cm 3 . To choose the optimal density, the density increase step in the range of 0.5−1 g/cm 3 was set to 0.1 and the density increase step in the range of 1−1.5 g/cm 3 was set to 0.05.
On the basis of the above density settings, the cell size is continuously adjusted to obtain the variation law of the potential energy of the structural model under different periodic conditions, that is, the relationship between density and potential energy.
As shown in Figure 5, the relevant literature 37,38 indicates that the density of the lowest-energy configuration cannot represent the true density of coal, and the local energy minimum after the lowest-energy configuration should be taken as the density of coal under formation conditions. The simulation results show that when the molecular density of LX coking coal is 1.0 g/cm 3 , the energy reaches the lowest point; when the density is 1.3 g/cm 3 , the energy increases sharply. Therefore, the final density of the LX coking coal molecular model is 1.3 g/cm 3 . When the molecular density of PS coal is 0.9 g/cm 3 , the energy reaches the lowest point; when the density is 1.3 g/cm 3 , the energy increases sharply. Therefore, the final density of the PS coking coal molecular model is 1.3 g/cm 3 . According to the related literature, 39,40 the measured density range of coking coal is 1.15−1.50 g/cm 3 . Therefore, the simulated molecular densities of the LX and PS coking coals are reasonable. The crystal structure model is shown in Figure 6.

Construction of the Adsorbate.

The best energy configuration of the adsorbent under the periodic boundary condition of the coking coal molecular structure is adopted. The adsorbate is CH 4 . First, CH 4 is drawn in the Visualizer module of the MS software. After geometric optimization, energy optimization, and annealing, a CH 4 molecule with neutral surface charge and minimum energy is obtained. The parameter settings are consistent with those used for the coking coal molecular structure in the previous section. The CH 4 molecular structure model is shown in Figure 7, and the resulting isothermal adsorption curves are shown in Figure 8. The excess adsorption amount represents the part of the adsorbed amount corresponding to the adsorption-phase density minus the gas-phase density. It can be seen from Figure 8 that under the conditions of constant temperature and 0−11 MPa pressure, the methane adsorption of the coal samples increases with increasing pressure, which can be roughly divided into two stages: rapid increase and slow increase. The reasons for this phenomenon are as follows: the adsorption equilibrium of gas is a dynamic equilibrium between adsorption and desorption. Under the influence of the adsorption force, free gas is adsorbed, and under the influence of molecular forces, adsorbed gas overcomes the physical adsorption force and desorbs back into free gas.
With increasing adsorption pressure, the probability of gas molecules impinging on the pore surface of the coal body increases, and the adsorption speed is accelerated, resulting in an increase in gas adsorption. As the gas adsorption pressure continues to increase, the distance between molecules decreases, the gas and coal-surface molecules repel each other, and the force between them becomes repulsive. The larger the repulsive force, the less easily the gas is adsorbed on the coal surface, which is reflected in a decreasing gas adsorption rate in this pressure range. At constant pressure, the gas adsorption capacity of coking coal decreases with increasing temperature. This is because the adsorption of gas by coal samples is an exothermic process.
Molecular Simulation Isothermal Adsorption Line.
Because the adsorption amount obtained by simulation is the absolute adsorption amount (Figure 9), whereas the adsorption amount measured in the laboratory is the excess adsorption amount, the two must be made comparable. To verify the feasibility of the model, the absolute adsorption amount obtained by simulation should be transformed into the excess adsorption amount, and the simulation results and experimental results should then be compared and analyzed.
Conversion of Absolute Adsorption Capacity.
According to the definition of absolute adsorption capacity, the following relationship (eq 1) exists between the excess adsorption capacity and the absolute adsorption capacity, 41 where V ad is the excess adsorption capacity, cm 3 /g; V ex is the absolute adsorption capacity, cm 3 /g; ρ g is the density of the gas phase, g/cm 3 ; and ρ a is the density of the adsorbed phase, g/cm 3 . According to eq 1, the density of the free phase (ρ g ) and the density of the adsorbed phase (ρ a ) must be determined to achieve the conversion from excess adsorption to absolute adsorption.
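Although eq 1 is not reproduced above, the textbook relation between the two quantities, consistent with the definitions just given (written here with generic symbols rather than the paper's notation), is
$$ n_{\mathrm{excess}} \;=\; n_{\mathrm{absolute}}\left(1 - \frac{\rho_{g}}{\rho_{a}}\right), $$
i.e., the excess amount equals the absolute amount reduced by the fraction of the adsorbed-phase volume that would in any case be occupied by free gas.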
The free-phase density can be calculated from the ideal gas law PV = nRT. The free-phase density of methane in the temperature range of 303.15−363.15 K and equilibrium pressure range of 0−20 MPa is shown in Figure 10. As seen from Figure 10, since the simulated temperature exceeds the critical temperature of methane, methane transforms from the gaseous state to the supercritical state when the equilibrium pressure is greater than the critical pressure, and the methane density does not change sharply during this phase transition. At the same time, with increasing pressure, the difference in methane density at different temperatures becomes more obvious.

(Note to Table 5: ρ a is the adsorption-phase density, g/cm 3 ; M is the molecular mass of the gas, g/mol; R is the universal gas constant, J/(mol·K); T c is the critical temperature, K; P c is the critical pressure, Pa; ρ lp is the density of the liquid phase at the normal-pressure boiling point, g/cm 3 ; ρ c is the critical density, g/cm 3 ; ρ b is the boiling-point density, g/cm 3 ; T b is the boiling temperature, K; n ex is the excess adsorption capacity, cm 3 /g; n abL is the Langmuir volume of the absolute adsorption capacity, cm 3 /g; and p abL is the Langmuir pressure of the excess adsorption capacity, MPa.)
Under supercritical conditions, the adsorption-phase density cannot be measured directly, so it is mainly obtained by theoretical estimation and equation fitting. The commonly used calculation methods for the adsorption-phase density (Table 5) mainly include the approximation method, the empirical formula method, and the excess-adsorption curve fitting method. 42,43 The choice of the calculation method for the adsorption-phase density has a great influence on the corrected absolute adsorption amount. Therefore, to ensure the accuracy of the corrected absolute adsorption amount, the calculation method for the adsorption-phase density must be chosen carefully.
In the supercritical temperature range, adsorbate molecules lose their average translational energy due to the effect of the adsorption potential but still have high rotational and vibrational energy. Therefore, the density of the adsorbed phase under supercritical conditions should lie between the critical density and the density of the liquid at the atmospheric boiling point. 44 In addition, the absolute adsorption capacity is theoretically monotonic and does not have a maximum value. Therefore, the rationality of the different adsorption-phase density estimation methods in Table 5 can be verified from two perspectives: the value range of the adsorption-phase density and the monotonicity of the absolute adsorption amount. Figure 11 shows the relationship between the methane adsorption-phase density and temperature calculated using the different adsorption-phase density methods. Figure 11 shows that within the temperature range of 303.15−363.15 K, the approximate value of the atmospheric boiling-point density of methane is 0.424 g/cm 3 , and the approximate value of the critical density is 0.163 g/cm 3 . Therefore, the adsorption-phase density of methane should be between 0.163 and 0.424 g/cm 3 . The methane adsorption-phase density calculated by the van der Waals formula is 0.692 g/cm 3 , which is not within this reasonable range. The methane adsorption-phase density obtained by Ozawa's empirical formula method is 0.267−0.231 g/cm 3 , which is within the reasonable range, and this adsorption-phase density decreases with increasing temperature. Since the temperature is 303.15−363.15 K and the pressure is 0−11 MPa, the experimental data do not show a declining section, so linear fitting of the descending part of the curve cannot be used to obtain the adsorption-phase density in this paper. The adsorption-phase density obtained by the L−F regression method is between 0.3 and 0.6 g/cm 3 , which partly exceeds the upper bound of 0.424 g/cm 3 . Therefore, the Ozawa empirical formula was used to calculate the adsorption-phase density of methane in this paper. The comparison between the simulation results and the experimental results is shown in Figure 12. Figure 12 shows that the adsorption capacities of the LX and PS coking coal molecules in the range of 0−11 MPa obtained by molecular simulation follow the same trend as the laboratory test results; the consistency between them is high, and the difference between them decreases with increasing temperature and pressure. It can therefore be considered that the simulation results of the molecular chemical structure model of coking coal are reliable under the chosen parameters. As shown in Figure 13, with increasing pressure, the gas adsorption capacity increases rapidly in the early stage of adsorption. In the middle stage of adsorption, the gas adsorption capacity is temporarily stable. In the later stage of adsorption, the amount of gas adsorption decreases. This is because, in the early stage of adsorption, as the adsorption pressure increases, the density of the free phase increases, the density of the adsorbed phase increases continuously, and the excess adsorption capacity increases accordingly. When the adsorption pressure increases to the point of adsorption saturation, the adsorption-phase density reaches its maximum value, and the excess adsorption amount reaches its maximum at this time, that is, the peak of the isotherm.
In the late stage of adsorption, after adsorption saturation occurs, the pressure continues to increase, and the density of the free phase continues to increase, but the density of the adsorbed phase tends to be stable and does not change, and the excess adsorption must decrease linearly.
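Returning to the adsorbed-phase density correction above, the following is a minimal numerical sketch, assuming the Ozawa empirical form ρ_a = ρ_b·exp[−0.0025(T − T_b)] for the adsorbed-phase density (our reading of the empirical formula method; with T_b ≈ 111.7 K and ρ_b ≈ 0.424 g/cm³ this reproduces the 0.267−0.231 g/cm³ range quoted above) and the ideal gas law for the free-phase density.

```python
import numpy as np

R = 8.314          # universal gas constant, J/(mol*K)
M_CH4 = 16.04e-3   # molar mass of methane, kg/mol
T_b, rho_b = 111.7, 0.424   # CH4 normal boiling point (K) and liquid density there (g/cm^3)

def rho_gas(p_mpa, T):
    """Free-phase methane density from the ideal gas law, in g/cm^3."""
    return (p_mpa * 1e6) * M_CH4 / (R * T) / 1000.0

def rho_ads_ozawa(T):
    """Adsorbed-phase density from the Ozawa empirical formula (assumed form), g/cm^3."""
    return rho_b * np.exp(-0.0025 * (T - T_b))

def absolute_from_excess(n_excess, p_mpa, T):
    """Convert excess adsorption to absolute adsorption via n_ex = n_ab*(1 - rho_g/rho_a)."""
    return n_excess / (1.0 - rho_gas(p_mpa, T) / rho_ads_ozawa(T))

# Example: an excess adsorption of 20 cm^3/g measured at 8 MPa and 303.15 K
print(absolute_from_excess(20.0, 8.0, 303.15))
```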
Isothermal Adsorption Simulation under High Temperature and High Pressure.
When the pressure is the same, the excess gas adsorption of the coal samples decreases with increasing temperature, but the decrease is not obvious in the high-pressure section. This is because the adsorption vacancies in the coking coal molecules are almost saturated in the high-pressure section; increasing the temperature at this point increases the kinetic energy of the methane molecules, and adsorbed methane molecules can escape from the weaker adsorption sites of the coal molecules. In the low-pressure section, however, a large number of adsorption vacancies still exist, the adsorbed methane molecules are strongly attracted by the pore walls, and it is more difficult for them to escape.
Isosteric Heat of Adsorption.
Adsorption heat is the comprehensive result of energy changes in the adsorption process, which reflects the strength of the adsorption of the adsorbate by the adsorbent. It can be used to judge the type of adsorption and analyze the heterogeneity of the adsorbent surface.
At present, the calculation of the adsorption heat mainly relies on direct calorimetry, the Clausius−Clapeyron equation method, gas chromatography, etc. 45 The direct method uses a calorimeter attached to the adsorption equipment to measure the adsorption heat; although the adsorption heat can be measured directly, this method is only suitable for adsorption processes with a large heat effect. Gas chromatography calculates the heat of adsorption by measuring the retention time and retention volume of the gas in coal, but this method causes large errors in determining the volume of gas adsorbed on coal. Generally, the Clausius−Clapeyron equation method is used to calculate the adsorption heat: the pressures corresponding to the same adsorption amount are measured at different temperatures and plotted as ln P against 1/T according to
$$\ln P = -\frac{Q_{st}}{RT} + C,$$
where Q st is the isosteric heat of adsorption, kJ/mol; T is the temperature, K; P is the pressure, MPa; R is the gas constant, 8.314 J/(mol·K); and C is a constant.
Different adsorption amounts were selected for each group of coal samples (the selected adsorption amounts should cover the whole adsorption process, so they differ between coal samples): the adsorption capacity of the LX coking coal is relatively high, so higher values were taken, whereas the adsorption capacity of the PS coking coal is generally low, so lower values were taken. Since the two groups of coal samples do not reach the same adsorption amounts in their respective simulation results at different temperatures, each selected adsorption amount was substituted into the fitted curves in Figure 9 to obtain the corresponding pressure. The logarithm of the obtained pressure was then calculated, Ln P and 1/T were used as the ordinate and abscissa, respectively, and Origin software was used for the fitting. The fitting results are shown in Figure 14. It can be seen from eq 3 that the magnitude of the slope of the fitted line is the ratio of the isosteric adsorption heat to the gas constant, so the isosteric heat is obtained by multiplying the slope by the gas constant. The fitting equations and correlation coefficients are listed in Table 6, which shows that the adsorption heat of the Liulin Xingwu coking coal sample is in the range of 6.7−10 kJ/mol and that of the coking coal sample from the No. 12 Coal Mine in Pingdingshan is in the range of 8.9−12.6 kJ/mol. The adsorption heat over the whole adsorption process is below the upper limit of typical physical adsorption, 40 kJ/mol, 47 whereas the adsorption heat of chemical adsorption lies in the range of 84−600 kJ/mol. Thus, the adsorption of methane by coking coal is physical adsorption, and it is a spontaneous exothermic process. Figure 15 shows that the adsorption heat of the two groups of coking coal for methane increases with increasing adsorption amount, and the relationship between the two is linear. This is due to the heterogeneity of the adsorbent surface and the forces between adsorbate molecules: because of the surface heterogeneity, methane molecules first adsorb preferentially at the highly active sites and then gradually at the weakly active sites; as more methane molecules are adsorbed, the mutual repulsion between them becomes increasingly stronger and an increasing amount of energy is released, which causes the adsorption heat to increase with increasing adsorption amount.
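The isosteric-heat calculation described above amounts to a straight-line fit of Ln P against 1/T at a fixed adsorption amount. The sketch below shows this step with a small set of hypothetical (T, P) pairs; the pressure values are illustrative and are not taken from Figure 9 or Table 6.

```python
import numpy as np
from scipy.stats import linregress

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical (T, P) pairs read off the fitted isotherms at one fixed adsorption amount
T = np.array([303.15, 333.15, 363.15])   # K
P = np.array([1.8, 2.6, 3.5])            # MPa (illustrative values only)

fit = linregress(1.0 / T, np.log(P))     # Ln P = slope*(1/T) + intercept
Q_st = -fit.slope * R / 1000.0           # slope = -Q_st/R; convert J/mol -> kJ/mol

print(f"slope = {fit.slope:.1f} K, R^2 = {fit.rvalue**2:.4f}")
print(f"isosteric heat Q_st = {Q_st:.2f} kJ/mol")
```

For these illustrative numbers the fit yields roughly 10 kJ/mol, i.e. the same order as the values reported in Table 6.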
Classical Adsorption Model and Adsorption Theory.
To accurately describe the methane adsorption characteristics under high-temperature and high-pressure conditions, this paper selected the most representative adsorption models for study: the Langmuir monolayer model, the BET multimolecular-layer model, the L−F adsorption model, and the micropore-filling models based on adsorption potential theory. 48−50
(1) Langmuir monolayer model:

n_ab = n_0·b·p/(1 + b·p)

where n_ab is the excess adsorption capacity, cm3/g; n_0 is the limiting adsorption capacity, cm3/g; b is a constant related to the adsorbent and adsorbate properties and to the temperature, MPa−1; c, n, and D are coefficients of the other models; p is the pressure, MPa; and p_0 is the saturated vapor pressure, MPa.
Comparison of Fitting Effects of Adsorption Models.
In this paper, five representative adsorption models are fitted to the excess adsorption capacity, and the fitting results are shown in Figure 16. The fitting parameters of each model are shown in Tables 7−9. It can be seen from the tables that the Langmuir saturated adsorption amount n_0 and the micropore-filling limiting adsorption amount n_0 follow the same trend as the adsorption isotherms. The adsorption capacity for methane decreases with increasing temperature because gas adsorption on the coal samples is exothermic: as the temperature rises, the activation energy of the methane molecules increases, and the number of methane molecules adsorbed on the coal surface per unit time becomes smaller than the number detaching from the surface over the same period, which appears macroscopically as a decrease in the limiting adsorption capacity. With increasing temperature, the Langmuir pressure gradually decreased, indicating that the adsorption capacity of the inner surface of the coal for methane gradually decreased. According to the correlation coefficients R2 of the models in Tables 7−9, the D−R model gives the best fit to the supercritical adsorption data. Table 10 shows the methane adsorption corresponding to different burial depths in the Xingwu Coal Mine and Pingdingshan Coal Mine.
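For readers who wish to reproduce this kind of model comparison, the sketch below fits the Langmuir form to a hypothetical excess-adsorption isotherm with scipy.optimize.curve_fit; the data points are invented for illustration. Because the monolayer form is monotonic, it cannot reproduce the descending branch of a supercritical excess isotherm, which is consistent with the better performance of the D−R model reported here.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, n0, b):
    """Langmuir monolayer model: n = n0*b*p/(1 + b*p)."""
    return n0 * b * p / (1.0 + b * p)

# Hypothetical excess-adsorption data (pressure in MPa, adsorption in cm^3/g)
p = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], dtype=float)
n_exc = np.array([4.1, 6.8, 8.6, 9.9, 10.8, 11.4, 11.7, 11.8, 11.7, 11.5, 11.2])

popt, pcov = curve_fit(langmuir, p, n_exc, p0=[12.0, 0.5])
n0_fit, b_fit = popt

residuals = n_exc - langmuir(p, *popt)
r2 = 1.0 - np.sum(residuals**2) / np.sum((n_exc - n_exc.mean())**2)

print(f"n0 = {n0_fit:.2f} cm^3/g, b = {b_fit:.3f} 1/MPa, R^2 = {r2:.4f}")
```

The same call, with a different model function, can be reused for the BET, L−F, and micropore-filling forms, so that the R2 values of all five models can be compared on the same data.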
As can be seen from Section 5.2, the D−R model is relatively suitable for describing the supercritical adsorption of coking coal.
When the expression for the pseudo-saturation pressure is substituted into eq 7, the comprehensive adsorption model in terms of temperature and pressure is

V = n_0 exp{−D[T(2 Ln T − Ln P − 12.04)]^2} (9)

As the depth increases, the gas pressure and temperature change with the burial depth h (m) according to the pressure and temperature gradient relations (eq 11).
In eqs 16 and 17, Ln n_0 is treated as the coefficient A. The isothermal adsorption data at different depths in Table 10 are used to calculate the logarithm of the adsorption capacity; Origin is then used to plot and fit the relationship between Ln V and Ln h, from which the coefficients A and D are obtained. n_0 is recovered by exponentiating A, and n_0 and D are then substituted into eqs 14 and 15 to obtain the calculation model of coal-adsorbed gas as a function of depth (Figure 17).
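A minimal sketch of this fitting step is given below. It assumes, purely for illustration, that eqs 16 and 17 reduce to a straight line Ln V = A + B·Ln h, reads A (intercept) and B (slope) off a least-squares fit, recovers n_0 = exp(A), and then predicts the adsorption amount at other depths; the depth and adsorption values are hypothetical stand-ins for Table 10.

```python
import numpy as np

# Hypothetical isothermal adsorption amounts V (cm^3/g) at several burial depths h (m),
# standing in for the Table 10 data.
h = np.array([300.0, 500.0, 700.0, 900.0])
V = np.array([10.5, 11.6, 12.3, 12.9])

# Linear fit of Ln V against Ln h (A = intercept, B = slope), assumed form for illustration
B, A = np.polyfit(np.log(h), np.log(V), 1)
n0 = np.exp(A)
print(f"A = {A:.3f}, slope B = {B:.3f}, n0 = exp(A) = {n0:.2f}")

# Predict adsorption at new depths and report the relative error against hypothetical checks
h_check = np.array([400.0, 800.0])
V_meas = np.array([11.2, 12.5])                    # hypothetical validation points
V_pred = np.exp(A + B * np.log(h_check))
rel_err = np.abs(V_pred - V_meas) / V_meas * 100
for d, vp, e in zip(h_check, V_pred, rel_err):
    print(f"h = {d:.0f} m: predicted V = {vp:.2f} cm^3/g, relative error = {e:.1f}%")
```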
Validation of the Adsorption Model.
Commonly used adsorption models fit shallow coal seams well but fit deep coal seams poorly. However, coking coal mining is increasingly moving to greater depths, so it is particularly important to study how the coal seam gas content varies with burial depth.
The depth-dependent model of the adsorbed gas amount in coking coal derived on this basis therefore has a wide range of applications. To further verify the reliability of the model, eight groups of coking coal isothermal adsorption data were used to validate the model derived in this paper, and error bars were used to analyze the difference between the adsorption amounts calculated by the model and the measured adsorption amounts.
Eight groups of data in Table 10 are selected to verify the model: (1) Using the adsorption data of the four groups of LX and PS coal samples at different depths, the logarithm of the adsorption amount, Ln V, is calculated, Origin is used to plot the relationship between Ln V and Ln h, and the coefficients A and D are obtained (Table 11). (2) The coefficient A is exponentiated to obtain n_0, and n_0 and D are put into eqs 14 and 15 to obtain the calculation models of the gas adsorption amount of coking coal in the two mines as a function of depth. The predicted and measured adsorption amounts are shown in Figure 18, and the error bars are shown in Figure 19. Figure 19 shows that the predicted values for the coking coal samples from the LX and PS mines are in good agreement with the measured values, with relative errors below 10%. This shows that the depth-dependent calculation model of the adsorbed gas amount of coking coal proposed in this paper is of high accuracy and is feasible. The final model is given by eqs 14 and 15 with the fitted coefficients in Table 11.
CONCLUSIONS
(1) The adsorption data obtained by molecular dynamics simulation are in good agreement with the experimental data. The combination of simulation and experiment is well suited to studying the adsorption characteristics of coking coal for methane under supercritical conditions. (2) Under supercritical conditions, the excess adsorption capacity of methane decreases with increasing temperature. With increasing pressure, the change in the excess adsorption capacity of methane is divided into three stages: a rapid increase in the early stage, transient stability in the middle stage, and a decrease in the late stage. Combined with the pressure gradient, the temperature gradient, and the adsorption isotherms, the amount of methane adsorbed by coal at a given depth can be predicted. Below a depth of 800 m at the Xingwu Coal Mine in Liulin, Shanxi, the adsorption capacity decreases with increasing depth; below a depth of 400 m at the Pingdingshan mine in Henan Province, the adsorption capacity likewise decreases with increasing depth.
Soft Electret Gel For Low Frequency Vibrational Energy Harvesters
A soft electret material was obtained by solidifying ionic liquid in a polymer network and immobilizing cations (+) on the surface. When a piece of soft electret gel is sandwiched between a pair of electrodes, a large amount of charge is induced in an electrical-double-layer capacitor (1.0-10 μF/cm2) appearing at the interface of the electrode and the ionic liquid gel without an external voltage source. By retracting the electrode repeatedly, we obtained a current output of a few μAp-p/cm2 stably.
Introduction
There has been intensive investigation on MEMS vibrational energy harvesters due to their high applicability to IoT (Internet of Things) [1]. These inertial harvesters can be divided into three categories: piezoelectric, electromagnetic and electrostatic. Compared to the bulky electromagnetic type and limited-frequency-ranged piezoelectric type, electrostatic energy harvesters have several advantages. In particular, separating the spring and the power generation regions provides better design options for optimized performance. Although low frequency vibration (<50 Hz) is abundant in our daily life, e.g. motions by human body, infrastructure and transportation vehicles, most studies using electrostatic energy harvesters have been aiming at high frequency, e.g. vibrations generated by compressors or engines. The mismatch seems to stem from the fact that harvesters based on conventional MEMS technology are too fragile to achieve high-output power (>100 µW) at low frequency [2]. Therefore, to harvest energy at low frequencies while maintaining the advantages of electrostatic energy harvesters, it is necessary to develop a new method for capacitive power generation.
A triboelectric nanogenerator can obtain high-output power from low-frequency human motion [3]. However, compared to conventional devices, this type of device requires large external mechanical energy input such as pushing on a buzzer or stepping on the floor. In order to overcome this limitation, we employ a novel energy harvesting technique that changes the contact area between the soft electret gel having immobilized cations on the surface and the electrode. In this study, we demonstrated contact/retract operation of the electrode as a verification of our new power generation principle. This work improves the harvester based on the electrical double layer of ionic liquid reported at PowerMEMS 2014 [4].
Characteristics of the ionic liquid
Ionic liquids are composed of cations (+) and anions (-) with no other diluting solvent, and a wide variety of ionic pairs can be combined as desired. These ionic liquids have various unique characteristics, e.g. extremely low vapor pressure, resistance to high temperature, and the formation of an electrical double layer when a voltage is applied across the material. Among these characteristics, we focused on the formation of an electrical double layer within the applied voltage range called the potential window (Figure 1a). An ionic liquid works as an insulator in the potential window, as it forms a 1 nm-thick electrical double layer at the interface between the ionic liquid and the electrode. Thus, it is capable of generating quite high capacitances, on the order of 10 µF/cm2 (Figure 1b).
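A simple parallel-plate estimate makes the quoted magnitude plausible: with the ~1 nm double-layer thickness mentioned above and an assumed relative permittivity of about 10 for the ionic liquid (an assumption, not a measured value from this work), the capacitance per unit area comes out on the order of 10 µF/cm2.

```python
# Parallel-plate estimate of the electrical-double-layer capacitance per unit area.
# The ~1 nm double-layer thickness comes from the text; the relative permittivity
# of the ionic liquid (~10) is an assumed, typical value.
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 10.0          # assumed relative permittivity of the ionic liquid
d = 1e-9              # double-layer thickness, m

c_per_m2 = eps0 * eps_r / d        # F/m^2
c_per_cm2 = c_per_m2 * 1e-4        # F/cm^2
print(f"C/A = {c_per_cm2 * 1e6:.1f} uF/cm^2")   # ~8.9 uF/cm^2, i.e. of order 10 uF/cm^2
```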
Principle of power generation
The capacitance is utilized as a variable capacitor whose contact area changes as the electrode moves.
In this way an output current was obtained from the motion; this result was already reported at PowerMEMS 2014. However, this method has two primary problems: (1) electrostatic attraction between the electrode and the ionic liquid prevented the change in contact area that is necessary for power generation, and (2) an external voltage source is needed to form the electrical double layer. Therefore, this work is a further extension of the previous study on utilizing the electrical double layer of an ionic liquid while mitigating the aforementioned limitations. The soft electret material was obtained by solidifying the ionic liquid and immobilizing cations on the surface. When a piece of the soft electret gel is sandwiched between a pair of electrodes, a large amount of charge is induced in the electrical-double-layer capacitor appearing at the interface of the electrode and the ionic liquid gel. By retracting the electrode repeatedly, we obtained an output current (Figure 2).
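A rough back-of-the-envelope sketch of the expected output is given below. It treats the gel-electrode contact as a variable EDL capacitor, i(t) = V_eff·c_EDL·dA/dt; the effective surface potential V_eff, the modulated contact area, and the excitation frequency are illustrative assumptions rather than measured parameters of the device.

```python
import numpy as np

# Rough estimate of the short-circuit current from a variable EDL capacitor.
# V_eff (effective potential of the immobilized surface charge), the modulated
# contact area and the excitation frequency are illustrative assumptions.
c_edl = 5e-6        # EDL capacitance per area, F/cm^2 (within the 1-10 uF/cm^2 range)
V_eff = 0.05        # assumed effective surface potential, V
A0 = 0.2            # amplitude of the modulated contact area, cm^2
f = 5.0             # low-frequency mechanical excitation, Hz

t = np.linspace(0.0, 1.0, 1000)
A = A0 * 0.5 * (1 + np.sin(2 * np.pi * f * t))   # contact area vs time, cm^2
i = V_eff * c_edl * np.gradient(A, t)            # current, A

electrode_area = 0.785                           # area of a 10 mm diameter disc, cm^2
i_pp = (i.max() - i.min()) / electrode_area * 1e6
print(f"peak-to-peak current density ~ {i_pp:.2f} uA/cm^2")
```

With these assumed numbers the estimate lands near a few µAp-p/cm2, i.e. the same order as the measured output reported below.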
Materials and manufacturing method
The soft electret gel consists of three materials: a base material, an ionic liquid and an initiator ( Figure 3a). The base material is a fluid polymer with a polymerizable functional group. Accordingly, the ionic liquid has a cation with the same functional group for binding to the polymer. The initiator allows polymerization after mixing the appropriate amounts of constituent materials and exposing to UV light. First, we mixed and put the ionic polymer solution between a pair of transparent ITO (Indium Tin Oxide) electrodes with spacers to define the height of the gel (Figure 3b). We then applied a bias voltage within the solution's potential window (2 V DC) in order to form an electrical double layer. Finally, we exposed the sample to UV light (total about 45 min) for immobilizing cation ions on the surface of the gel (Figure 3c) while keeping the bias voltage.
Immobilized cations
Through these processes, we monitored the current between the electrodes using a Source Measure Unit (SMU; Keysight B2900A, Figure 4a). Figure 4b shows the monitored current during cycles with and without UV exposure. This curve shows that the current reduced over time settling down to 0.1 µA/cm 2 , which means that cations were immobilized on the surface gradually. The soft electret gel was fabricated with a diameter of 10 mm and a height of 100 µm (Figure 4c).
Power generation principle verification
We prepared a pair of ITO electrodes and placed a sample of the soft electret gel between them (Figure 5a). Then, we used a Digital Multi Meter (DMM; Agilent 34410A) and LabVIEW-setup to monitor the output current between the electrodes (Figure 5b). The output current was measured using the 10 MΩ setting of DMM. When the gel surface with immobilized cations touched the ITO electrode, we obtained an output current.
Result and discussion
The output current peaked when we touched the cation-immobilized surface with the electrode (Figure 5c).
We measured up to 2 µA p-p /cm 2 from the soft electret gel by using a DMM and LabVIEW-setup. Compared to previous study (PowerMEMS 2014), we were able to achieve an equivalent output current of 2 µA p-p /cm 2 without using any external bias voltage. This means the soft electret gel works as an alternative to the external power supply. In this study, we fabricated a relatively stiff sheet-like gel of about 10mm diameter and about 100 µm thickness. The shape, dimension and stiffness of the gel can be modified as desired allowing for optimization of the output current from a device.
Conclusion
In this study, we have developed a soft electret gel with immobilized cations on the surface. This technique provides robust devices by minimizing fragile regions. Unlike harvesters obtained by the conventional MEMS technology, the soft electret gel method shows superior performance at the low frequency range with low mechanical input power due to the characteristics of the ionic liquid based insulating material between the electrodes. Moreover, eliminating the external bias voltage to form an electrical double layer for high capacitance is crucial for low power applications. As a result, the proposed technology leads to a robust energy harvester at the low frequency range for low power applications such as wearable devices based on human motion. Currently, we are developing a MEMS energy harvesting device using this novel electret gel.
State diagram and the phase transition of $p$-bosons in a square bi-partite optical lattice
It is shown that, in a reasonable approximation, the quantum state of $p$-bosons in a bi-partite square two-dimensional optical lattice is governed by the nonlinear boson model describing tunneling of \textit{boson pairs} between two orthogonal degenerate quasi momenta on the edge of the first Brillouin zone. The interplay between the lattice anisotropy and the atomic interactions leads to the second-order phase transition between the number-squeezed and coherent phase states of the $p$-bosons. In the isotropic case of the recent experiment, Nature Physics 7, 147 (2011), the $p$-bosons are in the coherent phase state, where the relative global phase between the two quasi momenta is defined only up to mod($\pi$): $\phi=\pm\pi/2$. The quantum phase diagram of the nonlinear boson model is given.
Cold atoms and Bose-Einstein condensates in optical lattices provide a versatile tool for exploration of the quantum phenomena of condensed matter physics on one hand, and, on the other hand, a way for creation of novel types of order in cold atomic gases [1].
Two remarkable recent achievements in this direction are the experimentally demonstrated novel types of atomic superfluids in the P- [2] and F-bands [3] of the bi-partite square two-dimensional optical lattice. The bi-partite optical lattice having a checkerboard set of deep and shallow wells (i.e. made of double wells), used in Refs. [2,3], has a large coherence time in the higher bands, several orders of magnitude larger than the typical nearest-neighbor tunneling time [4]. The order parameter of these superfluids is complex, in contrast to the conventional Bose-Einstein condensates having a real order parameter in accord with Feynman's no-node theorem for the ground state of a system of interacting bosons [5,6]. The p-bosons, for instance, are confined to the second Bloch band for a sufficiently shallow lattice amplitude, V_0 ≲ 2.2E_R, where E_R is the recoil energy [4,7]. In Ref. [2] V_0 ≈ 1.55E_R; however, a particular experimental technique was used which results in population of other Bloch bands. Nevertheless, the main results on the cross-dimensional coherence are obtained for the parameter values where the second band is by far the most populated.
The purpose of this work is to show that, in the reasonable approximation, the quantum state of the p-bosons in the square bi-partite optical lattice is governed by the modified nonlinear boson model, which was already used before in the context of cold atoms tunneling between the high-symmetry points of the Brillouin zone [8][9][10]. However, there is an important difference: in the p-boson case there is a lattice asymmetry parameter which provides for the phase transition at the bottom of the energy spectrum, additionally to that at the top of the spectrum, studied before in Ref. [9]. The focus is on the quantum features of the p-boson superfluid, as different from Ref. [11] where a mean-field Gross-Pitaevskii approach was employed and the region of the complex order parameter was found. The nonlinear boson model derived below follows just from two basic conditions: the existence of two quasi degenerate energy states coupled by the boson pair exchange (tunneling) when the single-particle exchange is forbidden. Thus it applies to other contexts as well (see also Ref. [10]). For instance, it is equivalent to the nonlinear part of the so-called fundamental Hamiltonian (in the Wannier basis), describing the local two-flavor collisions in the first excited band of a two-dimensional single-well optical lattice [12]. Moreover, in the case of the optical lattice consisting of the one-dimensional double-wells [13] the many-body Hamiltonian can be cast as a set of linearly-coupled nonlinear boson models. Taking this into account, we consider the quantum features of the derived nonlinear boson model in the most general setting and using its natural parameters, besides analyzing the experimental setting of Ref. [2].
Consider the bi-partite square two-dimensional optical lattice of Ref. [2], which can be cast as follows (after dropping an inessential constant term), where the experimental values of the parameters read V_0 = V̄_0/4 = 1.55E_R in terms of the recoil energy E_R = ℏ²k²/2m, with k = 2π/λ, λ = 1064 nm, and a_z = 71 µm being the oscillator length of the transverse trap. The dimensionless Fourier amplitudes of the lattice are η and ε; see Fig. 1. The experimental parameters are η ≈ 0.95 and ε ≈ 0.81.
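For orientation, the recoil energy quoted above can be evaluated directly from E_R = ℏ²k²/2m. The snippet below does so assuming the atoms are ⁸⁷Rb (an assumption; the species is not restated here), giving E_R of roughly h × 2 kHz at λ = 1064 nm.

```python
import numpy as np

hbar = 1.054571817e-34            # J*s
m = 86.909 * 1.66053907e-27       # mass of 87Rb in kg (assumed species)
lam = 1064e-9                     # lattice laser wavelength, m

k = 2 * np.pi / lam
E_R = hbar**2 * k**2 / (2 * m)    # recoil energy, J

kB = 1.380649e-23
h = 6.62607015e-34
print(f"E_R = {E_R:.3e} J = {E_R / kB * 1e9:.1f} nK * k_B = {E_R / h / 1e3:.2f} kHz * h")
```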
As is found in Ref. [11], the observed cross-dimensional coherence [2] is a joint effect of the lattice anisotropy and the atomic interactions. The periodic Bloch functions u_k(x) are chosen to be normalized on the 2D lattice cell. The band-limited expansion of the boson field operator, Eq. (3), runs over the Bloch indices k inside the first Brillouin zone and the ground state of the transverse trap. Inserting this expression into the standard Bose-Hubbard Hamiltonian for the lattice potential (1) and using the Poisson summation formula, one arrives at the Hamiltonian of Eq. (4), where E(k) is the Bloch energy of the second band and the dimensionless interaction coefficient depends solely on the lattice geometry.
Since the points K 1,2 , the energy minima of the second band, are lying on the edge of the Brillouin zone ( Fig. 1(a)), the Bloch functions ϕ K 1,2 (x) are real. Moreover ∇ k E(k) = 0 and, hence, ∇ k ϕ K 1,2 (x) = 0. As the result, the expansion over k in Eq. (4) in some small neighborhoods about these points starts only with the second-order term ∝ (k − K 1,2 ) 2 [14]. On the other hand, one can verify that the experimental width of the Bragg peaks about the band minima K 1,2 is too narrow to give a significant second-order correction, i.e. Fig. 3 of Ref. [2]). Therefore, we can discard the spectral width of the Bragg peaks and keep in Eq. (3) only the two-mode expansion of the boson field operator (a similar expansion over the two nonlinear modes was also used in Ref. [11]) It is important to note that, since the summation in the nonlinear term of Eq. (4) is conditioned by ∆ k =0 mod(Q), all terms with with either three K 1 and one K 2 , or vice versa are zero (i.e. bosons tunnel between the minima by pairs [8]). Thus, only the following geometric parameters are nonzero: As a consequence, one obtains from Eq. (4) the two-mode Hamiltonian of the nonlinear boson model [8][9][10] except for the term proportional to the population imbalance due to the lattice asymmetry: We have denoted n j ≡ b † j b j . The parameters of Hamiltonian (8) are as follows. The energies of the two symmetric points K 1,2 read where E 1,2 = E(K 1,2 ) is the respective Bloch energy, N = n 1 +n 2 , U is the average interaction parameter per particle, and Λ is a pure geometric parameter defined as Note that at the symmetric point α = α iso we have σ = 0, hence E 1 = E 2 . We have just two independent parameters (γ, Λ), where γ is defined as Here we note that any 2D lattice which for some set of parameters possesses two nonequivalent points lying on the edge of the Brillouin zone and having equal Bloch energies can lead, under similar conditions, to the same model Hamiltonian (8).
The parameters Λ and σ, with 0 ≤ Λ ≤ 1 and −1 ≤ σ ≤ 1, are independent of the interaction strength g and are functions only of the lattice shape. For the experimental lattice (1), their dependence on α and θ can be determined by numerically solving the 2D eigenvalue problem for the Bloch energies; the result is given in Fig. 2. Except for the semicircle-shaped plateau, both parameters vary significantly with variation of the lattice potential. Specifically, for the experimental value θ = 0.53π the parameters Λ and σ and the Bloch energy difference are given in Fig. 3. One phase transition occurs at the bottom of the energy spectrum (and corresponds to the relative phase π); the other one is at the top of the spectrum (and corresponds to the zero relative phase).
For γ = 0 the phase transition at the top of the spectrum was studied before [9,10].
Consider first the number-squeezed states, which appear for the large population imbalance between the points K 1,2 and have a squeezed variance of the population imbalance (see also Refs. [9,10]). For instance, suppose that n 1 ≫ n 2 (i.e. n 1 ≈ N) and denote the respective class of states by B 1 . Following Bogoluibov's approach, one can replace b 1 → √ N − n 2 e iΦ , where Φ is an inessential random phase, and expand the Hamiltonian The Hamiltonian (13) is diagonalizable by the Bogoliubov transformation where β is the squeezing parameter. We havê For n 2 ≫ n 1 the number-squeezed states The existence diagram of the number-squeezed states B 1,2 is shown in Fig. 4(a), their existence is equivalent to existence of the Bogoliubov transformation (14). The states B 1,2 are thermodynamically stable for positive effective mass in Eq. (15), i.e. when 2Λ−1∓γ > 0, which condition is satisfied only in the regions Λ > 1 + γ and Λ > 1 − γ, respectively for B 1 and B 2 . The thermodynamically stable B 1,2 states are shown in Fig. 4(d). Note that the number-squeezed states have undefined relative phase φ = arg( (b † 1 ) 2 b 2 2 )/2 (this is reflected also in arbitrariness of Φ, see also the discussion of the quantum phase below).
Hamiltonian (8) also admits the phase states possessing definite values (i.e. with small variance) of the phase and the population imbalance. These states will be called coherent.
The existence diagram of the coherent states can be found by approximating the Hamiltonian by a quantum oscillator problem in the Fock space [9]. For N ≫ 1 the coherent states are essentially semiclassical in the sense of Ref. [16]. Thus, most of their properties can be studied by replacing the boson operators by scalar amplitudes, b_1 → √(N(1 + ζ)/2) e^{−iφ/2} and b_2 → √(N(1 − ζ)/2) e^{iφ/2}, and considering the resulting classical model (up to corrections of order 1/N). The stable stationary points of the classical Hamiltonian H_cl correspond to the phase states of the quantum model. There are two stationary points, (2φ_t = 0, ζ_t = γ/(3Λ−1)) and (2φ_b = π, ζ_b = −γ/(1−Λ)), and they correspond, respectively, to the coherent phase states at the top (C_0) and at the bottom (C_π) of the quantum energy spectrum (this is clear from their energies).
The direct approach to study the coherent states is based on the discrete WKB in the Fock space, with the effective Planck constant h = 2/N [9]. One first factors out the classical phase φ b,t and then expands the Hamiltonian (8) about the classical stationary point ζ b,t (see also Ref. [17]). Representing the Fock-space "wave function" ψ(ζ) = n, N − n|ψ (here n ≡ n 1 ) with ζ = 2n/N − 1 as ψ = e iφζ/h ψ 0 (ζ) and defining the canonical with ζ momentum asp = −ih∂ ζ we get with a local HamiltonianĤ φ of a quantum oscillator (the discarded terms start with ∼ . The Hamiltonian about ζ b (for the phase 2φ b = π) readŝ while that about the point ζ t (for 2φ t = 0) can be obtained by replacing Λ by 3Λ in the first two terms in Eq. (18) and inverting the sign atp 2 due to the negative effective mass . The existence and stability analysis is straightforward from this point. First of all, the coherent states C 0 , i.e. with the classical phase satisfying 2φ t = 0, are thermodynamically unstable due to the negative effective mass, while the states C π are thermodynamically stable where they exist. The existence diagram of the coherent states is given in Figs. 4(b) and (c). Numerical simulations confirm that the Gaussian width of the oscillator "wave-function" ψ(ζ) reasonably approximates the width of the coherent states in the Fock space.
By considering the characteristic energies (up to ∼ 1/N), in units of UN²/2, of all the above classes of states, i.e. E(B_1,2) = 1 ± γ, E(C_π) = (1 − γ² − Λ²)/[2(1 − Λ)] and E(C_0) = (1 − γ² − 9Λ²)/[2(1 − 3Λ)], one obtains the state diagram of the model (8), see Fig. 4(d). Depending on the values of γ and Λ the ground state is either the coherent state C_π or one of the squeezed states, B_1 or B_2. Let us now consider the state diagram versus the experimental parameter α. To compare the result also to the mean-field diagram of Ref. [11] (see Fig. 5) one has to identify the same interaction parameter (the product of g and the density in Ref. [11]). The quantity gN/Ω can serve as an analog, though one has to remember that we have discarded the atoms of the condensate not represented by the Bragg peaks at the two points K_1,2; thus the resulting approximate value of gN/Ω will be smaller than the actual value and the comparison can be only qualitative. The expressions for the borderlines Λ = 1 ± γ of the state diagram of Fig. 4(d) can be rewritten using Eqs. (9), (10) and (12) in terms of the interaction parameter gN/Ω. The results are presented in Fig. 5, where the energy is given in recoil energy units.
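The borderlines quoted above follow from comparing these characteristic energies. The sketch below classifies the ground state from the quoted expressions only; it uses the borderlines Λ = 1 ± γ directly and ignores the finer existence and stability regions of Fig. 4, so it is a simplified illustration rather than a reproduction of the full diagram.

```python
def characteristic_energy(state, gamma, Lam):
    """Characteristic energies quoted in the text, in units of U*N^2/2."""
    if state == "B1":
        return 1.0 + gamma
    if state == "B2":
        return 1.0 - gamma
    if state == "C_pi":
        return (1.0 - gamma**2 - Lam**2) / (2.0 * (1.0 - Lam))
    raise ValueError(state)

def ground_state(gamma, Lam):
    """Simplified classification using the quoted borderlines Lambda = 1 +/- gamma:
    the coherent state C_pi is the ground state below the borderline, and the
    number-squeezed state with the lower characteristic energy takes over above it
    (B1 and B2 are degenerate at gamma = 0)."""
    if Lam < 1.0 - abs(gamma):
        return "C_pi"
    return "B2" if gamma > 0 else "B1"

for gamma in (-0.3, 0.0, 0.3):
    for Lam in (0.4, 0.8, 1.2):
        gs = ground_state(gamma, Lam)
        E = characteristic_energy(gs, gamma, Lam)
        print(f"gamma = {gamma:+.1f}, Lambda = {Lam:.1f}: ground state {gs}, E = {E:.3f} (U N^2/2)")
```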
Qualitatively we have similar diagram to that of Ref. [11], though the corresponding quantitative value of the interaction parameter gN/Ω is significantly smaller (though the density parameters are not identical, as mentioned above, the difference is still significant). We note, however, that the values of the interaction parameter in Fig. 5 which was then used in the nonlinear part of the many-body boson Hamiltonian to produce the model Hamiltonian (8). For this very reason only the lower part of the figure around the critical α iso belongs to the validity region of the approximation. Finally, an analog of the relative populations of the two modes is the semiclassical imbalance ζ b (defined only for the coherent states). It can be cast as Finally, let us make some comments on the relative phase φ. Why the phase 2φ appears in the classical Hamiltonian H cl (16) is clear: the bosons tunnel by pairs, which is reflected in the splitting of the even and odd subspaces of the Fock space, with the respective basis states |2s, N − 2s and |2s − 1, N − 2s + 1 [9,10]. Since the state of the system is always expanded over the states differing by an even number of bosons, it is impossible to define the phase φ, but only the 2φ: 2φ = arg( (b † 1 ) 2 b 2 2 ). Hence 2φ and not φ appears in the exponent factor in Eq. (17): exp{iφζ/h} = exp{2iφ(n 1 − n 2 )}. The splitting of the Fock space into two subspaces also leads to the double degeneracy of the coherent states (quasi-degeneracy to be precise: the terms of order 1/N are neglected), since the same approximate "wavefunction" in the Fock space ψ(ζ) describes not one but two states, one of each subspace: C 2s = 2s, N − 2s|ψ and C 2s−1 = 2s − 1, N − 2s + 1|ψ with the discrete sets ζ 1 ∈ {(2s − 1)/N − 1} and ζ 2 ∈ {2s/N − 1}.
The mean-field approach, in contrast, produces a definite relative phase, see Ref. [11], where two equivalent order parameters of the nonlinear Gross-Pitaevskii equation are possible for the description of the same experiment with the phase either ±π/2, due to the broken superposition principle by the nonlinearity. However, the full many-body quantum Hamiltonian permits superposition of the eigenstates of the same energy. The resolution of this seemingly paradoxical situation is similar to the case of the random phase in the double-slit experiment with the Bose-Einstein condensate, see Ref. [18]. Indeed, since the atoms are detected one by one coherently from both modes b 1,2 , when the lattice is released, the atom detections probe the quantity b † 1 b 2 spontaneously projecting, as the detection process proceeds, on one of the two possible phases φ b = ±π/2 of C π .
In conclusion, we have shown that the experiment of Ref. [2] is describable by the quantum model (8) and that there is the quantum phase transition of the second order between the atom number-squeezed states and the coherent phase states of the p-bosons.
The results indicate that in the recent experiment [2] a phase transition of the second order was observed, where the isotropic experimental state observed for the symmetric point α = arccosǫ (and hence, for γ = 0) must be the coherent C π state of the relative phase 2φ = π.
Effect of DA-8159, a Selective Phosphodiesterase Type 5 Inhibitor, on Electroretinogram and Retinal Histology in Rabbits
DA-8159, a selective inhibitor of phosphodiesterase type 5, was developed as a new drug for erectile dysfunction. The effect of DA-8159 on the electroretinogram (ERG) and the retinal histopathology were evaluated in rabbits. The ERG was performed prior to, and 1 and 5 hr after DA-8159 (5 to 30 mg/kg) administration. The plasma concentration of DA-8159 was determined at each time point, and retinal microscopic examination was also performed. There was no statistically significant ERG change at any dose or at any time. Though the 30 Hz flicker showed a prolongation of the implicit time at 5 hr after the administration of either DA-8159 15 mg or 30 mg/kg (p<0.05), but concurrent amplitude decreases were not statistically significant. At a dose of 5 mg/kg, no test drug was detected in the blood after either 1 or 5 hr. At either 15 mg/kg or 30 mg/kg, there was a dose-dependent increase in the blood concentration after 1 hr of drug administration, which decreased with time. In light and electron microscopic examinations of the retina, there was no remarkable change at any dose. These results suggest DA-8159 has a low risk potential to the retina, but further evaluation on the visual functions in human is needed.
INTRODUCTION
DA-8159, a selective phosphodiesterase type 5 (PDE5) inhibitor developed by DongA Pharmaceutical Company (Kyunggi, Korea), is an oral agent for treating erectile dysfunction. DA-8159 induces penile erection dose-dependently in both anesthetized and conscious animals. It also induces smooth muscle relaxation and increases the endogenous cyclic guanosine monophosphate (cGMP) level in the rabbit corpus cavernosal smooth muscles (1). The data obtained from phase 1 clinical study showed DA-8159 is safe and well tolerated after a single oral dose in healthy males up to 300 mg without severe adverse effects (unpublished data). However, as with other PDE5 inhibitors, it may inhibit phosphodiesterase type 6 (PDE6) at a higher concentration. The inhibitory concentration of DA-8159 on the PDE6 receptor is 10 times higher than that of the PDE5 receptor.
PDE5 is present in human platelets and vascular smooth muscles. PDE5 inhibition causes a vascular dilatation by blocking cGMP hydrolysis in the vascular smooth muscle. PDE6 is present in retinal photoreceptor cells, and is essential for visual excitation, named phototransduction. The visual excitation begins with the absorption of a photon of light by the pigment rhodopsin. In this process, PDE6 hydrolyzes cGMP to guanosine monophosphate (GMP), resulting in a decrease in the intracellular cGMP levels. This light-dependent de-crease in cGMP leads to hyperpolarization of the photoreceptors through the closure of cation channels. The inhibition of PDE6 increases the intracellular concentration of cGMP, which leads to opening of the sodium channels resulting in depolarization of the photoreceptor cells. The alteration of sodium channels causes exchange of Ca ++ , Na + and Mg ++ through the photoreceptor cells. As a result, ionic conductance generates an electrical response, which is transmitted to the visual cortex of the brain and produces a visual sensation. The visual excitation process can be recorded using electroretinography. If DA-8159 acts as a PDE6 inhibitor in retinal photoreceptor cells and inhibits the phototransduction process, an electrical alternation should be recorded in an electroretinogram (ERG).
Sildenafil citrate (Viagra®, Pfizer, Inc., New York, NY, U.S.A.) was initially developed as a drug to treat angina, but it was found to be highly specific to PDE5. Recently, it has been widely used to treat patients with erectile dysfunction. However, variable systemic and ocular side effects have been reported. The ocular side effects include visual halo (2), third nerve palsy (3), nonarteritic anterior ischemic optic neuropathy (4, 5), etc. As observed with sildenafil, DA-8159 may cause such ocular side effects. Theoretically, a PDE inhibitor may change the retinal physiology in two ways: an alteration of the phototransduction process through PDE6 inhibition in the photoreceptor cells, and an alteration in vascular flow through PDE5 inhibition in the vascular smooth muscle. We have previously assessed the alteration of phototransduction by ERG or subjective visual symptoms, and the alteration of blood flow by Doppler flowmetry (6)(7)(8).
The objectives of this animal experiment were to investigate the effects of DA-8159 on the ERGs, and to examine the histological change after DA-8159 administration in rabbits.
MATERIALS AND METHODS
Twenty male rabbits (1.5 to 2.0 kg of body weight, bw) were used for the electroretinography and blood concentration measurements. The rabbits were divided into four groups; the DA-8159 5 mg/kg, 15 mg/kg, and 30 mg/kg bw treated groups and a control group. The test drug, DA-8159, was dissolved in 5 mL of saline and fed through an L-tube. The control rabbits were given equal amount of saline. Each group consisted of five rabbits.
To evaluate the ERG changes after DA-8159 administration, electroretinography was performed prior to administration, one hour after, and five hours after the drug administration. To analyze the relationship between the blood concentrations of DA-8159 and the ERG changes, 5 mL of blood was drawn from the ear vein prior to and immediately after the ERG recording. The eyeball was enucleated immediately after electroretinography for the histological examination.
For electroretinography, the rabbits were kept in the dark for twenty minutes for adaptation. The pupil was dilated with an eyedrop of 2.5% phenylephrine hydrochloride. The animal was anesthetized with an intramuscular injection of ketamine hydrochloride (65 mg/kg bw) and xylazine hydrochloride (15 mg/kg bw) mixture. The recording electrodes were placed on both corneas. The ERG jet (Universo SA, Switzerland) was used as the recording electrode. The reference electrode was placed centrally on the shaven forehead. For the stability of the electrode, one end of a reference electrode was replaced by a skin needle, and was inserted into the forehead skin. The ground electrode was placed on the earlobe. Corneal dehydration was prevented by the frequent application of hydroxy-propyl-methylcellulose. The ERG was recorded using a UTAS-E 2000 system (LKC Technologies, Inc, Gaithersburg, Maryland, U.S.A.). The light stimuli were generated by a Ganzfeld dome stimulator (LKC Technologies). Full-field electroretinography was performed according to the Standard for Clinical electroretinography recommended by the International Society for Clinical Electrophysiology of Vision (ISCEV) (9). As recommended by the ISCEV, five basic responses were obtained from each rabbit. The significance of the ERG changes after drug administration was analyzed using a Bonferroni test. In brief, the time-dependent ERG changes according to the drug concentrations were compared with that of the control group. The relationship between the blood concentration of DA-8159 and the ERG changes were analyzed using linear regression analysis.
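The statistical steps described above (group comparisons with a Bonferroni correction and a linear regression of the ERG change on the plasma concentration) can be sketched as below; the arrays are illustrative stand-ins with the same structure as the study's measurements, not the actual data.

```python
import numpy as np
from scipy import stats

# Illustrative 30-Hz flicker implicit-time changes (ms) from baseline at 5 hr,
# standing in for one dose group (n = 5) and the saline control group (n = 5).
change_treated = np.array([2.1, 1.6, 2.4, 1.9, 2.2])
change_control = np.array([0.3, -0.2, 0.5, 0.1, 0.0])

res = stats.ttest_ind(change_treated, change_control)
n_comparisons = 6                                   # e.g. 3 doses x 2 time points per parameter
p_bonferroni = min(1.0, res.pvalue * n_comparisons)  # Bonferroni adjustment
print(f"t = {res.statistic:.2f}, raw p = {res.pvalue:.4f}, adjusted p = {p_bonferroni:.4f}")

# Linear regression of the ERG change against the plasma concentration (ug/mL)
conc = np.array([0.010, 0.014, 0.022, 0.028, 0.031])
erg_change = np.array([1.2, 1.9, 1.5, 2.3, 1.7])
fit = stats.linregress(conc, erg_change)
print(f"slope = {fit.slope:.1f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")
```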
For histological examination, the eyeball was enucleated 1 or 5 hr after the drug administration. The eyeball was enucleated under general anesthesia with an intramuscular injection of ketamine hydrochloride and xylazine hydrochloride mixture. A total of twenty rabbits, two rabbits in each group, were used for the histological examination. After enucleation, the animal was sacrificed by ketamine overdose. Immediately after enucleation, the eyeball was immersed in a fixative solution of 2% glutaraldehyde in 0.1 M phosphate buffers. One eye from each rabbit was used for optical microscopic examinations, and the other was used for electron microscopic examinations. For the optical microscopic examination, the eyeball was placed in 20 mL of 10% formaldehyde for 24 hr. For electron microscopic examination, the eyeball was bisected and a tissue block was cut into a 2×3 mm size using an dissecting microscope. The block was placed in 2% glutaraldehyde in 0.1 M phosphate buffer for 90 min in the cold, and post-fixed in 1% osmium tetraoxide for 90 min. After fixation, the block was dehydrated serially with ethanol, and embedded in epon. A semi-thin section was stained with uranyl acetate and lead citrate. The ultrastructural study was performed by transmission electron microscopy (ISI-LEM 2000, Akashi, Japan).
RESULTS
The five standard electroretinographic responses obtained from DA-8159 administrated and the saline ingested group are shown in Table 1. In the rod response, there was no remarkable ERG change at any dose or at any time after DA-8159 administration. In the maximal response, a statistically significant decrease in the b-wave amplitude (p<0.05) was noted at 5 hr after the administration of DA-8159 30 mg/kg, without a concurrent implicit time change. Otherwise no remarkable ERG changes were observed in the wave implicit time or amplitude at any dose or at any time after the test drug administration. The oscillatory potentials showed no significant change. In the cone response, there was no ERG change at any dose or at any time after the test drug administration. The 30-Hz flicker responses showed a statistically significant prolongation in the implicit time (p<0.05) after 5 hr of DA-8159 15 mg/kg or 30 mg/kg bw administration. The changes were associated with a concurrent decrease in the amplitude, even though this was statistically not significant (p>0.05).
The dose-dependent blood concentrations of DA-8159 as a function of time are shown in Table 2. No test drug was detected in the blood after 1 or 5 hr at dose of DA-8159 5 mg/kg bw. At doses of 15 mg/kg and 30 mg/kg bw, there was a dose-dependent increase of the blood concentration after 1 hr of drug administration. However, no difference in the blood concentration was found after 5 hr. There was no correlation between the blood concentration and the ERG change after 1 or 5 hr of test drug administration. Table 1. Implicit times and amplitudes of the five standard responses after administering DA-8159 in rabbits 1, difference between baseline value and value obtained 1 hr after DA-8159 administration; 5, difference between baseline value and value obtained 5 hr after DA-8159 administration; *, statistically significant, p<0.05.
No abnormal light and electron microscopic findings were observed in the DA-8159 treated groups. No abnormal findings were observed either in the sensory retina, the photoreceptor rod and cone cells, the retinal pigment epithelial cells, the Bruch's membrane, or the choroidal vessels. An electron microscopic examination showed the vascular endothelial cells of the choroid exhibit well-preserved terminal bars connecting the adjacent endothelial cells.
DISCUSSION
ERG has established itself for many decades as a routine diagnostic method in clinical ophthalmology. It is a summation of the electrical responses generated by the neural and non-neuronal cells within the retina. However, it is necessary to standardize the recording and interpreting protocols in order to make it possible to compare the results elicited in different institutions. In 1989 the ISCEV provided standard ERG test procedures and five basic responses (9). The five basic ERG responses are the rod response, the maximal rod and cone response, the oscillatory potentials, the cone response, and the 30-Hz flicker response. However, the standardization is suitable for humans, but not for rabbits. The anatomical and functional similarities of the rabbit retina have made it a simulated model for an ophthalmic study, but the retinal anatomy of the rabbit is slightly different from that of humans. There are three types of cone cells in the human retina, but only two, green cones and blue cones (10,11), are present in the rabbit retina. In addition, there is no standardized data for the rabbit ERG, which makes it unadvisable to use human ERG data in rabbits. Compared to the baseline values, a statistically significant difference was observed between each group. Therefore, it is advisable to compare the time-dependent ERG change in the given group with that of the control group. In this study, the statistical significance of the timedependent ERG change was analyzed by comparing the data from the test drug administrated group with that of the control group.
Luu et al. (12) studied the effect of sildenafil on the full-field ERG, color vision and the subjective visual symptoms on healthy volunteers. Two out of fourteen subjects who received 200 mg of sildenafil complained bluish vision. Those who showed a depression in the ERG cone function made more errors in the color vision test. In addition there was a correla-tion between the ERG changes and the occurrence of the subjective visual symptoms. They made a conclusion that 200 mg sildenafil caused rather mild acute alterations in the cone and rod function. Lee et al. (13) reported an alteration of ERG after a single dose of 100 mg sildenafil ingestion. Similar ERG changes were found in animal experiments. There was statistically significant prolongation of the implicit time in the 30-Hz flicker response at doses of 15 mg/kg bw and 30 mg/ kg bw DA-8159 with a concurrent decrease in amplitude, although this was statistically insignificant. Similar ERG changes were observed in the cone response at 30 mg/kg bw. Such ERG changes were not observed at 5 mg/kg bw. Because the 30-Hz flicker response or cone response represents the photoreceptor cone function, its alteration may cause color vision changes. These ERG changes are useful to explain the pathogenesis of color vision alterations after PDE inhibitor administration.
The mechanism of the ERG changes after PDE inhibitor administration is still unclear. Theoretically a higher concentration of a PDE inhibitor may cause an alteration in the phototransduction process, resulting in the ERG changes. Behn et al. (14) investigated the effect of sildenafil on the retina in knockout mice, which were heterozygous for a mutation causing an absence of the subunit of rod PDE6. They found that sildenafil significantly decreased the a-, and b-wave amplitudes in the heterozygous PDE6 subunit lacking mice. Further decrease in both the a-, and b-wave amplitude were observed with increasing sildenafil doses. The ERG decrease was more pronounced in the heterozygous PDE6 subunit lacking mice than in the wild mice containing normal PDE6. This result shows that the sildenafil-induced ERG change is not related to the PDE6 inhibition caused by sildenafil. However, they asserted that the heterozygous PDE6 subunit knockout mutation probably leads to a decrease in the amount of functional PDE6, creating an enhanced susceptibility to the inhibitory effects of sildenafil. In contrast, Vobig et al. (15) reported significant reductions in the a-, and b-wave amplitudes 1 hr after administering sildenafil, and these effects recovered to normal levels after 6 hr. The amplitude reduction correlated well with the slidenafil plasma concentration, which showed a peak 1 hr after administering the drug. Moreover, Behn et al. (14) reported that sildenafil has a significant dosedependent inhibitory action on the retinal function. The retinal inhibitory effect occurred with as little as twice the maximum equivalent dose recommended for humans. These results indicate a close correlation between the sildenafil blood concentration and the ERG change. However, this study found no significant ERG change at 5 mg/kg bw DA-8159, which was 3.5 times higher-dose recommended for a 70 kg person.
In this animal study, no correlation between the DA-8159 blood concentration and the ERG changes could be found. The mean blood concentration at 1 hr after administration of DA-8159 15 mg/kg bw was as high as 2.2 times that observed after 5 hr (0.031 µg/mL vs. 0.014 µg/mL, p<0.05), but the ERG amplitude changes were the opposite: there was a statistically significant ERG cone response change 5 hr after administering the test drug, not after 1 hr. As provided by the manufacturer, the T_max of blood DA-8159 is approximately 60 min after ingestion. If the ERG changes were due to the PDE6 inhibition effect of DA-8159, the changes would peak around 1 hr after administration, not 5 hr after. This suggests no significant correlation between the DA-8159 blood concentration and the ERG changes. Schneider et al. (16) reported a change in the ERG response related to the dose of a PDE inhibitor in cats: the phosphodiesterase inhibitor increased the rod b-wave amplitude at low concentrations but diminished it at high concentrations. In view of these results, it is possible that there is a correlation, positive or negative, between the blood concentration of a PDE inhibitor and the ERG change at relatively high doses. However, there remains some uncertainty in the correlation between the blood DA-8159 concentration and the ERG change.

Table 2. Mean blood concentrations (µg/mL) after administration of DA-8159 in rabbits.
In conclusion, there were no significant ERG changes after administering 5 mg/kg bw DA-8159. However, at 15 or 30 mg/kg bw, a statistically significant prolongation of the 30-Hz flicker implicit time was observed 5 hr after the test drug administration. Furthermore, there was a b-wave amplitude decrease in the cone response at 15 or 30 mg/kg bw, although this was statistically insignificant. Otherwise, no remarkable ERG changes were observed in the rod response, the maximal response, or the oscillatory potentials. There was no correlation between the blood DA-8159 concentration and the ERG changes. In light and electron microscopic examinations, there were no histological changes after DA-8159 administration at any dose or at any time. These data suggest that DA-8159 has a minimal effect on the ERG in rabbits, but further evaluation of the effects of DA-8159 on visual functions in humans is needed.
The Role of Green Technology Innovation on the Development of Sustainable Tourism: Does Renewable Energy Use Help Mitigate Environmental Pollution? A Panel Data Analysis
The tourism industry has long been blamed as a major driver of global warming because it is a large industry that uses much energy, most of which comes from sources that emit carbon dioxide. However, despite all the blame placed on tourism for its negative effects on the environment, little work has been done to ascertain its impact on the environment. Unlike past studies that allude that tourism development exacerbates carbon dioxide emissions and hence global warming, the current research shows that in the OECD countries tourism does not have any significant link with greenhouse gas emissions. This is because OECD nations have long since started to shift from fossil fuel use as a source of energy to renewable energy use, which does not exacerbate greenhouse gas emissions. However, the current research concurs with the findings of past studies that renewable energy consumption significantly decreases greenhouse gas emissions. Using renewable sources of energy instead of fossil fuels should continue to be encouraged in all nations for the purpose of achieving low carbon emissions in the future. The current study uses a dynamic GMM model for 38 OECD countries from 2008 to 2019. The dynamic GMM model remains one of the best models since it corrects the endogeneity problem in a model; it overcomes autocorrelation, heteroskedasticity, and normality problems, hence the robustness and reliability of the results obtained. Gross domestic product and population size negatively affect greenhouse gas emissions, while the inflation rate is observed to have a significant, strong positive link with greenhouse gas emissions. Keywords: emissions, tourism development, renewable energy, population size, inflation rate, GDP and FDI of the OECD countries.
Introduction
The tourism industry is one of the world's industries that is largely blamed for causing greenhouse gas emissions, which in turn cause global warming. The work of Tian, Belaid and Ahmad (2021) and Yue, Liao, Zhang, Shao and Gao (2021) alludes that the tourism industry has long been blamed for having a strong impact on environmental degradation. However, it is surprising that despite all these accusations about the effects of the tourism industry on environmental degradation, very few studies have been undertaken to examine the nexus between the two (see, for instance, Yue, et al., 2021; Tian, et al., 2021). The tourism industry is believed to negatively affect the environment because it uses a great deal of energy to undertake its activities. Yue, et al. (2021) also note that the available papers on the implications of the tourism industry for greenhouse gas emissions provide contradictory results. For example, Zhang and Liu (2019) postulate that in the case of the North and South East Asia (NSEA 10) countries tourism strongly causes environmental degradation, while Tian, et al. (2021) allude that an increase in tourism development in the long run tends to reduce emissions. Therefore, there is still a gap in the literature that needs to be covered by undertaking more research in various nations and by employing various methods for robust results. For this reason, the current research is aimed at covering the existing gap in the literature by providing an alternative study on the impact of tourism development on greenhouse gas emissions. The current study differs from past research in that it examines the impact of tourism development on greenhouse gas emissions in the Organisation for Economic Co-operation and Development (OECD) member countries, which has not been examined before to the best of our knowledge. The paper also employs a dynamic Generalized Method of Moments (GMM) model, which gives robust results in the presence of endogeneity in the specified model. Past studies find that the relationship between economic growth and environmental pressure follows an inverted U-shaped curve, thus the EKC holds (see also, Ma, Ahmad & Oei, 2021; Dietz, Rosa & York, 2012). All these findings clearly point out that renewable sources of energy reduce environmental degradation while at the same time promoting economic growth.
As opposed to the many claims that tourism development exacerbates greenhouse gas emissions, the current results in this research show that in the OECD nations tourism development does not have a significant impact on greenhouse gas emissions. The reason behind the insignificant relationship between the two is the shift by countries from using fossil fuel sources of energy to using renewable energy sources in the tourism industry. Thus, instead of fossil fuel energy, renewable energy has begun to be used in the tourism industry. The results of the study also clarify the significance of renewable energy in curbing greenhouse gas emissions, since a negative significant effect has been ascertained.
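To make the estimation strategy mentioned in the introduction concrete, the sketch below illustrates the dynamic-panel idea behind GMM with a minimal first-difference instrumental-variable (Anderson−Hsiao type) estimator on a synthetic panel; the variable names and data are invented for illustration, and a full Arellano−Bond or system GMM estimator, as used in the paper, would exploit a larger instrument set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic panel: N countries, T years, one regressor (e.g. renewable energy use)
N, T = 38, 12
rho_true, beta_true = 0.5, -0.3
x = rng.normal(size=(N, T))
alpha_i = rng.normal(size=(N, 1))              # unobserved country fixed effects
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho_true * y[:, t - 1] + beta_true * x[:, t] + alpha_i[:, 0] \
              + rng.normal(scale=0.5, size=N)

# First differences remove the fixed effect; lagged levels y_{t-2} instrument dy_{t-1}
dy   = (y[:, 3:]  - y[:, 2:-1]).ravel()        # dependent variable: dy_t
dy_l = (y[:, 2:-1] - y[:, 1:-2]).ravel()       # endogenous regressor: dy_{t-1}
dx   = (x[:, 3:]  - x[:, 2:-1]).ravel()        # exogenous regressor: dx_t
z    = y[:, 1:-2].ravel()                      # instrument: y_{t-2}

# Two-stage least squares: first stage projects dy_{t-1} on (z, dx), second stage uses the fit
Z = np.column_stack([np.ones_like(z), z, dx])
dy_l_hat = Z @ np.linalg.lstsq(Z, dy_l, rcond=None)[0]
X_hat = np.column_stack([np.ones_like(dy), dy_l_hat, dx])
coef = np.linalg.lstsq(X_hat, dy, rcond=None)[0]

print(f"estimated rho = {coef[1]:.3f} (true {rho_true}), beta = {coef[2]:.3f} (true {beta_true})")
```

On the synthetic data the estimator roughly recovers the persistence and slope parameters, which is the property that motivates using instrumented, differenced dynamic-panel estimators when the lagged dependent variable makes ordinary panel regressions biased.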
Literature Review
Environmental Kuznets Curve (EKC) Kuznets (1955) examined the association between income inequality and economic growth and came to a conclusion that the two exhibited for an inverted U-shaped relationship. According to the postulations by Kuznets (1955), as an economy grow from low GDP per capita to a higher one, income inequality tends to increase up until the turning point is reached. Further increases in economic growth beyond the turning point causes a decrease in income inequality in a nation. Almost forty years after the work of Kuznets (1955) other researchers such as, Seldon and Sang (1994); Sha k (1994); Grossman and Kruger (1995) Stern, Common and Babbier (1996) postulated that Kuznets' proposition is also applicable on environmental impacts, hence the EKC proposition was born. The EKC proposition, thus alludes that an increase in the economic growth of a nation will rst encourages environmental degradation as nations uses sources of energy and engage on activities that harm the environment up until the turning point is reached where environmental stress is relieved such that any further increases in economic growth tends to reduce environmental degradation. Dietz, et al. (2012) argues that the turning point is achieved due to the shift of nations from fossil fuel energy use to renewable energy among many other factors, hence explains the reason behind EKC shape.
The argument by Dietz, et al. (2012) can be used to design appropriate policies for the tourism industry. Since the tourism industry relies heavily on energy, if the industry uses fossil fuels then growth in tourism will come with further degradation of the environment, corresponding to the upward-sloping part of the EKC curve. However, once nations recognize the harm caused by non-renewable energy sources and switch to renewable energy, growth in tourism will place less stress on the environment, corresponding to the downward-sloping part of the EKC curve. Tourism is valuable to nations because it contributes a sizeable share of GDP, and if its growth damages the environment a trade-off exists between the two. Nations are therefore encouraged to adopt renewable energy, which is environmentally friendly (see also Dietz, et al., 2012).
Impact of tourism development on the environment
It is generally agreed that the tourism industry plays a crucial role in causing environmental degradation, because the industry uses a great deal of energy to carry out its activities. Most of this energy is obtained from non-renewable sources that pollute the air and hence contribute to global warming. The studies by Chaoqun (2011), Yue, et al. (2021), Tian, et al. (2021) and Zhang and Liu (2019), among many others, concur that tourism development significantly impacts the environment. The tourism industry is crucial for the growth of the world economy, as it contributes a large share of GDP; at the same time, environmental degradation harms the world and future generations. We therefore agree with Dietz, et al. (2012) that this creates a trade-off between tourism development and environmental stress, since factors that improve tourism, and hence GDP, tend to harm the environment. Nations should therefore strive to adopt measures that promote tourism development without harming the environment.
Empirical studies have so far produced mixed results on the nexus between carbon dioxide emissions and tourism development. Tian, et al. (2021) observed that, in the long run, increases in tourism development tended to reduce carbon dioxide emissions in the G20 countries, indicating that tourism does not harm the environment but rather helps reduce pollution. These findings arise because the G20 nations have started to shift from fossil fuels to renewable energy, so the energy-intensive tourism industry increasingly runs on renewable energy. However, these results contradict the findings of Yue, et al. (2021), who identify tourism as a major driver of greenhouse gas emissions. Nations are thus encouraged to shift from non-renewable to renewable energy sources, as these help mitigate environmental degradation.
Some researchers have observed that renewable energy consumption has a negative and significant effect on greenhouse gas emissions, meaning that the more renewable energy is used in the economic activities of nations, the lower greenhouse gas emissions will be (Khan, et al., 2020). Other studies report an inverted U-shaped relationship between economic growth and environmental degradation, thus the EKC holds (see also Ma, Ahmad & Oei, 2021; Dietz, Rosa & York, 2012). All these findings point out that renewable sources of energy reduce environmental degradation while at the same time promoting economic growth, and they provide overwhelming evidence that if nations seek to curb greenhouse gas emissions, renewable energy is the way to go. However, a few other studies, for instance Mohsin, et al. (2021) and Liu, et al. (2021), obtained a positive effect of renewable energy on greenhouse gas emissions. These are among the few studies whose evidence contradicts the wider literature, an anomaly that might have arisen from models that are not robust. Attiaoui, et al. (2017) and Toumi and Toumi (2019) argue that the association between the two is neutral, while Saidi and Omri (2020) find that no association exists between them. Therefore, considering the overwhelming evidence provided by the many studies mentioned above, we conclude that renewable energy use is capable of reducing greenhouse gas emissions and should be used as a substitute for non-renewable sources.
Nexus between renewable energy and economic development
Recent studies on the nexus between renewable energy use and economic growth show that renewable energy consumption has a positive effect on economic growth (Wang & Wang, 2020; Smolovic et al., 2020; Rahman, 2020; Shahbaz, et al., 2020; Ivanovski, et al., 2021; Dogan, et al., 2021; Chen, et al., 2020). Thus, if world economies adopt renewable sources, gross domestic product will improve in addition to the greenhouse effect being curbed (see Deka, Cavusoglu & Dube, 2021). Non-renewable energy has also been found to have a significant positive effect on GDP despite harming the environment (Ivanovski, et al., 2021; Rahman, 2020), which leaves governments and policy makers facing a trade-off between the two. Both economic development and a safe environment are of paramount importance to nations. Therefore, since renewable energy can replace fossil fuels while also improving GDP, it is the way to go.
Several other studies have examined the effect of renewable energy on employment. For example, Ge and Zhi (2016) find that the green economy positively affects employment in both developing and developed nations. The association between renewable energy consumption, the foreign exchange rate and inflation has also been examined: renewable energy use has been observed to affect both inflation and the foreign exchange rate negatively, indicating that it encourages appreciation of the foreign exchange value and stabilizes the inflation rate (Deka, Cavusoglu & Dube, 2021). Therefore, for nations to achieve a clean environment in the future together with high economic growth, a stable inflation rate and a strong exchange value, renewable energy use should be encouraged.
Sample and Data
To achieve the aim of this research, our sample consists of 38 member countries of the Organisation for Economic Co-operation and Development (OECD). With 38 countries, the research uses panel data for all variables employed. The study period runs from 2008 to 2019 and yearly data are used. Since the data form a panel, each variable consists of 456 observations (12 × 38), which is large enough to produce reliable, unbiased results. Secondary data are used, retrieved from the OECD website.
Variables
In this research, seven variables from the 38 OECD member countries are used to achieve the study's aim: greenhouse gas emissions (GHG), tourism development (TOR), renewable energy (RE), population size (POP), inflation (INF), gross domestic product (GDP) and foreign direct investment (FDI).
Greenhouse gas emissions are measured in thousand tonnes and in tonnes per capita, while carbon dioxide is measured in million tonnes and in tonnes per capita (www.data.oecd.org).
Independent variables
Three variables are specified as explanatory variables in this study: tourism development, renewable energy and population size. They are chosen as explanatory variables for greenhouse gas emissions because they are known to affect it directly. Tourism development is represented in this research by tourism receipts and spending. According to www.data.oecd.org, tourism receipts and spending consist of travel credits and debits and represent the value of money spent by tourists on visits outside their own country; they are measured in United States (US) dollars. The tourism industry has been blamed as a major driver of greenhouse gas emissions (Yue, et al., 2021; Tian, et al., 2021) and can hence be modeled to explain greenhouse gas emissions. Renewable energy, according to www.data.oecd.org, comprises those sources of energy that contribute to total primary energy supply, are environmentally friendly and can be used over and over again. These sources include hydro, wave, geothermal, tidal, solar and wind energy, among many others.
It is measured in thousand tonnes or as a percentage of total primary energy supply. Renewable energy sources have been promoted as alternatives to fossil fuels that emit greenhouse gases and can hence help explain greenhouse gas emissions. Population size is the number of people present in, or temporarily out of, the country, including aliens who have permanently settled in the country (www.data.oecd.org). As the population grows, more energy is required, since people use energy in their day-to-day activities, and some of this energy is obtained from fossil fuels which emit greenhouse gases.
Control variables
To control the model and avoid omitting other explanatory variables, gross domestic product (GDP), the inflation rate and foreign direct investment (FDI) are specified as control variables. GDP is the total value of goods and services produced within the borders of a country regardless of the citizenship of the people involved in production; thus, GDP includes all products and services produced by local and foreign firms, as long as they are produced within the country's boundaries. Inflation is the rate at which the prices of a country's goods and services change over time, say in one year, and in this research the consumer price index (CPI) is taken to represent the inflation rate (www.data.oecd.org). FDI flows are the value of cross-border transactions related to direct investment, taking the form of equity, intercompany debt and reinvested earnings (www.data.oecd.org). FDI is measured in US dollars and as a share of GDP.
Method
As mentioned earlier, this paper examines the impact of tourism development and renewable energy use on greenhouse gas emissions. We therefore follow the model below, expressed as a linear function:

GHG = f(TOR, RE, POP, INF, GDP, FDI)   (1)

where GHG represents greenhouse gas emissions, TOR tourism development, RE renewable energy use, POP population size, INF the inflation rate, GDP gross domestic product and FDI foreign direct investment.
Given the sample size and the time period of the study, the most suitable method for robust results is the dynamic Generalized Method of Moments (GMM) model. This is because the number of countries in our panel (38) exceeds the number of time periods (12); when the number of countries or subjects under study is larger than the time period, GMM is the most suitable method. Anderson and Hsiao (1982), Holtz-Eakin, Newey and Rosen (1988), Arellano and Bond (1991), Arellano and Bover (1995) and Blundell and Bond (1998) pioneered the dynamic GMM model in a series of studies. There are basically two types of GMM estimators: first-difference GMM (GMM-DIF), due to Anderson and Hsiao (1982), Holtz-Eakin, et al. (1988) and Arellano and Bond (1991), and system GMM, due to Arellano and Bover (1995) and Blundell and Bond (1998). The difference between the two is that first-difference GMM corrects the endogeneity problem by differencing the regressors and removing fixed effects (Arellano and Bond, 1991), while system GMM uses orthogonal deviations that subtract the average of all available future observations of a variable (Arellano & Bover, 1995; Blundell & Bond, 1998). System GMM is generally preferred over first-difference GMM because it minimizes data loss and works well in both balanced and unbalanced panel data; first-difference GMM magnifies gaps in unbalanced data sets because it subtracts the previous observation from the contemporaneous one.
Generally speaking, GMM is preferred over ordinary least squares because it overcomes heteroskedasticity, autocorrelation and normality problems (Fraj, Hamdaoui & Maktouf, 2018). These are serious problems in time-series modelling, since their presence results in biased estimates, so any model that overcomes them is preferred. In addition, GMM corrects the endogeneity problem (see Arellano & Bond, 1991; Arellano & Bover, 1995; Blundell & Bond, 1998). Endogeneity arises through various channels and describes a situation in which one or more explanatory variables correlate with the error term; these channels include, but are not limited to, omitted variables on the right-hand side of the regression, measurement error in the explanatory variables, and simultaneity, where the explained and an explanatory variable affect each other simultaneously.
In this study we apply both system and first-difference GMM for comparison purposes. The J-statistic and the Arellano and Bond test of serial correlation are employed as diagnostic tests. Before running the dynamic GMM model, we examine the descriptive statistics of the variables and check for unit roots using the Augmented Dickey-Fuller (ADF) test (Dickey and Fuller, 1979) and the Phillips-Perron (PP) test (Phillips and Perron, 1988); the unit root tests establish the order of integration of the variables. The Pedroni cointegration test is then used to check whether the variables have a long-run relationship. The equation below is the statistical representation of the GMM model employed:

GHG_t = β0 + β1·TOR_t + β2·RE_t + β3·POP_t + β4·INF_t + β5·GDP_t + β6·FDI_t + e_t   (2)

In equation (2), GHG is the dependent variable, while TOR, RE, POP, INF, GDP and FDI are explanatory variables; β0 to β6 are the coefficients of the model and e_t is the error term.
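For illustration, the sketch below estimates a simplified first-difference (Anderson-Hsiao-style) instrumental-variable GMM version of equation (2) in Python. It is a minimal illustration of the difference-GMM logic, not the full Arellano-Bond/Blundell-Bond estimator reported later; the long-format DataFrame `df` and its column names are assumptions, and the pandas and linearmodels packages are required.

```python
import pandas as pd
from linearmodels.iv import IVGMM

# df: long-format panel with columns country, year, GHG, TOR, RE, POP, INF, GDP, FDI
# (hypothetical column names; replace with the actual OECD series).
df = df.sort_values(["country", "year"]).set_index(["country", "year"])
g = df.groupby(level="country")

data = pd.DataFrame(index=df.index)
for col in ["GHG", "TOR", "RE", "POP", "INF", "GDP", "FDI"]:
    data["d_" + col.lower()] = g[col].diff()           # first differences remove country fixed effects
data["d_ghg_lag"] = data.groupby(level="country")["d_ghg"].shift(1)
data["ghg_lag2"] = g["GHG"].shift(2)                    # deeper lags in levels serve as instruments
data["ghg_lag3"] = g["GHG"].shift(3)
data = data.dropna()

# The lagged differenced dependent variable is endogenous by construction,
# so it is instrumented with the second and third lags of the level.
model = IVGMM.from_formula(
    "d_ghg ~ 1 + d_tor + d_re + d_pop + d_inf + d_gdp + d_fdi + [d_ghg_lag ~ ghg_lag2 + ghg_lag3]",
    data=data,
)
result = model.fit(cov_type="robust")
print(result.summary)       # coefficients; result.j_stat gives the over-identification test
```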
Descriptive statistics results
Table 1 presents the descriptive statistics of the variables, while the unit root test results are reported in Table 2 below. The ADF and PP unit root tests have been identified as the most reliable methods, see Granger (1986). According to the ADF results in Table 2, greenhouse gas emissions are not stationary at level but are stationary at first difference; both the ADF and PP tests agree that greenhouse gas emissions are integrated of order 1. Tourism spending, according to the PP test, is stationary at first difference at the 1% significance level and is hence integrated of order 1, and the ADF test confirms this at the 1% level. Renewable energy use is not stationary at level and stationary at first difference according to the ADF test, while the PP test confirms stationarity at first difference. Population size, according to the PP test, is not stationary at level but stationary at first difference at the 1% significance level, and the ADF test confirms this. The inflation rate is stationary at both level and first difference at the 1% significance level according to both the ADF and PP tests. The log of GDP (lnGDP) is not stationary at level and stationary at first difference according to both tests. FDI is not stationary at level but stationary at first difference at the 1% significance level according to the ADF test, and the PP test confirms stationarity at first difference (see Table 2).
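The stationarity checks summarized above can be reproduced along the following lines for any individual series. This is a minimal sketch, assuming each series is a pandas Series for one country and one variable, that the statsmodels and arch packages are available, and that the hypothetical labels in the example match the panel's indexing; the paper's own tables may rest on panel-wide versions of these tests.

```python
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import PhillipsPerron

def integration_order(y, alpha=0.05):
    """Classify a series as I(0) or I(1) using the ADF and PP tests."""
    # Short yearly series, so keep the ADF lag order small.
    adf_p_level = adfuller(y, maxlag=1, regression="c")[1]
    pp_p_level = PhillipsPerron(y, trend="c").pvalue
    if adf_p_level < alpha and pp_p_level < alpha:
        return "I(0): stationary at level"

    dy = y.diff().dropna()
    adf_p_diff = adfuller(dy, maxlag=1, regression="c")[1]
    pp_p_diff = PhillipsPerron(dy, trend="c").pvalue
    if adf_p_diff < alpha and pp_p_diff < alpha:
        return "I(1): stationary at first difference"
    return "order of integration above 1"

# Example (hypothetical labels): integration_order(df.loc["AUS", "GHG"])
```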
Pedroni cointegration test
Cointegration testing is a crucial step in economic modelling, needed to ascertain the long-run relationship between variables (Granger, 1986; Engle and Granger, 1987). In this research, the Pedroni cointegration test is used (Table 3). The Group ADF t-statistic, the Group PP t-statistic, the Panel PP t-statistic and its weighted statistic, and the Panel ADF t-statistic and its weighted statistic are all significant at the 1% level, showing that the null hypothesis of no cointegration should be rejected in favour of cointegration. The Group rho-statistic, Panel v-statistic and Panel rho-statistic suggest that the null hypothesis of no cointegration should not be rejected; however, this is outweighed by the evidence from the Group PP and ADF statistics and the Panel PP and ADF statistics, which shows that the variables are cointegrated. Therefore, there is a long-run equilibrium relationship between greenhouse gas emissions, tourism development, renewable energy, population size, the inflation rate, GDP and FDI in the OECD countries.
Panel GMM Results and discussion
Table 4 below reports the findings of the dynamic panel GMM models; both system GMM and first-difference GMM results are presented. The dependent variable is greenhouse gas emissions, while the other variables are explanatory variables. The lagged value of greenhouse gas emissions is included as an explanatory variable to account for dynamics and potential endogeneity, and the second lag of greenhouse gas emissions is automatically employed as the model's instrument. Both system and first-difference GMM indicate that the one-period lag of greenhouse gas emissions significantly and negatively affects the current value of greenhouse gas emissions, suggesting that where emissions were high in the past they decline subsequently. This is a good sign, since nations are working towards a low-carbon future. The system and first-difference GMM results indicate no significant association between tourism receipts and spending and greenhouse gas emissions in the OECD nations, showing that tourism development does not significantly affect greenhouse gas emissions. These findings oppose Zhang and Liu (2019) and Yue, et al. (2021), who report that tourism development positively affects carbon dioxide emissions. The system GMM coefficient is positive (Table 4), indicating that a rise in tourism development would increase greenhouse gas emissions, but it is not significant. The difference from past studies may reflect the fact that those studies cover different countries with different policies; OECD member nations have already started shifting to renewable energy use, which explains why the tourism industry has no significant impact on greenhouse gas emissions.
Renewable energy consumption has a highly significant negative impact on greenhouse gas emissions; the results of both system and first-difference GMM are significant at the 1% level. There is thus a strong negative link between renewable energy use and greenhouse gas emissions, indicating that renewable energy use is a major driver of emission reductions (Yue, et al., 2021; Tian, et al., 2021; Chaoqun, 2011), and countries should be encouraged to adopt green technology to achieve a low-carbon future (Salim & Rafiq, 2012; Becker & Fischer, 2013; Deka, Cavusoglu & Dube, 2021). The system GMM results in Table 4 show that the population size of the OECD nations negatively affects greenhouse gas emissions, i.e. an increase in population size significantly reduces greenhouse gas emissions at the 10% significance level; the first-difference GMM coefficient is negative but not significant. This finding on the nexus between population size and greenhouse gas emissions differs from past studies such as Yue, et al. (2021), who observed that population positively affects carbon dioxide emissions. The difference may be due to different population sizes and population policies; for example, some countries have adopted a one-child policy, which has seen populations decrease in parts of Europe, together with the adoption of renewable energy, which does not emit greenhouse gases.
The inflation rate is observed to have a significant positive influence on greenhouse gas emissions, showing that a high inflation rate tends to increase greenhouse gas emissions. Therefore, if the inflation rate is stabilized and renewable energy sources are used, a low-carbon future with low inflation rates can be achieved (Deka, Cavusoglu & Dube, 2021). In addition, gross domestic product has a significant negative link with greenhouse gas emissions: an increase in the GDP of the OECD nations reduces carbon dioxide emissions. These findings are favorable in that policy makers face no trade-off as nations seek to improve GDP and at the same time reduce greenhouse gas emissions; the credit goes to green technology, as most OECD nations have turned to renewable energy and are enjoying its fruits. FDI is found to have no significant impact on greenhouse gas emissions; its coefficient is negative, indicating that an increase in FDI would reduce emissions, but the effect is not significant. Greenhouse gas emissions (GHG) is the dependent variable in Table 4, and the second lag of greenhouse gas emissions (GHG(-2)) is the model's instrument.
The J-statistic, used as a diagnostic test of whether the GMM model is correctly specified, is below its critical value and its p-value exceeds the 10% significance level, so we do not reject the null hypothesis that the model is correctly specified. The Arellano and Bond test of serial correlation is also applied to the first-difference GMM; its value is below the critical value and its p-value exceeds the 10% significance level, so we do not reject the null hypothesis of no serial correlation in the model. The findings of this model are therefore robust, reliable and valid.
Conclusion
The current study was undertaken to fill the gap in the literature on the association between tourism development and environmental degradation. The tourism industry has long been blamed as a major driver of global warming, since it is among the industries that use the most energy (Chaoqun, 2011), much of which comes from sources that emit carbon dioxide. Unlike past studies claiming that tourism development exacerbates carbon dioxide emissions and hence global warming (Tian, et al., 2021; Yue, et al., 2021), the current research shows that in the OECD countries tourism has no significant link with greenhouse gas emissions. This is because OECD nations long ago began the shift from fossil fuels to renewable energy, which does not exacerbate greenhouse gas emissions. However, the current research concurs with past findings that renewable energy consumption significantly reduces greenhouse gas emissions (Hdom, 2019; Mohsin, et al., 2021; Xiaosan, et al., 2021; Bhat, 2018; Kahia, et al., 2019; Khan, et al., 2020). Renewable energy use should continue to be encouraged in all nations in order to achieve a low-carbon future (Salim & Rafiq, 2012; Becker & Fischer, 2013). The study uses a dynamic GMM model for 38 OECD countries from 2008 to 2019. Dynamic GMM remains one of the best models since it corrects for endogeneity (Arellano and Bover, 1995; Arellano and Bond, 1991; Blundell and Bond, 1998) and overcomes autocorrelation, heteroskedasticity and normality problems (Fraj, et al., 2018), hence the robustness and reliability of the results obtained. Gross domestic product and population size negatively affect greenhouse gas emissions, while the inflation rate has a significant positive link with greenhouse gas emissions. The Pedroni cointegration test shows that the indicators under study have a significant long-run relationship because they are cointegrated (Granger, 1986).
A limitation of the study is that it may have omitted other relevant explanatory variables with a significant impact on greenhouse gas emissions, such as urbanization and fossil fuel use, among others. However, the results remain robust because the dynamic GMM model corrects for the endogeneity problem that may arise from omitted regressors. Moreover, the findings can be generalized to other developed nations with conditions similar to those of the OECD. Further work is therefore needed to examine how tourism development, population size, renewable energy use and other regressors affect greenhouse gas emissions in developing nations, such as African countries.
Declarations

Funding
No funding was received from any organization.
Competing interests
The authors declare that they have no competing interests.
Availability of data and materials
The data used in this paper are secondary data retrieved from the Organisation for Economic Co-operation and Development (OECD) website, www.oecd.org
Ethical Approval
Not Applicable

Consent to Participate
Poly(2-alkyl-2-oxazoline)s: A polymer platform to sustain the release from tablets with a high drug loading
Sustaining the release of highly dosed APIs from a matrix tablet is challenging. To address this challenge, this study evaluated the performance of thermoplastic poly (2-alkyl-2-oxazoline)s (PAOx) as matrix excipient to produce sustained-release tablets via three processing routes: (a) hot-melt extrusion (HME) combined with injection molding (IM), (b) HME combined with milling and compression and (c) direct compression (DC). Different PAOx (co-)polymers and polymer mixtures were processed with several active pharmaceutical ingredients having different aqueous solubilities and melting temperatures (metoprolol tartrate (MPT), metformin hydrochloride (MTF) and theophylline anhydrous (THA)). Different PAOx grades were synthesized and purified by the Supramolecular Chemistry Group, and the effect of PAOx grade and processing technique on the in vitro release kinetics was evaluated. Using the hydrophobic poly (2-n-propyl-2-oxazoline) (PnPrOx) as a matrix excipient allowed to sustain the release of different APIs, even at a 70% (w/w) drug load. Whereas complete THA release was not achieved from the PnPrOx matrix over 24 h regardless of the processing technique, adding 7.5% w/w of the hydrophilic poly (2-ethyl-2-oxazoline) to the hydrophobic PnPrOx matrix significantly increased THA release, highlighting the relevance of mixing different PAOx grades. In addition, it was demonstrated that the release of THA was similar from co-polymer and polymer mixtures with the same polymer ratios. On the other hand, as the release of MTF from a PnPrOx matrix was fast, the more hydrophobic poly (2-sec-butyl-2-oxazoline) (PsecBuOx) was used to retard MTF release. In addition, a mixture between the hydrophilic PEtOx and the hydrophobic PsecBuOx allowed accurate tuning of the release of MTF formulations. Finally, it was demonstrated that PAOx also showed a high ability to tune the in vivo release. IM tablets containing 70% MTF and 30% PsecBuOx showed a lower in vivo bioavailability compared to IM tablets containing a low PEtOx concentration (7.5%, w/w) in combination with PsecBuOx (22.5%, w/w). Importantly, the in vivo MTF blood level from the sustained release tablets correlated well with the in vitro release profiles. In general, this work demonstrates that PAOx polymers offer a versatile formulation platform to adjust the release rate of different APIs, enabling sustained release from tablets with up to 70% w/w drug loading.
Introduction
Oral solid dosage forms with sustained-release features are of high interest as they allow to maintain therapeutically optimal plasma drug concentrations for an extended time, therefore, decreasing the dosing frequency and improving patient compliance. Although sustained release dosage forms offer many advantages, the formulation of such products is mainly challenging for highly dosed and soluble active pharmaceutical ingredients (APIs) as the drug release is often too fast and/or shows a burst release [1,2].
Hot-melt extrusion (HME) is considered an essential drug formulation development technique in the pharmaceutical field. HME can be used to increase the bioavailability of poorly soluble APIs, mask the taste and control the release of specific APIs, develop enhanced drug delivery systems, and others. Consequently, therapeutic goals and patient compliance can be enhanced [3][4][5][6]. HME is also an eco-friendly technique that does not involve solvents. During HME, the API is embedded in a polymeric carrier under controlled conditions of elevated temperature and pressure. Subsequently, the material is forced through a well-defined die to form a uniform geometry and density product. After extrusion, extrudates can be processed into the desired dosage form (e.g. tablets, mini-matrices, granules, films, pellets, or others). The selection of the downstream approach is highly dependent on the intended application, the final dosage form's geometry, production cost, and material behaviour [7]. HME is an effective manufacturing technique to prepare sustained release dosage forms due to the intense mixing of crystalline drug particles with the release retarding matrix carriers [3,6]. However, there is only a limited number of thermoplastic pharmaceutical polymers with suitable physicochemical properties to allow successful HME: hydroxypropylmethylcellulose [8], xanthan gum [9], methacrylic acid co-polymers [10][11][12], ethylcellulose [13][14][15][16][17] and ethylene-vinyl acetate. However, commercially available polymers lack the possibility to tune their chemical structures as in most cases only the molecular weight is varied to obtain a polymer grade with different properties. Most polymers also require plasticizers to improve the processing conditions, and only a few allow to incorporate a drug load up to 50% w/w without processing issues or burst-release concerns [18]. Hence, expanding the range of polymers suitable for HME will support the development of alternative dosage forms.
Poly (2-alkyl-2-oxazoline)s (PAOx) are a polymer class comprising biocompatible, thermoresponsive, and amphiphilic polymers, depending on the side chain, that feature a tertiary amide group in the repeating units [19]. The synthesis of PAOx was developed by different research groups and is performed by cationic ring-opening polymerization (CROP) of 2-substituted 4,5-dihydrooxazoles, referred to as 2-oxazolines [20][21][22]. PAOx is an interesting polymer group due to its high tunability resulting from changing the alkyl substituent (R) that constitutes the polymer side chain (Fig. 1), affecting the overall hydrophilicity, thermal properties, and processability. The chain length of the functional group at (R) position highly affects the polymer physicochemical properties, which can be further fine-tuned by copolymerization of different 2-oxazoline monomers (Fig. 1). By increasing the length of the alkyl component on the 2-position, more hydrophobic polymers will be obtained (Fig. 2) leading to polymers with different water solubility and some of them exhibit lower critical solution temperature (LCST) behaviour [23][24][25]. Therefore, PAOx might be interesting for developing sustained-release dosage forms.
Recently, Claeys et al. proved the suitability of poly-2-ethyl-2oxazoline (PEtOx) as a matrix excipient to produce controlled-release tablets using HME followed by injection molding [26]. Metoprolol tartrate and fenofibrate were used as water-soluble and poorly water-soluble APIs, respectively. HME of both formulations resulted in solid dispersions, and drug release showed slower MPT release from the PEtOx matrix compared to the pure API due to the slower dissolution rate of the polymeric matrix. However, the poorly water-soluble fenofibrate showed faster dissolution from the PEtOx matrix compared to the pure API. After this first report on using PAOx as excipient for drug formulation, various PAOx grades have been demonstrated for the efficient preparation of solid dosage forms [27][28][29][30][31][32][33]. However, to the best of our knowledge, PAOx have not been reported as excipient for sustained release formulations despite that they appear to be ideally suited for this purpose based on the tunability of their physical properties in combination with a good processability.
Therefore, in this work we studied the effectiveness of different PAOx polymers to sustain the release of highly dosed tablets. Poly(2-n-propyl-2-oxazoline) (PnPrOx), poly(2-ethyl-2-oxazoline) (PEtOx), poly(2-sec-butyl-2-oxazoline) (PsecBuOx), and poly(2-cyclopropyl-2-oxazoline) (PcPrOx) were chosen based on their solubility behaviour ranging from water-soluble, thermoresponsive to almost water-insoluble (Fig. 2). These polymers were investigated as a formulation platform to control the sustained release behaviour of three APIs with different aqueous solubility: metoprolol tartrate (MPT), metformin hydrochloride (MTF) and theophylline anhydrous (THA) (Fig. 3). Several downstream processing techniques were used to prepare tablet-shaped dosage forms from the PAOx/API mixtures: (a) HME in combination with IM, (b) HME in combination with milling and compression, and (c) direct compression. After establishing the optimal polymer-API combinations which allowed to sustain the in vitro release of tablets with 70% w/w API loading, the most promising formulations were used for in vivo studies using beagle dogs.
Materials
Model drugs with different aqueous solubility and melting temperature were used to examine their effect on the processability and the release kinetics. Metoprolol tartrate (MPT), metformin hydrochloride (MTF) and theophylline anhydrous (THA) were purchased from Utag (The Netherlands), Sigma-Aldrich (Germany) and Siegfried (Switzerland), respectively. The aqueous solubility at 25 °C is > 1000 mg/mL, 50 mg/mL and 8.3 mg/mL for MPT, MTF and THA, respectively, while the melting temperatures are 121, 231 and 273 °C for MPT, MTF and THA, respectively.
(Co-)polymer synthesis
Full experimental details are included in the supporting information. Five homopolymers and three co-polymers were synthesized for this study based on our recently published optimized protocol for preparing defined high molar mass PAOx (Table 1) [34]. The homopolymers PnPrOx with 50 kg/mol, PnPrOx with 80 kg/mol and PcPrOx with 50 kg/mol were structurally similar. PEtOx with 50 kg/mol was also considered in this study due to its higher hydrophilicity, while a more hydrophobic polymer containing sec-butyl groups in the side chains was also included (PsecBuOx with 50 kg/mol). Furthermore, three co-polymers containing different ratios of 2-ethyl-2-oxazoline and 2-n-propyl-2-oxazoline were synthesized. These co-polymers presented similar molar masses of ~50 kg/mol (Table 1) with different hydrophobic properties depending on the ratio of the monomers. On the other hand, polymer blends were prepared by mixing PnPrOx (50 kDa) and PEtOx using a mortar and pestle. 1H NMR spectroscopy, size exclusion chromatography, thermal gravimetric analysis and differential scanning calorimetry of all polymers are presented in the supplementary data and summarized in Table 1. After polymer synthesis, the (co-)polymers were cryo-milled using liquid nitrogen in a simple coffee blender to obtain a sufficiently small particle size in order to ensure good homogeneity of the drug/polymer mixtures.
Size exclusion chromatography
Size exclusion chromatography measurements were performed on an Agilent 1260-series instrument equipped with an online degasser, an ISO-pump, an automatic liquid sampler, a thermostatted column compartment at 50 °C equipped with a precolumn and two PL gel 5 μm mixed-D columns in series, a 1260 diode array detector and a 1260 refractive index detector (RID). Measurements were performed in N,N-dimethylacetamide as an eluent containing 50 × 10⁻³ M LiCl to suppress interactions between the analyte and the packing material. The flow rate was set at 0.500 mL/min. To analyse the chromatograms, Agilent Chemstation software was used with a GPC add-on. Molar masses were calculated by the light scattering detector, while dispersity values were calculated against poly(methyl methacrylate) (PMMA) standards.
Light scattering (LS) measurements were performed on a 3-angle static light scattering detector (miniDAWN TREOS, Wyatt Technology). The detector is coupled online to an Agilent 1260 infinity HPLC system (vide DMA-SEC) and used to determine the absolute molar mass of the polymer samples. The measurements were performed at ambient temperature, without a temperature control unit installed. The refractive index (RI) increment (dn/dc) values were either used as reported for certain polymers in N,N-dimethylacetamide containing 50 × 10⁻³ M LiCl or determined via online size-exclusion chromatography (SEC) equipped with an RI detector, which measures the RI increase for a 1-10 mg/mL concentration series of the mentioned polymers. The LS results were further analyzed with the Astra 7 software from Wyatt Technology.

2.3.2. Thermal analysis

Thermogravimetric analysis (TGA) was performed using a TGA 2 (Mettler-Toledo, Switzerland) with a large furnace and an autosampler, using 70 μL alumina crucibles. Samples (5-10 mg) were heated at 10 °C/min from 25 to 800 °C under a nitrogen atmosphere (80 mL/min). Evaluation was performed via the STARe software (Switzerland).
Modulated differential scanning calorimetry (MDSC) (Q2000, TA Instruments, United Kingdom) was performed to study the physical state and the glass transition of all PAOx (co-)polymers using a heating rate of 2 °C/min. The modulation period and amplitude were set at 1 min and 0.32 °C, respectively. Samples (5-10 mg) were placed in Tzero pans (TA Instruments, Belgium) and heated from −10 to 120 °C. Dry nitrogen (50 mL/min) was used to purge the MDSC cell.
Preparation of extrudates by HME
HME was performed on selected PAOx in combination with different APIs (MPT, MTF and THA) using a co-rotating twin-screw extruder. Physical mixtures (70% or 80% drug load, w/w) were extruded using an Xplore micro-compounder (DSM, The Netherlands), operating at 100 rpm and using the processing temperatures that are listed in Table 2. Afterward, part of the extrudates was milled to prepare tablets by compression and the other part was used for injection molding.
Injection molding
After HME, the extrudates were immediately processed into IM tablets via injection molding using a Haake MiniJet System (Thermo Electron, Germany) at a temperature depending on the formulation, as shown in Table 2. During the IM process, an injection pressure of 800 bar for 10 s forces the material into the mold; a post-pressure of 400 bar for 5 s avoids expansion by relaxation of the polymer. Convex tablets were produced (mass: 410 ± 10 mg; diameter: 10 mm; height: 5 mm).
Compression
Both physical mixtures and milled extrudates were compressed using a STYL'One compaction simulator (Medelpharma, France) to produce DC and ME tablets, respectively. The compaction simulator was equipped with a single punch station. A 10 mm round punch set was used to compress convex tablets (350 ± 10 mg) at a compression force of 10 kN for all formulations. A dwell time of 100 ms was used without precompression.
2.6. Tablet characterization

2.6.1. Thermal analysis

DSC (Q2000, TA Instruments, United Kingdom) was performed to evaluate the percentage of drug crystallinity after tablet preparation. Tzero pans (TA Instruments, Belgium) were filled with approximately 5-10 mg of sample and placed in the DSC equipment after being non-hermetically sealed with Tzero lids using a Tzero Press (TA Instruments, United Kingdom). An empty Tzero pan was used as a reference. A single heating run was performed at a heating rate of 10 °C/min from 0 to 150, 260 and 280 °C for MPT, MTF and THA formulations, respectively. The DSC apparatus was equipped with a refrigerated cooling system and dry nitrogen at a flow rate of 50 mL/min. The percentage of drug crystallinity was calculated by means of Equation (1) using the melt enthalpy obtained in the DSC experiments.
Xc = (ΔH1 / ΔH2) × 100   (1)

where Xc is the percentage of drug crystallinity (%), ΔH1 is the melt enthalpy of the drug in the tablet (J/g) and ΔH2 is the melt enthalpy of the drug in the physical mixture (J/g).
Content uniformity
The content uniformity test was performed to check the degree of dose uniformity in the prepared tablets. A UV/VIS spectroscopy method was used for the determination of the API content in the prepared tablets. A pre-weighed tablet was crushed and transferred into a 100 mL volumetric flask containing simulated intestinal fluid without enzymes (SIF, pH 6.8). After shaking the flask for 48 h, the concentrate was filtered and diluted. The absorbance was measured using a UV-1650PC spectrophotometer (Shimadzu Benelux, Belgium) at a wavelength of 234, 223 and 273 nm for MTF, MPT and THA, respectively. The test was done in triplicate. Results were evaluated according to the European Pharmacopoeia [35].
Disintegration
Disintegration tests were performed per the USP standards [36] using a DIST-3 disintegration tester (Pharma Test, Germany) with discs. All experiments were conducted over 8 h in simulated intestinal fluid (SIF, pH 6.8) at a temperature of 37 °C. The disintegration time of 3 individual tablets was recorded as the time until no tablet remained on the mesh.
Table 2
Overview of the formulation compositions and of the extrusion (T_ext) and injection molding (T_IM) temperatures (°C). F, FM and FC stand for formulations prepared using a homopolymer, polymer mixtures and co-polymers, respectively.
Porosity
The porosity of the tablets (n = 3) was calculated with Equation (2) by comparing the apparent density of the tablet, obtained by dividing its mass by its volume, with the true density of the tablet material:

porosity (%) = (1 − ρ_apparent / ρ_true) × 100   (2)

The true density was measured using an AccuPyc 1330 helium pycnometer (Micromeritics, USA) at an equilibration rate of 0.0050 psig/min with the number of purges set to 10. The tablet volume (Equation (3)) was calculated from four dimensions measured with a 96/0226 projection microscope (Reichert, Austria), as shown in Fig. 4.
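A minimal sketch of the porosity calculation of Equation (2) is given below; the apparent density follows from the measured mass and the microscopically determined volume, and the numbers in the example are purely illustrative, not measured values from this study.

```python
def tablet_porosity(mass_mg: float, volume_mm3: float, true_density_mg_mm3: float) -> float:
    """Porosity (%) from the apparent density (mass/volume) and the pycnometric true density (Eq. 2)."""
    apparent_density = mass_mg / volume_mm3
    return (1.0 - apparent_density / true_density_mg_mm3) * 100.0

# Illustrative example: a 410 mg IM tablet with a volume of 330 mm^3
# and a true density of 1.30 mg/mm^3 gives a porosity of roughly 4%.
print(round(tablet_porosity(410, 330, 1.30), 1))
```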
In vitro dissolution
The impact of formulation composition and processing technique on the in vitro release was determined using USP apparatus II (paddle) on a VK 7010 dissolution system (VanKel Industries, USA) at a paddle speed of 100 rpm, with all tablets tested in triplicate (n = 3).
The similarity factor f2 was used to measure the similarity between the release profiles of two different formulations. As reported by Shah et al., the similarity factor is calculated using Equation (4) [37], taking into consideration that only one sample point with a cumulative drug release higher than 85% may be included:

f2 = 50 · log10{[1 + (1/n) Σ (R_t − S_t)²]^(−0.5) × 100}   (4)

where R_t and S_t are the cumulative percentages of drug released at each of the n selected time points from the reference and the test sample, respectively. Two release profiles are considered identical when f2 = 100, while an average difference of 10% at all measured time points results in an f2 value of 50. Dissolution profiles with f2 values higher than 50 are considered similar.
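A minimal implementation of the similarity factor of Equation (4) is sketched below; it assumes that the reference and test profiles are sampled at the same time points and expressed as cumulative percentages released, and the example profiles are illustrative only.

```python
import numpy as np

def similarity_factor_f2(reference, test):
    """f2 = 50*log10(100/sqrt(1 + mean squared difference)) between two release profiles (Eq. 4)."""
    r = np.asarray(reference, dtype=float)
    s = np.asarray(test, dtype=float)
    msd = np.mean((r - s) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Illustrative profiles (% released at matching time points); f2 > 50 indicates similar release.
ref = [10, 25, 45, 70, 85]
tst = [12, 28, 50, 74, 88]
print(round(similarity_factor_f2(ref, tst), 1))
```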
2.6.6. Micro-computed tomography analysis (μCT)

High-resolution X-ray tomography (μCT) was used to study the effect of hydrophilic PEtOx on the pore distribution of IM tablets of F8 and FM3 before and after dissolution. Imaging was performed using the High Energy CT system optimized for research at the Ghent University Centre for X-ray Tomography (UGCT) [38], in which the source was operated at a voltage of 90 kV and a target power of 10 W. 2400 projections were taken with an exposure time of 1 s per image over a full 360° rotation. All scans were reconstructed using Octopus Reconstruction into a 3D volume (stored as a stack of 2D images) at a voxel size of (5.47 μm)³. At the given tube settings, the spatial resolution is almost not affected by the focal spot size. The in-house developed Octopus Analysis software package was used for 3D analysis of the reconstructed data to characterize the tablet porosity and pore distribution [39,40]. To segment the pore structure, thresholding was performed using the Octopus Analysis software. To identify the individual pores, labelling and watershed separation were performed. The total porosity was measured as the ratio of a tablet's pore volume to its total volume. To analyse the size of the pores, the maximum opening and the equivalent diameter were used: the maximum opening is the diameter of the largest sphere that fits in the pore space, and the equivalent diameter is the diameter of a sphere with the same volume as the pore space. Finally, the pores were classified based on their size (maximum opening), and VGStudio Max (Volume Graphics, Heidelberg, Germany) was used to visualize the virtual tablet in 3D.
In vivo experiments
In vivo studies were performed after the approval of the ethical committee of the Faculty of Veterinary Medicine (application ECD 2018-32).
Two IM tablets were studied to investigate the influence of the polymer grade on the in vivo release: PsecBuOx:MTF (30:70%, w/w; F5) and PsecBuOx:PEtOx:MTF (22.5:7.5:70%, w/w; F7). The commercially available Glucophage™ SR 500 mg (½ tablet), previously tested by our research group, was used as a sustained-release reference formulation [41]. Tablets were administered orally with 20 mL water to beagle dogs after a wash-out period of 1 week. The dogs fasted for 12 h before tablet administration, with access to water only. Blank blood samples were collected before tablet administration. Blood samples were collected in dry heparinized tubes at 1, 2, 3, 4, 5, 6, 8, 12, 18 and 24 h post-administration, centrifuged for 10 min at 1500 g, and the plasma was frozen at −25 °C until analysis. Formulations based on PAOx were recovered from the faeces to determine the remaining amount of MTF, and the gastro-intestinal residence time was also recorded. Plasma MTF concentrations were determined by HPLC: the flow rate of the mobile phase (acetonitrile:potassium dihydrogen phosphate buffer pH 6.5 (34:66%, v/v) with 3 mM SDS) was set at 0.7 mL/min, and the detection wavelength was 236 nm.
Data analysis
The chromatograms were recorded and processed with the D-7000 HSM Chromatography Data Manager software package. The peak plasma concentration (Cmax), the time to reach Cmax (Tmax), the half-value duration (HVDt50%Cmax) and the area under the curve (AUC0-12h) were calculated from the plasma concentration curve. To compare the extent of sustained release between the tablets, the R_D ratio was calculated by dividing the HVDt50%Cmax of the tested tablets by the HVDt50%Cmax of an immediate-release formulation derived from the literature [43]. The half-value duration (HVD) is defined as the total time during which the plasma concentration is above one-half of Cmax. Low, intermediate and strong sustained-release characteristics are defined as R_D ratios of 1.5, 2 and > 3, respectively.
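The pharmacokinetic parameters above can be derived from the plasma concentration-time data along the lines of the sketch below; the immediate-release HVD used for R_D is an assumed placeholder standing in for the literature value, and the example input values are hypothetical.

```python
import numpy as np

def pk_parameters(t_h, conc, hvd_immediate_release_h):
    """Cmax, Tmax, AUC, HVD (time above Cmax/2) and the R_D ratio from a concentration-time profile."""
    t = np.asarray(t_h, dtype=float)
    c = np.asarray(conc, dtype=float)

    cmax = c.max()
    tmax = t[c.argmax()]
    auc = np.trapz(c, t)                       # trapezoidal AUC over the sampling window

    # HVD: total time the linearly interpolated profile stays above Cmax/2.
    t_fine = np.linspace(t[0], t[-1], 2001)
    c_fine = np.interp(t_fine, t, c)
    hvd = np.trapz((c_fine >= cmax / 2).astype(float), t_fine)

    rd = hvd / hvd_immediate_release_h         # R_D > 3 indicates strong sustained release
    return cmax, tmax, auc, hvd, rd

# Hypothetical usage:
# pk_parameters([1, 2, 3, 4, 6, 8, 12, 24], [0.2, 0.6, 0.9, 1.0, 0.8, 0.6, 0.3, 0.1],
#               hvd_immediate_release_h=2.0)
```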
Outcomes were statistically analyzed by repeated-measures ANOVA (univariate analysis) using SPSS 27 (SPSS, Chicago, USA). To compare the effects of the different formulations on the pharmacokinetic parameters, multiple comparisons among pairs of means were performed using a Bonferroni post-hoc test with p < 0.05 as the significance level.
Polymer synthesis and characterization
The synthesis of pure and defined high molar mass PAOx is quite challenging, but has recently been achieved through careful optimization of the polymerization conditions [34]. These optimized conditions were applied here for the preparation of PEtOx, PnPrOx, PcPrOx and PsecBuOx polymers as well as PEtOx-stat-PnPrOx co-polymers with a targeted molar mass around 50 kg/mol, as well as 80 kg/mol for PnPrOx (see supporting information for full experimental details). The co-polymers were prepared to allow a direct comparison with polymer blends consisting of PEtOx and PnPrOx. The PsecBuOx homopolymer was selected because the branched, racemic sec-butyl side chain leads to an amorphous polymer with a higher glass transition temperature (Tg) than poly(2-n-butyl-2-oxazoline), which has a Tg of 25 °C and is semi-crystalline [44,45]. SEC analysis confirmed the relatively low dispersity of all synthesized polymers. PsecBuOx was synthesized by bulk polymerization and had a somewhat higher dispersity of 1.56. The calculated molar masses for the synthesized polymers were all rather close to the targeted molar mass (Table 1). TGA data indicated that all polymers were stable up to at least 310 °C, confirming the high thermal stability of the PAOx used in this study. MDSC data showed no endothermic peaks, confirming the amorphous structure of these polymer grades, with the glass transitions reported in Table 1.
Processability of different PAOx grades
PAOx are highly stable polymers with physicochemical properties and solubilities that make them highly processable using diverse techniques as all formulations could be successfully processed via HME, IM and DC without the addition of plasticizers or other excipients. Different PAOx formulations could be processed with various drug loads during preliminary extrusion experiments, indicating that processing via HME was possible at 70% w/w drug load. Minimum processing temperatures were used to obtain good quality extrudates, keep the torque value below 80% of the maximum torque of the extruder (5000 N), and allow the melt to flow into the tablet molds during IM. Table 2 displays the extrusion and injection molding temperatures that were used.
Since the different PAOx have different Tg values (Table 1), the extrusion temperature was adjusted based on the PAOx grade. PsecBuOx formulations were processed at a higher temperature than PnPrOx due to the higher Tg of PsecBuOx. Similarly, formulations containing more PEtOx required higher extrusion temperatures based on the higher Tg of PEtOx compared to PnPrOx. The processing temperature also depended on the model drug: MTF and THA formulations were extruded at a higher temperature than the MPT-based formulations due to the higher melting temperature of MTF (231 °C) and THA (273 °C) compared to MPT (121 °C), requiring more energy to soften the high drug-loaded mixtures. Note that for the preparation of the sustained-release formulations, the aim was to keep the processing temperature below the melting temperature of the API to retain the crystalline form of the drug in the final tablets.
Tablet quality
All tablets were successfully prepared via DC, HME/milling/ compression and HME/IM to produce DC tablets, ME tablets and IM tablets, respectively. The drug content of each individual tablet was between 96% and 104% of the average content, which complies with the acceptance criteria of the European Pharmacopoeia. Good content uniformity was obtained due to adequate mixing before extrusion or compression, while HME provided additional intensive mixing due to the shear provided by the screws.
PnPrOx matrix
During the first series of experiments, PnPrOx (50 kg/mol) was chosen as a matrix excipient and processed with 70% w/w of MPT, MTF and THA. The drug release kinetics depended on the matrix composition, the manufacturing technique, and the model drug (Fig. 5). The release of MPT from the PnPrOx matrix (F1) was complete within 1, 4, and 4 h for DC, ME, and IM tablets, respectively (Fig. 5A). It was previously reported by different research groups that tablets prepared by DC show faster release compared to tablets prepared by heat-processing techniques such as HME and IM [13,15,46]. This is due to the densification during heat-involved processing, yielding tablets with fewer pores, less water penetration and slower drug release. The porosity of the F1 formulations was 17.0 ± 2.3%, 11.6 ± 2.2% and 2.2 ± 0.8% for DC, ME and IM tablets, respectively. Moreover, the extrusion process provides intensive mixing of crystalline drug particles with the release-retarding matrix excipient, resulting in more sustained release profiles and reduced intergranular (pores between particles) and intragranular (pores within particles) porosity. In a study by Crowley et al., ethylcellulose was used as a matrix excipient in tablets containing 30% w/w of the highly water-soluble drug guaifenesin. The study revealed fewer pores and a smaller median pore radius for IM tablets than DC tablets regardless of the compression force used [15]. Quinten et al. indicated a faster burst drug release from DC formulations compared to IM formulations [13], due to the denser matrix obtained after injection molding. In another study, the release of caffeine from DC, IM and 3D-printed tablets was compared. The formulation comprised 28.5% polyvinylpyrrolidone, 57% polycaprolactone, 9.5% polyethylene oxide and 5% w/w caffeine. Results revealed an immediate release behaviour from the DC tablets, whereas the IM tablet showed sustained release over 48 h [47]. Fuenmayor et al. concluded that the processing technique affected the final tablet quality, including the release kinetics, and that tablets prepared by IM were densely packed, exhibiting more extended-release profiles compared to tablets prepared by DC. However, in the current study IM tablets of F1 had a release profile similar to the ME tablets (Fig. 5A). This was attributed to the loss of MPT crystallinity in the matrix after the second heat treatment during IM, while DC and ME tablets showed 100% MPT crystallinity. IM was performed at a temperature of 130 °C (Table 2), which was higher than the melting temperature of MPT, preventing a fraction (±15%) of the MPT content from recrystallizing upon cooling (Table 3).
To tune the release of MPT from the PnPrOx matrix, a higher molar mass PnPrOx (80 kg/mol) was also tested (F2) to study the effect of polymer molecular weight on the release kinetics. As shown in Fig. 5A, there was no significant difference (f2 value > 50) between tablets prepared from 50 or 80 kDa PnPrOx, regardless of the processing technique.
On the other hand, the release of MTF from the PnPrOx matrix (F4) displayed different release patterns with the highest release rate for the DC tablet, followed by the ME and IM tablet (Fig. 5B). IM tablets showed slower release due to matrix densification after tablet preparation using high temperature and pressure, leading to a lower porosity, tortuosity, and water penetration. The porosity of DC, ME and IM tablets was 19.0 ± 2.7%, 14.5 ± 1.9% and 3.2 ± 1.1%, respectively. As the processing temperature of MTF-based formulations was 30-70 °C below the melting temperature of MTF, this prevented crystallinity loss during heat processing. DC tablets prepared using the PnPrOx matrix excipient showed a disintegration time of 25 ± 5 and 60 ± 15 min for MPT and MTF DC tablets, respectively. In contrast, ME and IM tablets did not disintegrate throughout the 12 h test period. DC tablet disintegration might be correlated with the low glass transition (17.8 °C) of PnPrOx before heat processing, which is reflected by the first MDSC heating run (Figure S9), making the PnPrOx tablet rubbery at the temperature of the test medium (37 °C) and more prone to mechanical stress during disintegration testing.
However, PnPrOx showed a higher glass transition after heat treatment, which is reflected by the second MDSC heating run, making the polymer in the tablet core glassy at the disintegration test temperature and less affected by the mechanical stress. The release of the more hydrophobic THA (F8) was significantly slower and also depended on the processing technique, with 76, 51 and 14% THA released from the PnPrOx matrix after 24 h from DC, ME and IM tablets, respectively. However, complete THA release from the hydrophobic PnPrOx matrix was not obtained, regardless of the processing technique. THA tablets prepared by DC showed a faster release behaviour due to their higher porosity (17.9 ± 1.9%) compared to tablets formed by IM (1.9 ± 0.9%). While sustained but incomplete THA release was achieved from a DC tablet, a higher drug load (i.e. 80% w/w) enhanced the drug release rate, with complete THA release after 12 h (Fig. 5C; this panel also shows the in vitro release of THA DC tablets from a PnPrOx matrix with 80% w/w drug load and from a PcPrOx matrix with 70% w/w drug load). However, the noticeable burst release of the formulation with 80% w/w THA indicated that a 70% w/w THA load was the maximum concentration resulting in sustained THA release, as a higher drug load increased the release rate due to the formation of more pores in the hydrophobic matrix. In addition, a lower fraction of hydrophobic polymer resulted in easier wetting of the tablet. Moreover, DSC data indicated that THA remained mainly crystalline after processing (the crystallinity varying between 97.3 and 99.1%), regardless of the processing technique and the formulation composition.
Theophylline release from PcPrOx matrix
PAOx are tuneable by modifying the side chain at the 2-position of the 2-oxazoline monomer; this allows control of the hydrophilicity and the lower critical solution temperature (LCST) [19]. This means that they are fully soluble at low temperatures and phase separate at temperatures beyond the LCST. Above the LCST, the polymer chains are dehydrated, collapse and establish intramolecular hydrophobic interactions. The higher hydrophilicity of PcPrOx compared to PnPrOx is reflected by the higher LCST (25 °C for PnPrOx and 30 °C for PcPrOx), which is a consequence of the more compact arrangement of the cyclic side chain (Fig. 2). The cyclic topology makes the rotation of the cyclopropyl group much more restricted than that of a linear propyl group [24,48]. A complete release of THA was not achieved using the PnPrOx matrix. Therefore, PcPrOx (F10) was evaluated as matrix excipient to tune the release by preparing DC tablets with 70% w/w THA. As shown in Fig. 5C, the release of THA from PcPrOx tablets was significantly faster compared to PnPrOx tablets, indicating the importance of minor changes in the polymer structure that influence the polymer hydrophilicity.
Theophylline release from PnPrOx and PEtOx polymer mixtures and co-polymers
As the tablets consisting of PnPrOx and THA did not reach full drug release, polymer blends and co-polymers consisting of PEtOx and PnPrOx were investigated to tune the in vitro release of THA from the tablets by incorporation of the more hydrophilic PEtOx. Firstly, the hydrophobic PnPrOx was mixed with the hydrophilic PEtOx in different ratios to enhance water penetration. To investigate the impact of having a physical mixture of two polymers versus the distribution of both ethyl and n-propyl units in a single chain, co-polymers of PnPrOx and PEtOx were prepared. These co-polymers were synthesized as described in the supplementary data and were formulated with 70% w/w THA. Table 2 summarizes the different formulations used (FC1-FC3 indicating co-polymers and FM1-FM3 indicating polymer mixtures).
Introducing 7.5% PEtOx into the PnPrOx release-retarding matrix (FM3) resulted in a considerable increase in the drug release, whereby 100, 88 and 45% THA was released from the DC, ME and IM tablets within 24 h, respectively. Moreover, the release of THA from FC3 was 100, 76 and 33% within 24 h from DC, ME and IM tablets, respectively (Fig. 6). Interestingly, the release of THA from DC, ME and IM tablets of the polymer mixtures and co-polymer formulations with the same polymer ratios was similar, with f2 values > 50. These results indicate that the enhanced solvation of the EtOx units increases water access to the tablet, leading to faster release. The similar release kinetics for physical mixtures and co-polymers indicate that the small amount of PEtOx in the physical mixtures is most likely retained in the tablet, as the co-polymers are not water-soluble at 37 °C. In general, THA release from formulations with a higher PEtOx content was faster regardless of the processing technique (Fig. 6), as the hydrophilic PEtOx enhanced hydration of the hydrophobic PnPrOx matrix. PEtOx is hydrated first, facilitating drug release, which consequently leads to pore formation within the matrix. These channels increased the matrix permeability to the drug. This was supported by X-ray tomography images showing the pore distribution of IM tablets of F8 and FM3 before and after dissolution, with a significant increase in the maximum pore opening of FM3 after 24 h of dissolution (Fig. 7). The porosity increased from 0.08 to 0.27% for F8 and from 0.09 to 11.48% for FM3.
PsecBuOx matrix
The next series of experiments used PsecBuOx as a matrix excipient to tune the release of the highly soluble APIs (MPT and MTF) that could not sufficiently be sustained with PnPrOx. Since PsecBuOx is more hydrophobic than PnPrOx, as shown in Fig. 2, a more sustained release was anticipated for these more hydrophilic drugs. None of the PsecBuOx-based formulations disintegrated within 12 h. This was due to the higher Tg (48.6 °C) compared to PnPrOx (17.8 °C), making the polymer glassy at the test temperature and less prone to mechanical stress during disintegration. Firstly, the in vitro release of MPT from the PsecBuOx matrix was found to be complete within 2, 12 and 0.5 h for DC, ME and IM tablets, respectively (Fig. 8A). Release from the PsecBuOx matrix was slower for the DC and ME tablets compared to the PnPrOx matrix due to the higher hydrophobicity of PsecBuOx. However, the significantly faster release from the IM tablets could be ascribed to the loss of MPT crystallinity after the second heat treatment, as PsecBuOx had to be processed at higher temperatures than the PnPrOx-based formulations due to its higher glass transition (Table 1). IM of F3 was performed at 30 °C above the melting temperature of MPT (121 °C). As a result, the crystalline fraction of MPT significantly dropped to 40% (Table 3). On the other hand, PsecBuOx exhibited an excellent ability to sustain the release of MTF (Fig. 8B), as no loss of crystallinity was observed after processing due to the higher melting temperature of MTF. The in vitro release of MTF was complete after 6 and 16 h for DC and ME tablets, respectively. However, a complete MTF release was not achieved within 24 h from the IM tablets. Subsequently, introducing a low concentration (6% w/w) of the hydrophilic PEtOx (F6) significantly improved MTF release, as shown in Fig. 8B. Moreover, a higher content (7.5% w/w) of hydrophilic PEtOx (F7) was correlated with a faster MTF release from the IM tablet. This also indicated the ability to finely control the water penetration into the tablets by homogeneously blending the PsecBuOx and PEtOx homopolymers, allowing fine control of pore formation, presumably due to drug release in these high-drug-loading tablets. These results clearly demonstrate that the release kinetics of drugs with different aqueous solubilities can be easily steered by adjusting the ratio of PAOx with different hydrophilicity in physical mixtures.
The IM tablets of F5 and F7 with 70% w/w MTF were selected as the most promising formulations for an in vivo study, for which the influence of the pH change in the gastro-intestinal tract was first studied in vitro by evaluating the drug release in simulated gastric fluid (SGF) for 2 h, followed by simulated intestinal fluid (SIF) for 22 h. As demonstrated in Fig. 9, the drug release was pH-independent, and the release of MTF in SGF and SIF was similar, with an f2 value > 50. In addition, alcohol-induced dose dumping was evaluated as recommended by the EMA [49]. Co-ingesting alcoholic beverages with the medication might disrupt the sustained release mechanism of formulations and result in dose dumping and safety issues. Thus, SIF media containing 5, 10 and 20% (v/v) ethanol were used for testing the IM tablets of F5 and F7. The release of MTF in SIF with 5 and 10% (v/v) ethanol did not significantly differ from the release in non-alcoholic SIF media (f2 value > 50). However, dose dumping occurred using SIF with an extremely high alcohol concentration of 20% (v/v), indicating a sharp solubility increase of PsecBuOx at high ethanol concentrations. Similar findings were observed for F7 (data not shown).
In vivo evaluation
The most promising IM tablets (Fig. 8B) were used to further investigate the in vivo performance of PAOx-based high drug-loading sustained release tablets. Thus, in vivo testing was performed on F5 and F7 containing 70% w/w MTF, using a commercially available Glucophage™ (FR) formulation, previously tested by our research group, as clinically approved reference [41]. The mean MTF plasma concentration-time profiles after oral administration of these formulations to dogs are illustrated in Fig. 10, while the mean pharmacokinetic parameters (AUC, Cmax, Tmax, HVDt50%Cmax and RD) are reported in Table 4. Despite the slow in vitro dissolution rates of FR tablets, a faster in vivo drug release was observed (Fig. 10), with a mean Cmax of 2.4 μg/mL after 3.0 h for FR. FR tablets form a surface gel layer in contact with water, which is highly sensitive to gastro-intestinal shear forces. The PsecBuOx-based IM tablets (F5) revealed a Cmax of 1.2 μg/mL after 5.3 h, whereby the slow in vitro release from the PsecBuOx formulation (F5) correlated with the low in vivo bioavailability, low Cmax and incomplete MTF release. However, the addition of a small PEtOx fraction (F7) significantly increased Cmax to 2.2 μg/mL after 4.2 h. The addition of PEtOx (7.5% w/w) to the PsecBuOx matrix enhanced the drug release from the PAOx matrix and significantly improved the in vivo bioavailability. The PAOx-based tablets were still intact after 24 h and could be recovered from the faeces, containing 56 and 18% of the MTF content in case of the F5 and F7 IM tablets, respectively. In contrast, no FR tablets could be recovered from the faeces after oral administration, indicating that these tablets were eroded by gastro-intestinal motility. The gastro-intestinal residence times of F5 and F7 were 24 and 26 h, respectively.
The RD values were calculated to indicate the extent of sustained release based on the HVDt50%Cmax value (3.2 h) of an immediate-release formulation administered to beagle dogs [43]. The RD values of 1.7, 2.9 and 3.2 indicated low-to-intermediate, strong and strong sustained-release properties of FR, F5 and F7, respectively.
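For readers less familiar with these summary measures, the sketch below computes Cmax, Tmax, AUC (linear trapezoidal rule) and HVDt50%Cmax, i.e. the total time the plasma concentration stays at or above half of Cmax, from a concentration-time profile. The profile is hypothetical, and treating RD as the ratio of a formulation's HVD to the 3.2 h HVD of the immediate-release reference is an assumption made here for illustration only; the study's own definition may differ.

```python
def pk_summary(t, c):
    """Cmax, Tmax, AUC(0-last) and HVD (time with concentration >= Cmax/2)."""
    cmax = max(c)
    tmax = t[c.index(cmax)]
    auc = sum((t[i + 1] - t[i]) * (c[i] + c[i + 1]) / 2 for i in range(len(t) - 1))
    half = cmax / 2
    hvd = 0.0
    for i in range(len(t) - 1):
        lo, hi = min(c[i], c[i + 1]), max(c[i], c[i + 1])
        dt = t[i + 1] - t[i]
        if hi < half:
            continue                                 # whole segment below Cmax/2
        if lo >= half:
            hvd += dt                                # whole segment at or above Cmax/2
        else:
            hvd += dt * (hi - half) / (hi - lo)      # segment crosses Cmax/2 once
    return cmax, tmax, auc, hvd

# Hypothetical sustained-release profile: times (h), concentrations (ug/mL).
t = [0, 1, 2, 4, 6, 8, 12, 24]
c = [0.0, 0.4, 0.9, 1.6, 2.1, 2.0, 1.4, 0.3]
cmax, tmax, auc, hvd = pk_summary(t, c)
rd = hvd / 3.2   # assumed: ratio to the 3.2 h HVD of the immediate-release reference
print(cmax, tmax, round(auc, 1), round(hvd, 1), round(rd, 1))
```

Under that assumed reading, the reported RD values of 1.7, 2.9 and 3.2 would correspond to HVDs of roughly 5.4, 9.3 and 10.2 h for FR, F5 and F7, respectively.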
Conclusion
PAOx were identified as promising excipients for preparing sustained-release matrix tablets of highly dosed (70% w/w drug), highly soluble model drugs, applying DC, HME, or IM as manufacturing techniques. Changing the alkyl group on the polymer side-chain to control the polymer solubility behaviour and using polymer mixtures or co-polymers significantly impacted the release rate, allowing its optimization as a function of the application and the API. Polymer mixtures and co-polymers with the same polymer ratios showed similar release profiles, indicating that the PEtOx fraction did not dissolve from the physical mixture in the tablets. HME followed by IM was found to be a promising method to prepare sustained release dosage forms due to the intensive densification of the matrix. DC as a manufacturing technique also showed promising sustained-release results for the slightly soluble THA. The versatile potential of PAOx matrices was also confirmed in vivo, where the sustained release properties of IM tablets were adjusted by mixing hydrophilic PEtOx with the thermoresponsive hydrophobic PsecBuOx. Moreover, PAOx formulations showed superior sustained-release capacity compared to the commercially available FR sustained release formulation. We expect this system to be extendable to different drugs and foresee a dynamic future for these polymers in sustained-release oral drug formulation.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Chris Vervaet has patent #US2021/015926A1 pending to Universiteit Gent.
Patient-reported outcome measures for pain in women with pelvic floor disorders: a systematic review
Introduction and hypothesis Patient-reported outcome measures (PROMs) are helpful instruments when measuring and reporting changes in patient health status (Al Sayah et al. J Patient Rep Outcomes 5 (Suppl 2):99, 2021) such as the health-related quality of life (HrQoL) of women with pelvic organ prolapse (POP) and stress urinary incontinence (SUI). The Australasian Pelvic Floor Procedure Registry (APFPR) aims to increase capacity for women to report surgical outcomes through the collection of HrQoL data (Ruseckaite et al. Qual Life Res. 2021) but currently lacks a pain-specific PROM for women with pelvic floor disorders (PFDs), particularly POP and SUI. This review aims to systematically review the existing literature and identify instruments that measure pain in women with POP and SUI for inclusion within the APFPR, which reports on complications from these conditions. Methods We conducted a literature search on OVID MEDLINE, Embase, CINAHL, PsycINFO and EMCARE databases in addition to Google Scholar and grey literature to identify studies from inception to April 2021. Full-text studies were included if they used PROMs to measure pain in women with POP and SUI. Two authors independently screened articles, extracted data and assessed methodological quality. Results From 2001 studies, 23 publications describing 19 different PROMs were included for analysis. Eight of these instruments were specific to the pelvic floor; four were only specific to pain and used across multiple disorders; three were generic quality of life instruments and four were other non-validated instruments such as focus group interviews. These instruments were not specific to pain in women with POP or SUI, as they did not identify all relevant domains such as the sensation, region and duration of pain, or incidents where onset of pain occurs. Conclusions The findings of this review suggest there are no current PROMs that are suitable pain-specific instruments for women with POP or SUI. This knowledge may inform and assist in the development of a new PROM to be implemented into the APFPR. Supplementary Information The online version contains supplementary material available at 10.1007/s00192-022-05126-4.
Introduction
Pelvic floor disorders (PFDs) involve dysfunction of the muscles within the pelvic floor, where the pelvic muscles weaken or tighten, leading to complications [1]. These complications can include stress urinary incontinence (SUI) and pelvic organ prolapse (POP). The International Urogynecological Association (IUGA) and the International Continence Society (ICS) define POP as the descent of one or more of the anterior vaginal wall, posterior vaginal wall, uterus or apex of the vagina [2]. In addition, SUI refers to the involuntary loss of urine on effort or physical exertion [2]. In Australia, up to 50% of women are affected by SUI and 9% are symptomatic for POP [3], with a 19% lifetime risk of requiring a pelvic floor reconstructive procedure [4]. Until recently, it was estimated that approximately 25% of the surgical interventions for SUI and POP involved the use of a mesh product, with an estimated 150,000 mesh devices implanted in Australia since 1998 [5].
A number of women have reported adverse events such as chronic pain and erosion of mesh into the vagina [6] in response to undergoing pelvic floor surgical procedures involving transvaginal mesh implants. Women with SUI, and those that have complications following surgery for this disorder, have significantly poorer health-related quality of life (HrQoL) than their counterparts without SUI and pain due to surgery. As HrQoL is subject to the patients' experience and personal beliefs, it is best described by patients themselves through patient-reported outcome measures (PROMs) [7].
A PROM is defined by the US Food and Drug Administration (FDA) as a "measurement of patient health status elicited directly from the patient" [7]. Many PROMs have been developed to measure HrQoL and can be either generic or specific to a condition, covering several specific domains such as fatigue, depression and pain [8,9]. Registries are a proficient means of collecting disease-related PROMs as they routinely accumulate data from a large group of patients and thus can evaluate specified outcomes for a population [10]. The Australasian Pelvic Floor Procedure Registry (APFPR) was established in 2019 following a Senate inquiry into complications surrounding pelvic floor procedures that included pain and erosion of mesh into the vagina [11]. Due to the sometimes distressing and complex experience of pain from PFDs or complications associated with POP and SUI surgery, PROMs that measure an array of pain domains by capturing the type and range experienced in these circumstances can support early identification of relevant pain and the clinical management of patients undergoing these procedures [12]. The registry, which aims to provide support to women to report health outcomes regarding POP and SUI, would therefore benefit immensely from the inclusion of a pain-specific PROM.
Following an acceptability study conducted by the APFPR of PROMs in women following procedures for POP and SUI, it was found that women did not believe that current pain instruments were suitable for the registry [13]. Current PROMs from this study failed to recognize the sensation, region or duration of pain, or incidents where onset of pain occurs in women treated surgically for POP or SUI. While existing PROMs may have aspects that are relevant, there is not yet an instrument that covers all of these domains or where all questions are relevant for these groups of patients. Pain following surgery for POP or SUI is complex as it can exist for a variety of reasons including patient-related factors, the underlying conditions of the disorders, post-operative healing or a range of post-surgical complications including mesh exposure, infection, urinary retention and nerve injury [14]. A greater understanding and analysis of the pain may point to the underlying pathophysiology of this symptom, leading to further clinical investigation and appropriate health service management of the underlying cause [15].
The aim of this study was to review the existing literature and to identify whether there is a current PROM that measures pain in adult women suffering from POP or SUI for inclusion in the APFPR, which specifically reports complications from these two conditions.
Materials and methods
This systematic review was performed following the Preferred Reporting Items for Systematic review and Meta-Analysis (PRISMA) guidelines [16]. The databases searched include OVID MEDLINE, Embase, CINAHL, PsycINFO and EMCARE from inception to April 2021. Google Scholar was also searched as grey literature, but no additional papers were found. This review was registered on PROSPERO (ID: CRD42021250117). The initial MEDLINE search strategy included search terms "patient reported outcome measures" OR "patient health questionnaire" OR "self-report" OR "surveys" OR "questionnaires" OR "quality of life" OR "health related quality of life" OR "perception" AND "pelvic floor disorders" OR "pelvic floor dysfunction". After the search strategy was finalized in MEDLINE, it was carried out in other databases and adapted as required using MeSH trees. The detailed search strategy is available as Supplementary material. The search was limited to the English language and human participants only (see Fig. 1).
Eligibility criteria
We included quantitative and qualitative studies focusing on pain and PFDs involving POP and SUI. No restriction on year of publication was applied. Subjects were women, both inpatients and outpatients. Articles involving only male participants were excluded. Studies without a comparator were considered for inclusion. The main outcome of our analysis was to identify and evaluate all existing instruments used to measure pain in women with POP and SUI.
Screening and selection
The first stage of screening involved two reviewers (MR, RR) reading titles and abstracts of all articles identified by the search. Any articles that clearly did not meet the inclusion criteria were removed. Exclusion criteria were studies where the article was not available in the English language as well as conference abstracts and editorials. Full texts of remaining articles were then read by two reviewers (MR, RR). The numbers of studies at each stage of the search were recorded using the PRISMA flow diagram.
Data extraction
A data extraction form was constructed to summarize selected studies in line with the outcomes of the systematic review. The form was tested on a small number of articles and revised as necessary.
The following information was extracted:
• Type of study (cross-sectional, longitudinal, validation, development, review);
• Study population (number of participants, adults);
• Mean age of participants where provided;
• Setting in which PROM(s) administered (inpatient, outpatient, clinical trial);
• PROM(s) used;
• Type of PROM(s) (generic, specific);
• Time points PROM(s) administered (pre- or post-diagnosis, stage of study);
• Method of administration (interview, paper, online);
• Key findings of study.
A descriptive synthesis of the results was undertaken, organized thematically by type, context, frequency, and modes and methods of administration of each measure.
The quality of the studies was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) risk of bias checklist [17]. The COSMIN tool was chosen as it is specifically designed for studies using PROMs. This tool includes assessment of ten domains, and each category was classified as very good, adequate, doubtful or inadequate, if applicable. Results are summarized into a table presenting the lowest score for each property [17].
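A minimal sketch of this "lowest score per property" summary is given below; the ratings shown are hypothetical placeholders rather than the scores assigned in this review.

```python
# COSMIN-style summary: the overall rating of a measurement property is the
# lowest (worst) rating given to any standard assessed for that property.
RANK = {"inadequate": 0, "doubtful": 1, "adequate": 2, "very good": 3}

# Hypothetical ratings per property (placeholders, not this review's scores).
ratings_per_property = {
    "reliability": ["very good", "adequate", "adequate"],
    "hypothesis testing": ["adequate", "adequate"],
    "content validity": ["doubtful", "adequate"],
}

summary = {prop: min(scores, key=RANK.get)   # worst score counts
           for prop, scores in ratings_per_property.items()}
print(summary)
# {'reliability': 'adequate', 'hypothesis testing': 'adequate', 'content validity': 'doubtful'}
```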
Search results
The search yielded 2001 results. After duplicates were deleted, 1672 articles remained. Studies were screened in two phases. An initial screen of titles and abstracts was conducted by two reviewers (MR, RR), which identified 52 articles that fit the inclusion criteria. A further screen of full texts eliminated 29 articles that met the exclusion criteria.
The final number of studies included in the review was 23 articles. The numbers at each stage are outlined in Fig. 1.
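As a simple bookkeeping check, the study-selection counts reported above and in Fig. 1 can be tallied as follows; all figures are taken from the text.

```python
identified = 2001                 # records retrieved from all database searches
after_dedup = 1672                # records remaining after duplicate removal
duplicates_removed = identified - after_dedup                      # 329
title_abstract_included = 52      # retained after title/abstract screening
full_text_excluded = 29           # excluded after full-text review
included = title_abstract_included - full_text_excluded
assert included == 23             # matches the 23 articles included in the review
print(duplicates_removed, included)  # 329 23
```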
Risk of Bias
Two authors (MR, RR) independently assessed the risk of bias of each of the studies following the COSMIN checklist. Several papers in this review did not validate the instruments used in their studies and thus were not critically appraised. The quality of the results was assessed after data extraction and a full risk of bias table can be found in the Supplementary material.
The COSMIN criteria are used to discern whether the psychometric properties of PROMs have been evaluated using rigorous measures so that reviewers can evaluate the quality of the instrument. For example, the most evaluated property was reliability, where a majority (59%) of instruments scored as 'adequate', followed by 25% as 'very good'. This suggests that most instruments that could be tested for reliability were consistent in their measurements of pain. The second most common property was hypothesis testing for construct validity, where 90% of the eligible instruments scored as 'adequate'. This suggests that most PROMs assessed were adequately consistent with hypotheses based on the assumption that the PROM validly measures the construct to be measured.
PROMs identified
We identified 19 different PROMs that focussed on pain across 23 full-text articles included in this review (see Table 1).
Most (n = 12, 52%) of the studies reported both generic and specific instruments. The next most frequent were articles that contained only condition-specific PROMs (n = 8, 35%), followed by publications reporting generic instruments (n = 3, 13%). There was one study reporting a telephone survey [37], one semi-structured interview [42] and one study involving focus groups [32].
Pain-specific instruments
There were four pain-specific instruments; however, none were targeted to the pelvic floor or related/referred pain. Pain-specific instruments were reported in four (17%) articles [22,27,33,43]. The Brief Pain Inventory (BPI) was used once by Tincello et al. [43] to measure 'post-operative pain' on a scale of 0 (no) to 10 (severe). The Pain Catastrophizing Scale (PCS) was used once by Larouche et al. [33] to measure 'pre-operative pain' with a score of 0 to 52. The McGill Pain Questionnaire measured 'post-operative pain' in the same article, ranging from 0 to 10 [33]. The visual analogue scale (VAS) was utilized in three (13%) studies [22,27,43] and thus was the most used pain-specific instrument in this systematic review. The VAS measured 'pain' on a scale of 0 (no pain) to 10 (pain as bad as it could be) [44] pre- [43], peri- and post-operatively [22] in women who underwent surgery for a PFD as well as in women who attended a urology or gynaeco-urology clinic for a PFD [27].
PFD-specific instruments
Nearly half (42%) of the instruments identified in this review were condition-specific, relating to POP or SUI. Most instruments covered just one area of pain, whether that was described as just 'pain' or 'bodily pain', for example; yet two, the electronic Personal Assessment Questionnaire-Pelvic Floor (ePAQ-PF) and the Genitourinary Pain Index (GUPI), covered an array of pain-related domains [45,46]. Dua et al. [40] and Elenskaia et al. [29] utilized the ePAQ-PF to measure vaginal pain, bladder pain, pain relieved by micturition, dragging pain and pain during or after sex. Cella et al. [21] validated the Lower Urinary Tract Dysfunction Research Network Symptom Index-29 (LURN SI-29) against the GUPI, which measured pain at the entrance to the vagina, pain in the vagina, pain in the urethra as well as pain during or after sexual intercourse. Different versions of the Pelvic Floor Distress Inventory (PFDI), including the Pelvic Floor Distress Inventory Questionnaire-Short Form 20 (PFDIQ-SF20) and the Pelvic Floor Disability Index-20 (PFDI-20), were analysed in five different articles for "pain or discomfort in the lower abdomen or genital region" [21,30,32,33,41]. In fact, five instruments across ten studies measured some sort of pain in the abdominal, vaginal or genital region [21, 28-30, 32, 33, 37, 40, 41, 47]. Four PROMs [ePAQ-PF, GUPI, the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ) and the Female Sexual Function Index (FSFI)] were used to assess pain during intercourse across multiple articles [21,24,29,30,35,40,48], and two instruments (ePAQ-PF, LURN SI-29) measured bladder pain specifically [21,29,40]. The URIS-24 [36] did not actually measure pain itself but was validated against the pain domain of the SF-36. The pain items covered by each instrument are summarized in Table 1. Furthermore, the ICIQ-UI-SF version of the ICIQ tool did not measure pelvic floor pain; however, the ICIQ-VS did, measuring "awareness of dragging pain in lower abdomen" [28].
Generic instruments
Three different generic HrQoL instruments were identified across multiple articles. The EuroQol five-dimension, five-level questionnaire (EQ-5D-5L) was used by Tincello et al. [43] and Cashman et al. [49], utilizing one of the five domains in the instrument to measure pain in a "non-specific manner" prior to surgery and 3 months post-surgery. 'Pain/discomfort' associated with urinary incontinence in women was also measured by Dayana et al. [24] using the EQ-5D-5L.
Other non-validated questions to measure pain
Four papers described other instruments and means to assess pain in women with PFDs, including semi-structured interviews and focus groups. Buurman et al. [42] utilized a semi-structured interview in women 1 month and 1 year post-birth to discuss the perception of PFDs, where all women (n = 26) reported pain, including pain that they were not anticipating. Dunivan et al. [32] utilized focus groups to rank adverse effects, one of which included pain that was 'very severe', 'moderately to somewhat severe' or 'not severe'. Furthermore, Larouche et al. [33] used non-validated questions that addressed post-operative pain, and LeBrun et al. [30] described the inclusion of 'patient reported symptoms of pain' after surgery for POP in the Pelvic Floor Disorders Registry (PFDR).
General findings
This was the first systematic review to look at PROMs that measure pain in women with POP or SUI. We found that there were no validated condition-specific instruments that incorporated all of the sensation, region and duration of pain, and that could capture all clinical scenarios where onset of pain occurs in women who suffer from PFDs and their related surgical complications. The identified PROMs were solely PFD-specific instruments (but not focused on pain as a symptom) [21, 26, 28, 29, 35-37, 40, 47, 48, 51], purely pain-specific instruments (these were general and not created with this population in mind) [22,33], generic instruments (lacking specificity to both pain and women with PFDs) [19,23,24,27,43,49,51] or non-validated questions (which could not be standardized to measure pain in another population group) [30,32,33,42]. There is a range of pain types that women with POP or SUI may experience, including pre-operative pain due to a hypertonic pelvic floor/myalgia related to the underlying disorder, typical post-operative pain, atypical post-operative pain due to a surgical complication such as an infection or injury, and longer term pain due to pelvic mesh extrusion or breakdown [52]. The instruments found in this literature review did not capture all of the sensation of pain, the region in the body where the pain culminates, how long it lasts or with what activities the pain onset occurs. These aspects of pain are important as they may suggest specific underlying pathophysiology worthy of further investigation as well as providing a more holistic understanding of the impact of the pain on women's HrQoL.
Pain-specific PROMs
Pain-specific instruments are not suitable to measure pain in women with PFDs as they are not targeted sufficiently to the unique range of issues and pain due to complications that this population can be affected by. The VAS [22,27,43] is a universal pain assessment tool and measured pain from 0 (none) to 10 ('worst pain possible') [53]. In addition, the McGill Pain Questionnaire measured 'post-operative pain' ranging from 0 to 10 [33]. The BPI was another instrument measuring post-operative pain on a scale of 0 (no) to 10 (severe) at discharge home or 24 h after surgery [43]. Furthermore, the PCS measured 'pre-operative pain' with a score of 0 to 52 [33]. Despite allowing for a quick assessment of both acute and chronic pain, the mono-dimensional aspect of these instruments may not be appropriate in revealing the quality of the painful experience or differentiating the types of pain that come with these conditions, or as a result of mesh procedure complications [54]. This could include painful voiding, mesh-related infection or severe vaginal pain aggravated by movements [14]. Additional qualitative descriptions of pain would increase the utility of these instruments, and furthermore, a better understanding of these pain characteristics may aid improvements in managing underlying causes of pain.
PFD-specific PROMs
The UDI-6 instrument, which is condition-specific and used for both POP and SUI, both pre- and post-surgery [37,47], asks the question, "Do you experience pain or discomfort in your lower abdominal, pelvic or genital region?" with a ranking of 0 (not at all) to 3 (greatly). This question is suitable for women with PFDs as it targets a specific region of pain; however, it fails to recognize different types of pain, as women with these conditions suffer from pain ranging from pressure or heaviness deep in the pelvic area to severe, sharp pains and cramping [55]. An instrument such as the UDI-6, gathering data that one "has pain", is not descriptive enough to inform a health professional about pain type [56]. Other PFD-specific instruments were also not suitable to measure pain in women with PFDs. The ePAQ-PF, despite measuring pain in pelvic floor disorders, consists of 120 questions [29,40]. The length of this instrument may result in patient burden. The LURN SI-29 only explores the frequency and time points of bladder pain, failing to uncover the nature and intensity of such pain [21]. Conversely, the GUPI assesses bladder pain symptoms, yet not their onset [21]. In addition, the PFDI, versions of which were included in five studies [21,30,33,38,41], includes questions such as "Do you usually experience heaviness or dullness in the pelvic area?" with a scale of 'no' or 'yes' and, if yes, a pain rating of 1 to 4. In addition, the ICIQ-VS [28] asks "Are you aware of dragging pain in your lower abdomen?" with scores of 0 (never) to 4 (all the time). Ultimately, while questions like these are specific and more targeted to the population, they fail to retrieve information such as when the heaviness, dullness or dragging pain is felt, with what activities, whether the pain is constant or intermittent and when it first started. These types of ad hoc questions regarding pain do not truly capture the entirety of the pain.
Generic PROMs
The generic instruments entailed questions that were rather broad, for example, in the SF-36: "How much bodily pain have you had during the past 4 weeks?" with a rating of 'none' to 'very severe' [50]. In addition, the EQ-5D-5L [24,43,49] asks patients to tick a box about their pain, where having no pain, slight pain, moderate pain, severe pain or extreme pain or discomfort are options. These pain questions may not be suitable for women with PFDs as 'pain or discomfort' in the 'bodily' region is not specific enough and does not inform us of the true sensations of pain. The quality and degree of pain are imperative as complications from procedures may be identified as a source of the patient's pain [57]. Thus, generic PROMs that have pain domains may not be able to capture the full extent of pain suffered by women living with this condition and complications post-surgery [58].
Other non-validated questions to measure pain
Moreover, non-validated questions regarding pain may better encapsulate a patient's personal experience with pelvic floor-related pain. Dunivan et al. [32] incorporated the patients' perspective utilizing focus groups at three separate surgery sites to discuss adverse effects, one including pain. A woman mentioned: "I have pain as well in my rectum. It feels like it gets pinched or something" [32]. The ability to converse with these women, compared to ticking a box in a questionnaire, is a benefit as the health professional can further deduce the true sensation, duration and region of pain, and the incidents where onset of pain occurs in women with PFDs. However, questions within focus groups and other semi-structured interviews are non-validated and therefore may not be reliable or applicable across other groups [59] as they are not standardized. A new validated PROM may be able to flag underlying clinical issues, whereby clinicians can further investigate through patient-specific consultation.
Inclusion of pain instrument in the APFPR
Following review of the available pain questionnaires by clinicians and consumers, it was considered that given the significance of pain as a potential indicator of pathophysiology, as well as its impact on women's HrQoL, there is a need for a new pain-specific PROM in the APFPR for women with POP or SUI. This review of the literature has confirmed that existing validated tools do not meet this need. The inclusion of a pain-related PROM into the APFPR will allow for further investigation of pain, especially as a complication post-surgery, and thus a more nuanced understanding of the impact of the pain on a woman's HrQoL. Consequently, a new PROM developed for and included in the APFPR focusing on accurately measuring pain for POP and SUI could improve the quality of care and QoL of women living with these disorders. The development of a new PROM could be achieved through focus group questions and semi-structured interviews, providing a more personal insight into the woman's experience and their subsequent HrQoL. A validated instrument created from these more 'conversational' type questions would provide huge benefit to the registry. However, a questionnaire that incorporates all pain types found in our search of the literature may be rather extensive. Therefore, it is very important for well conducted semi-structured interviews with women to highlight the most imperative pain types and time points. To do this, one method may be to conduct such interviews with both patients and pelvic floor clinicians and ask them what they deem to be relevant [60]. Furthermore, through qualitative interviews with women who suffer from PFDs themselves, content validity of the PROM may be deduced [60].
Strengths and limitations
This systematic review synthesized data from five databases and thus provides a rather robust body of evidence. It utilized systematic methods to assess study quality. In addition, this review is the first to critically evaluate the types of pain instruments and their subsequent properties in women with POP or SUI. However, this systematic review has a limitation, namely that it was restricted to English-language publications only; publications in other languages may have provided different insights into pain measurement using PROMs. A further search of grey literature and additional databases could have been beneficial to include a wider variety of studies.
Conclusion
This review aimed to identify whether there is a PROM that measures pain specifically for women with POP and SUI for inclusion in the APFPR, which specifically reports complications from these two conditions. We did not find a suitable pain-specific PROM designed for this population, and thus there remains a serious lack of substantial reporting on the HrQoL in women who continue to suffer pain following pelvic floor surgery. Based on a systematic review of the current literature, we suggest that the next step entails the development of a new instrument for pain, especially pain related to complications due to pelvic floor surgery, and one that will be suitable for inclusion into the APFPR. This new PROM may be suited for both pre-and post-surgery data collection.
Conflict of interest None.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dendritic Cells: Cellular Mediators for Immunological Tolerance
In general, immunological tolerance is acquired upon treatment with non-specific immunosuppressive drugs. This indiscriminate immunosuppression of the patient often causes serious side-effects, such as opportunistic infectious diseases. Therefore, the need for antigen-specific modulation of pathogenic immune responses is of crucial importance in the treatment of inflammatory diseases. In this perspective, dendritic cells (DCs) can have an important immune-regulatory function, besides their notorious antigen-presenting capacity. DCs appear to be essential for both central and peripheral tolerance. In the thymus, DCs are involved in clonal deletion of autoreactive immature T cells by presenting self-antigens. Additionally, tolerance is achieved by their interactions with T cells in the periphery and subsequent induction of T cell anergy, T cell deletion, and induction of regulatory T cells (Treg). Various studies have described modulation of DC characteristics with the purpose of inducing antigen-specific tolerance in autoimmune diseases, graft-versus-host disease (GVHD), and transplantation. Promising results in animal models have prompted researchers to initiate first-in-man clinical trials. The purpose of the current review is to provide an overview of the role of DCs in the immunopathogenesis of autoimmunity, as well as recent concepts of dendritic cell-based therapeutic opportunities in autoimmune diseases.
Introduction
Dendritic cells (DCs) are widely recognized as the most professional antigen-presenting cells (APCs). Moreover, they are indispensable in the regulation of the delicate balance between immunity and tolerance [1][2][3]. By interacting with other cells of the immune system through cell-cell contact or the production of cytokines, DCs induce an appropriate answer to a specific antigen. DCs can also prevent (auto)immunity by inducing apoptosis of autoreactive T cells in the thymus on the one hand (i.e., central tolerance), and by induction of anergy, deletion, or tolerance through cooperation with regulatory T cells (Treg) in the periphery on the other hand (i.e., peripheral tolerance). Consequently, it has been hypothesized that defects in the number, phenotype, and/or function of DCs cause the development of autoimmune diseases. Furthermore, DC-based antigen-specific modulation of unwanted responses has been evaluated for therapeutic approaches in recent years and may have several advantages in contrast to standard treatments, which can induce a variety of complications and have serious side-effects. Indeed, considering the key role of DCs in the induction and activation of both effector T cells and Treg, DCs can be used to suppress or redirect immune responses in an antigen-specific manner. Recent investigations have shown promising results for the role of DCs as cellular treatment of autoimmune diseases and in preventing transplant rejection. Here, we discuss the role of DCs in the immunopathogenesis of autoimmunity, especially with regard to mechanisms underlying T cell tolerance, and recent concepts of DC-based therapeutic opportunities in autoimmune diseases.
Dendritic Cells: Key Regulators of Immunity and Tolerance
2.1. DC Subsets and Differentiation Stages. DCs originate from CD34+ hematopoietic progenitor cells in the bone marrow and are generally classified into two groups: myeloid or classical DCs (cDCs) and plasmacytoid DCs (pDCs) [1,4]. pDCs are characterized by expression of CD123 and a high production of type I interferon (IFN-I). The widespread distribution of DCs underlines their sentinel function. Indeed, DCs are most concentrated in places of the body where invasion of pathogens is most likely. Additionally, they are also present in organs such as the heart and kidneys and in lymphoid structures, including the spleen, lymph nodes, and the thymus. Where present, immature DCs (iDCs) take up both foreign as well as self-proteins and structures and process them intracellularly to antigens that are subsequently presented in the context of major histocompatibility (MHC) class I and II molecules on the cell's surface. Once DCs capture these antigens in the presence of so-called "danger signals", DCs undergo a complex maturation process. For this, DCs are equipped with pathogen-recognition receptors (PRRs) which detect foreign antigens (i.e., pathogen-associated molecular patterns, PAMPs), thereby activating specific signalling pathways to drive biological and immunological responses. These stimuli can be bacterial products, such as lipopolysaccharide (LPS), or viral products, including double-stranded RNA, but also proinflammatory cytokines like TNF-α [1,5]. Upon maturation, DCs efficiently present the antigen/MHC complex in combination with co-stimulatory molecules, have changed their pattern of cytokine production [6], and will migrate to the lymph nodes where they eventually activate T cells [1,7].
2.2. The Immunological Synapse. DCs bridge innate and adaptive immunity, integrate a variety of stimuli, and establish protective immunity. For this, efficient communication between DCs and T cells is warranted and must take place in the presence of at least 3 signals. First, the presented antigen/MHC complex must bind with the T cell receptor (TCR) of T cells (i.e., "signal 1"). Second, co-stimulation is obligatory for T cell activation (i.e., "signal 2"). For instance, binding of CD80/86 molecules on DCs with CD28 present on the cell membrane of T cells results in T cell stimulation. For a long time, it was believed that antigen recognition in the absence of co-stimulatory factors results in T cell anergy [5]. However, to date a variety of co-stimulatory pathways have been identified and are currently classified based on their impact on primed T cells [8]. Indeed, pathways delivering activatory signals to T cells are termed co-stimulatory pathways, whereas pathways delivering tolerogenic signals to T cells are termed co-inhibitory pathways. Furthermore, it is generally accepted that an additional "signal 3" is also needed for efficient T cell stimulation and polarization. A well-known example is the potent induction of interferon (IFN)-γ-producing T helper type 1 (Th1) cells by interleukin (IL)-12 produced by DCs in response to certain microbial stimuli [6,9]. Furthermore, both in vitro as well as in vivo studies have demonstrated that CD40 ligation of CD8+ T cells is necessary for optimal clonal expansion, effector function, and generation of a memory population [10][11][12]. Raveney and Morgan [13] have suggested that alterations in one of these three signals could shift the balance to tolerance or (auto)immunity. Recently, Kalinski et al. [6,7] described a potential fourth signal delivered by DCs that results in the upregulation of chemokine receptors on effector T cells and that thus might play a part in organ-specific chemotaxis of T cells.
Depending on the cytokines present upon T cell activation, naïve CD4+ T helper (Th) cells can acquire a variety of immune effector phenotypes [14]. In brief, release of IL-12 by DCs promotes a Th type 1 (Th1) response. Th1 cells mediate a cellular as well as delayed-type hypersensitivity immune response with proliferation of T cells and production of IFN-γ and IL-2. Furthermore, Th1 cells induce stimulation of CD8+ cytotoxic T cells (CTL). Th2 cells are stimulated through OX40 ligation by DCs, produce mainly IL-4, IL-5, and IL-13, and promote the activation of B cells, which can also be involved in autoimmunity [15]. Transforming growth factor (TGF)-β, in the absence of proinflammatory cytokines, induces Tregs, while TGF-β, IL-1, and IL-6 are needed for induction of Th17 cells [16]. Tregs are immune suppressive, and hence counteract effector T cells. In contrast, Th17 cells generate an influx of neutrophils and cause allergic or autoimmune reactions.
Dendritic Cells Inducing T cell Tolerance.
DCs are essential for both central and peripheral tolerance [5,[17][18][19][20]. Central tolerance occurs in the thymus where thymic DCs present self-antigens to developing T cells. Subsequently, lymphocytes with autoreactivity above a certain threshold are deleted, a process called clonal deletion. Additionally, naturally occurring Tregs (nTregs) are positively selected by thymic DCs in the thymus [21]. However, some limitations of central tolerance resulting in escape of potentially autoreactive T cells underlie the need for effective peripheral silencing mechanisms. In this regard, several mechanisms mediated by DCs have been proposed. (i) It has been suggested that iDCs fail to stimulate T cells sufficiently because of their low expression of MHC molecules and co-stimulatory factors. This results in T cell anergy [1,22]. (ii) It has also been reported that suboptimal antigen presentation, together with indoleamine 2,3-dioxygenase (IDO) or Fas (CD95) expression by iDCs, leads to inhibition of T cell proliferation and T cell deletion [5]. (iii) Furthermore, DCs are able to induce Tregs to preserve immune tolerance to self-antigens [17] as well as to certain foreign antigens [1,2,5]. Moreover, IL-10-producing regulatory type 1 T cells (Tr1) are also promoted by DCs, thereby reinforcing peripheral tolerance [17,23,24] (for a review on Treg subsets, see [21]).
Zehn and Bevan [19] showed that central tolerance, accompanied by equally efficient peripheral tolerance, is very effective in restraining high-avidity autoreactive T cells. Despite these mechanisms, some low-avidity autoreactive T cells may escape and be present in the periphery. Therefore, it has been suggested that their activation can occur by cross-reaction with foreign antigens, subsequently driving T cells to differentiate into effector T cells causing autoimmunity.
Role of DCs in the Pathogenesis of Autoimmunity
A healthy immune system recognizes and eliminates invading pathogens, but preserves tolerance for self-antigens. In contrast, autoimmune diseases develop when self-antigens are recognized as foreign by the immune system, resulting in hyperactivity of both cellular and humoral immunity against these antigens. The underlying mechanisms abrogating immune tolerance for self-antigens are still unclear. However, given the central role of DCs in maintaining the balance between (auto)immunity and tolerance, they are believed to play an important role in this process [2,25]. While neonatal mice that have undergone thymectomy [26] or thymic deletion [27] develop severe systemic autoimmune diseases, similar clinical outcomes in mice were obtained upon the depletion of both cDCs and pDCs. Indeed, Ohnmacht et al. [28] observed that constitutive ablation of DCs in mice leads to the breakdown of tolerance for self-antigens, resulting in severe spontaneous autoimmune responses possibly caused by an increased number of Th1 and Th17 cells. Moreover, a variety of antibodies against both nuclear and tissue-specific autoantigens was found in these mice. The authors showed that DCs with a short lifespan did not induce an efficient tolerance of CD4+ T cells, which was reflected in the thymus as a decreased negative selection and as a shortage of tolerogenic DCs in the periphery. However, others demonstrated that increasing the lifespan of DCs through inhibition of apoptosis also induced autoimmunity in mice [29], thereby emphasizing the ambiguous role of DCs in immunity as well as tolerance. Of interest, it was recently described that peripheral T cells can re-enter the thymus, where they target thymic DCs and medullary thymic epithelial cells. As a consequence, negative selection in the thymus was suppressed, with breakthrough of T cells with a high affinity for self-antigens causing autoimmune diseases [30]. Altogether these studies underscore the importance of immune regulation in the thymus and periphery controlling (auto)immunity.
Whereas it is generally accepted that DCs in steady state, although loaded with self-antigens from their environment, do not trigger autoimmunity [5,18,31], discrepancies in DC number, phenotype, and function are believed to contribute to disease [2,[32][33][34][35][36]. Indeed, in animal models of type I diabetes, arthritis [17], Wiskott-Aldrich syndrome [20], and systemic lupus erythematosus (SLE) [37], it was shown that increased access of DCs to intracellular autoantigens, mediated by increased amounts of apoptotic cells or insufficient clearance of these cells, resulted in subsequent autoantigen presentation and activation of T cells. In an attempt to elucidate possible underlying mechanisms, Sawatani et al. [29] attributed a role in the phagocytic activity and antigen-presenting function of DCs to the dendritic cell-specific transmembrane protein (DC-STAMP). Indeed, in DC-STAMP-deficient mice the authors found increased in vitro phagocytosis and antigen presentation by DCs, which could give rise to systemic autoimmunity [29]. Because of the high expression of MHC class II and co-stimulatory molecules, mature DCs are optimally equipped to activate T cells. In addition, both mature cDCs and pDCs produce proinflammatory cytokines, including IL-12p70 and type I IFN, respectively, which could contribute to the pathogenesis of autoimmunity [38,39]. In this perspective, Lech et al. [40] demonstrated that the absence of the Sigirr gene, which is a member of the Toll-like receptor (TLR)/interleukin-1 receptor (TIR) family and suppresses TLR-mediated pathogen recognition in DCs, resulted in enhanced activation of DCs. This was evidenced by increased expression of proinflammatory mediators and was associated with the development of murine lupus. In inflamed tissues, such as the synovium in rheumatoid arthritis (RA), these proinflammatory signalling molecules are found in high amounts in DCs in the vicinity of T cells. For this reason, it has been hypothesized that DCs maintain the local autoreactive T cell response [38]. Besides, a correlation exists between the number of DCs and the concentration of anti-citrullinated peptide antibodies in the serum of RA patients [38], suggesting a possible regulatory role for DCs in the production of autoantibodies in RA. Furthermore, DCs have been described to enhance the formation of ectopic lymphoid tissues in target organs. The underlying mechanism is probably explained by chemotactic cytokines released by DCs leading to lymphoid neogenesis and recruitment of leukocytes into the inflamed tissue, including the synovium [41] and the pancreatic islets [42]. In other studies the formation of ectopic lymphoid structures was ascribed to B cells [16]. DCs can also directly damage surrounding tissues. In this perspective, it was recently shown that monocyte-derived DCs could destroy the cartilage in joints through the production of TNF-α [43].
Tolerogenic DC-Based Treatments
Efforts to bring DC vaccination to the clinic aiming at the induction of tolerance were initiated by Dhodapkar et al., who demonstrated that pulsing immature DCs with influenza matrix protein (IMP) and keyhole limpet hemocyanin (KLH) resulted in a decrease of influenza-specific CD8+ IFN-γ-secreting T cells, while peptide-specific IL-10-secreting T cells appeared [44]. Menges et al. [45] showed in mice that bone marrow-derived DCs treated with TNF-α, so-called semi-mature DCs, were able to suppress the course of experimental autoimmune encephalomyelitis (EAE), the animal model for multiple sclerosis, through the activation of IL-10-secreting Tregs. Unfortunately, the semi-mature phenotype of these DCs is not stable since they produce proinflammatory cytokines upon introduction of a secondary stimulus (e.g., LPS). In contrast, biological molecules and pharmaceutical agents, including vitamin D3, IL-10, the corticosteroid dexamethasone, and the immunosuppressive drug rapamycin, are known to induce immature DCs with a low immunogenic character, that is, no upregulation of co-stimulatory molecules or secretion of proinflammatory cytokines, so-called tolerogenic DCs (tolDCs). Indeed, treatment of DCs with vitamin D3 or equivalents resulted in an increased release of IL-10, whereas the expression of co-stimulatory molecules and bioactive IL-12 was downregulated. Moreover, the authors demonstrated that these tolDCs induced tolerance to the allograft in a mouse model [46]. Another example is triptolide, derived from a Chinese herb, which was found to have potent immunosuppressive effects as demonstrated by its prevention of DC migration and release of chemokines as well as subsequent inhibition of T cell activation and proliferation [47,48]. Treatment of human DCs with the immunoregulatory neuropeptide vasoactive intestinal peptide (VIP) induces significant production of anti-inflammatory cytokines, such as IL-10, causes a decrease in the expression of the co-stimulatory molecules CD80/86, and inhibits the phagocytic activity of DCs [49,50]. Importantly, these VIP-treated DCs (DCVIP) keep their immature phenotype after exposure to inflammatory signals like TNF-α and LPS. Hence, a stable immature phenotype is generated. In addition, a population of antigen-specific Tr1-like cells, producing both IL-10 and TGF-β and inhibiting the proliferation of Th1 cells, was found. Moreover, CD8+CD28− Tregs were also induced, contributing to the antigen-specific tolerance. Vaccination with DCVIP in mice during the development of collagen-induced arthritis (CIA), EAE, and graft-versus-host disease (GVHD) in allogeneic bone marrow transplantation induced organ-specific tolerance and suppressed the course of disease.
Recently, genetic engineering has made its way into the quest for therapeutic options for autoimmune diseases. Indeed, the insertion of new DNA in order to enhance tolDC function has been investigated. For example, by transfection of DNA coding for the Fas ligand [51] or TNF-related apoptosis-inducing ligand (TRAIL), so-called "killer" DCs could be obtained. These genetically modified DCs efficiently induce T cell apoptosis, suppress autoimmune arthritis, and prevent rejection of donor-specific heart transplants in animal models [52]. In addition, injections of genetically modified IL-4-producing DCs in CIA suppressed the development and severity of arthritis. In a study by Kaneko et al. [53], however, these DCs caused an accelerated immune reaction and rejection of the allograft, making these IL-4-producing DCs less attractive for therapeutic use.
Alternatively, selective knockout of the expression of DC-characteristic molecules and functions has been intensively investigated. Utilizing RNA interference (RNAi) directed at IL-12p35 to generate IL-12-silenced DCs resulted in prolongation of intestinal allograft lifespan in rats [54]. Similar results were achieved in animal models after silencing of RelB and NF-κB, which resulted in allogeneic donor-specific hyporesponsiveness of T cells, associated with an inhibition of Th1 cytokine production, and prolonged survival of the cardiac allograft in mice [55]. Recently, a clinical trial administering monocyte-derived DCs genetically modified with antisense oligonucleotides targeting the transcripts of CD40, CD80, and CD86, thereby selectively reducing their surface expression [56], was performed in type 1 diabetes patients and was proven to be safe, well tolerated, and without any adverse effects [57]. Whether recently identified negative regulators of DC activation, including zDC [58] and FOXO3 [59], hold promise for future DC-based tolerance-inducing strategies remains to be established.
Induction of Long-Lasting Immune Tolerance
Ideally, therapies for immunosuppression must also be durable. This means that the ability to regulate the autoimmune response has to be permanent, or at least last for many years following intervention, for instance, via the generation of self-antigen-specific Tregs. Indeed, different in vitro generated tolDCs, including IL-10-modulated DCs [60] and DCs treated with a combination of dexamethasone and 1α,25-dihydroxyvitamin D3 [61], were shown to induce Tregs. In addition, Housley et al. [62] demonstrated that activation of PPARγ, a nuclear hormone receptor, in CD103+ DCs from the gut-associated lymphoid tissue (GALT) in mice was important for the regulation of retinoic acid secretion and Treg generation by DCs. This might contribute to the suppression of autoimmunity, since other studies [63,64] reported that CD103+ GALT DCs induce an increased conversion of effector T cells to Tregs in a retinoic acid-dependent manner. Interestingly, some tolDC populations also promote the induction of regulatory B cells (Bregs), underlining their suitability for tolerance-inducing strategies [61].
Whereas DCs drive the differentiation of Tregs in order to control immune responses, Tregs also modulate DC phenotype and function [65]. Indeed, Gabryšová et al. [66] showed that the autoimmune response was limited by a negative feedback system initiated by the antigen-induced differentiation of Th1 cells into IL-10-producing Tregs, which in turn inhibited DC maturation, thereby suppressing Th1 responses and completing the negative feedback loop. Furthermore, following depletion of FoxP3+ T cells, DCs lacking the expression of MHC class II molecules were unable to make cognate interactions with CD4+ T cells, resulting in spontaneous and fatal CTL-mediated autoimmunity, which indicates the critical suppressive role of the FoxP3+ Treg population in maintaining DCs in a tolerogenic state [67]. Overall, these findings highlight the importance of the bidirectional crosstalk between DCs and Tregs in maintaining and inducing tolerance.
Discussion
The use of tolerogenic DCs as cellular mediators for the induction of tolerance in autoimmune diseases and transplantation is very promising and could in the future complement, or even substitute for, immunosuppressive agents, which have important side effects including an increased risk of infections. However, several outstanding questions need to be addressed before DC-based vaccines can be implemented in the clinic [68].
A first challenge is the identification of a maturation-resistant subtype of DCs. For instance, while CD8α+ DCs, the mouse equivalents of human myeloid DCs, can act tolerogenically by inducing T cell apoptosis via their expression of Fas ligands [69,70], others demonstrated that these CD8α+ DCs released high amounts of IL-12 and were able to stimulate CD8+ CTLs [71]. Additionally, Waithman et al. [72] described a CD11c+CD207+ skin-derived DC subset presenting self-antigens in the draining lymph nodes and inducing deletion of MHC class I-restricted autoreactive T cells, thereby contributing to tolerance. In contrast, others showed that these skin-derived DCs drive autoimmune tissue destruction. Hence, tolDCs cannot be distinguished solely on the basis of their phenotype but must be carefully investigated with regard to their stability and tolerogenic effect, especially after vaccination. Given the risk of in vivo reactivation, this is of particular importance in any pathological state with an underlying inflammatory microenvironment.
Ideally, therapies for immunosuppression must also be (self-)antigen specific and durable. In this respect, Hawiger et al. [73] devised a DC-targeting system. Using a monoclonal antibody targeting DEC-205, a DC-restricted endocytic receptor, the authors delivered a specific antigen to DCs. Although extensive T cell proliferation was initially observed, this was followed by T cell anergy and deletion. With these results, the authors suggested a possible role for this system in inducing antigen-specific peripheral tolerance. Unfortunately, in combination with a DC maturation stimulus, this strategy resulted in immune activation, thereby limiting its clinical use for the treatment of autoimmunity. Hence, better insights into the roles of distinct DC populations are warranted. In this respect, antigens delivered via antibodies to CLEC9A, a recently discovered C-type lectin receptor that is selectively expressed by CD141+ myeloid DCs, were shown to be a promising strategy to efficiently induce immunity against infections and malignant diseases [74,75]. Likewise, antigens specifically delivered to migratory DCs, which traffic from peripheral tissues to draining lymph nodes charged with self-antigens, were shown to be superior in generating Tregs in vivo and consequently drastically improved the outcome of autoimmune disease [76]. In addition, durable tolerance means that the ability to regulate the autoimmune response has to be permanent, or at least last for many years following intervention, for instance, via the generation of self-antigen-specific Tregs. For this, increased knowledge with regard to the pharmacokinetic and pharmacodynamic properties of DC-based strategies is imperative. Other related questions that need to be taken into consideration for the success of this approach are the timing of DC therapy (e.g., a prophylactic or a therapeutic treatment regimen) and the selection of antigenic peptide(s) for loading DCs. Additionally, parameters such as antigen dose, number of cells, requirements for repetitive DC vaccinations, and the route of administration need to be addressed in clinical application. Finally, ethical issues may also arise, especially with regard to the implementation of experimental therapy for graft acceptance upon transplantation while there is a shortage of organ donations. It should be noted that patient-specific treatment modalities, including DC-based vaccination, are very expensive and require careful monitoring of treatment-related efficacy and toxicity, individual patient morbidity, and quality of life, as well as societal costs.
Reduction-responsive PEtOz-SS-PCL micelle with tailored size to overcome blood–brain barrier and enhance doxorubicin antiglioma effect
Abstract A series of novel reduction-responsive micelles with tailored size was designed and prepared to release doxorubicin (DOX) for treating glioma. The micelles were developed from the amphiphilic block copolymer poly(2-ethyl-2-oxazoline)-b-poly(ε-caprolactone) (PEtOz-SS-PCL), and the micelle size could be regulated by designing the polymer structure. The DOX-loaded PEtOz-SS-PCL micelles had a small size and showed rapid drug release in reductive intracellular environments. Biodistribution and in vivo imaging studies in a C6 glioma mouse tumor model showed that DOX-loaded PEtOz-SS-PCL43 micelles, which had the smallest size, exhibited superior accumulation and fast drug release at tumor sites. In vivo antitumor studies demonstrated that DOX-loaded PEtOz-SS-PCL43 micelles improved antitumor efficacy compared with larger PEtOz-SS-PCL micelles in orthotopic C6-Luci cell-bearing mice. This study shows the great potential of tailoring micelle size and introducing responsive bonds or compartments, by designing the polymer architecture, for intracellular drug delivery and release in glioma treatment.
Introduction
Malignant gliomas are the most common high-grade primary brain tumors; they have high morbidity and mortality and respond poorly to current treatments (Wei et al., 2014). The median survival of patients with glioblastoma seldom exceeds 14.6 months (Minniti et al., 2008). A major obstacle to glioma treatment is delivering drugs effectively to the tumor cells. The blood-brain barrier (BBB) is a major obstacle to the delivery of drugs into brain tumors (Groothuis, 2000). Thus, most small molecules with low lipid solubility, including brain tumor chemotherapeutics, rarely cross the BBB and do not enter the brain (Pardridge 2007a, b; Biddlestone-Thorpe et al., 2012). Developing strategies for increasing drug concentrations in glioma is therefore a great challenge in glioma therapy.
Over the past years, great efforts have been made to overcome these obstacles. Recently, with the development of nanotechnology, nanoparticle therapeutic carriers have provided new opportunities to achieve effective therapeutic distribution at tumor sites (Brunetti et al., 2015; Kunjachan et al., 2015; Stylianopoulos, 2013; Liu et al., 2016). These drug carriers are passively targeted to tumors through the enhanced permeability and retention (EPR) effect, so they are ideally suited for the delivery of chemotherapeutics in cancer treatment. Owing to the complexity of glioma, especially the existence of the BBB, stricter requirements for the control of nanoparticle size have emerged. It has been reported that the vascular endothelial cells and associated pericytes are often abnormal in tumors and that the BBB is partially destroyed in brain tumors (Schneider et al., 2004; Wohlfart et al., 2012). On this basis, glioma-targeted drug delivery systems are mainly divided into two categories. When the BBB is intact, systems that must deliver drugs across the BBB for tumor targeting are defined as cascade-targeting systems. Other drug delivery systems used in high-grade glioma are designated glioma-targeting systems, which accumulate in the glioma mainly via the EPR effect (Mager et al., 2017). Creating safer, small-size drug delivery systems is therefore key to the success of glioma therapy.
Micelle-based drug delivery systems have proven to be an attractive alternative for the delivery of chemotherapy drugs (Sun et al., 2009; Liang et al., 2014; Chen et al., 2015; Zhang et al., 2016). The particle size of micelles can be controlled by adjusting the length and ratio of the hydrophilic and hydrophobic segments. However, designing stimuli-responsive micelles is more effective for tumor therapy, because the anticancer drug is then released exclusively in tumor tissue or inside tumor cells. Reduction responsiveness is a widely used stimulus in designing drug delivery systems for triggered release (Felber et al., 2012; He et al., 2013). Poly(ethylene glycol) (PEG) is currently the most extensively used hydrophilic polymer in drug delivery. Based on its low dispersity (Đ), biocompatibility and limited recognition by the immune system (stealth behavior), PEG remains the gold standard in polymer-based biomedical applications (Knop et al., 2010). However, the formation of PEG antibodies (Tagami et al., 2010), the accelerated blood clearance of PEG (Koide et al., 2010) and the non-biodegradability of PEG, which results in body accumulation of high-molar-mass PEGs, have limited the usefulness of PEG in clinical drug delivery (Pasut & Veronese, 2007). Poly(2-ethyl-2-oxazoline)s, abbreviated as PEtOz, provide higher stability, tunability, and functionalization than PEG, while retaining the requisite features of biocompatibility, stealth behavior and low dispersity (Hoogenboom, 2009; Zhao et al., 2015). They are therefore attractive replacements for PEG in drug delivery. A first PEtOz drug conjugate for the treatment of Parkinson's disease recently entered Phase II clinical trials (Moreadith et al., 2017). Therefore, in this study, PEtOz was chosen as the hydrophilic block. Owing to the biocompatibility and biodegradability of polycaprolactone (PCL), it has been extensively studied for controlled drug delivery (Manavitehrani et al., 2016; Senevirathne et al., 2017). Thus, PCL was used as the hydrophobic block in our study.
In this study, we synthesized reduction-responsive and size-controllable micelles based on the amphiphilic polymer poly(2-ethyl-2-oxazoline)-polycaprolactone (denoted PEtOz-SS-PCL) to encapsulate DOX for glioma therapy. Control over micelle size is therapeutically important for glioma, as it has been observed that particles with diameters between 20 and 100 nm are effective in penetrating the BBB (Crommelin et al., 2003; Duncan, 2003). In this work, PEtOz is the hydrophilic block with a fixed polymerization degree, and PCL is the hydrophobic block with three different degrees of polymerization (DPs) of 23, 33 and 43. Micelles of different sizes were thus obtained from a series of amphiphilic polymers with different DPs of the PCL block, denoted PEtOz-SS-PCL23, PEtOz-SS-PCL33 and PEtOz-SS-PCL43. As shown in Scheme 1, the introduction of a disulfide bond between PEtOz and PCL is beneficial for rapid release of DOX in glioma cells. The integration of the above features makes PEtOz-SS-PCL an excellent carrier for delivering DOX for glioma therapy.
Characterization
The 1H NMR spectra were recorded on a Unity Inova 400 spectrometer operating at 400 MHz using chloroform-d. The molecular weight and polydispersity of the copolymers were determined on a PL GPC 50 instrument equipped with Jordi GPC columns (10E4, 2 M) and a differential refractive-index detector (PL-RI). The measurements were performed using DMF as the eluent at 50 °C, and a series of narrow polystyrene standards was used for the calibration of the columns. The size and zeta potential of the micelles were determined with a Zetasizer Nano-ZS from Malvern Instruments. Transmission electron microscopy (TEM) measurements were performed on a FEI Tecnai GT 12 operated at an accelerating voltage of 200 kV. The absorbance of each well in the MTT assays was measured using a Synergy H4 Hybrid Multi-Mode Microplate Reader. The fluorescence measurements of doxorubicin were performed using an F-4600 FL spectrophotometer at 298 K. Flow cytometric analysis was performed using a BD FACSCalibur flow cytometer.

Scheme 1. Illustration of reduction-responsive shell-sheddable PEtOz-SS-PCL micelles for triggered DOX delivery in vivo: (i) the micelles are assembled from the block copolymer PEtOz-SS-PCL; (ii) DOX-loaded micelles efficiently accumulate in the C6 glioma tumor; (iii) DOX is quickly released into the cytoplasm, triggered by reduction stimuli.
Synthesis of PEtOz-SS-PCL
The copolymer poly(2-ethyl-2-oxazoline) pyridyl disulfide (PEtOz-SS-Py) was prepared following a procedure reported by Ging-Ho Hsiue, and PCL-SH was synthesized by ring-opening polymerization of ε-CL using HES as the initiator, followed by reaction with DTT, according to a previously reported procedure (Sun et al., 2009). The synthesis of the amphiphilic polymer PEtOz-SS-PCL43 is given as an example. PCL-SH (150 mg, 0.0302 mmol) was added to a DCM (6 mL) solution of PEtOz-SS-Py (195.93 mg, 0.0362 mmol) under a nitrogen atmosphere at room temperature, and the pH was adjusted to 2.5 by adding acetic acid. The reaction was allowed to proceed under stirring for 48 h. The product PEtOz-SS-PCL was isolated by precipitation into cold diethyl ether, filtered, washed with cold methanol several times to remove excess PEtOz-SS-Py, and vacuum-dried. The yield of PEtOz-SS-PCL was about 30%.
Preparation of PEtOz-SS-PCL micelles and physicochemical characterization
Micelles of PEtOz-SS-PCL were prepared by dropwise addition of 1.2 mL of double-distilled water to 0.5 mL of a THF solution of the block copolymer (0.4 wt%) under stirring at room temperature; THF was then removed thoroughly by extensive dialysis against PB (10 mM, pH 7.4) for 24 h.
Transmission electron microscopy (TEM) was used to examine particle morphology. Before measurement, 10 µL of a fresh micelle sample was placed onto carbon-coated copper grids; the micelles were air-dried and then negatively stained with a 2% sodium phosphotungstate solution.
Dynamic light scattering (DLS) measurements were performed in aqueous solution using a Malvern Zetasizer Nano ZS apparatus. To evaluate particle size, the intensity-weighted Z-average of the particle diameter is reported in nm.
The critical micelle concentration (CMC) of the PEtOz-SS-PCL polymers was determined by fluorescence spectrometry (FL 4600) using pyrene as a fluorescence probe. The PEtOz-SS-PCL micelle concentration was varied from 1.0 × 10⁻⁵ to 0.1 mg/mL, and the pyrene concentration was 0.6 µM. The excitation wavelength of the fluorescence spectra was fixed at 330 nm, the emission fluorescence at 372 nm and 383 nm was recorded, and the CMC value was obtained as the intersection point found by extrapolating the intensity ratio I372/I383 over the tested concentrations.
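As an illustration of this extrapolation step, the short Python sketch below estimates the CMC as the intersection of two linear fits of the I372/I383 ratio versus log concentration; the concentrations and ratios are placeholder values chosen for illustration, not the measured data of this study.

```python
# Hypothetical sketch: estimate the CMC as the intersection of two linear fits
# of the pyrene I372/I383 ratio versus log10(polymer concentration).
# The concentrations (mg/mL) and ratios below are placeholders, not study data.
import numpy as np

conc = np.array([1e-5, 1e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1])   # mg/mL
ratio = np.array([1.65, 1.64, 1.62, 1.45, 1.30, 1.05, 0.95])  # I372/I383

logc = np.log10(conc)

# Fit the low-concentration plateau and the high-concentration decline separately;
# the split index would normally be chosen by inspecting the plot.
split = 3
m1, b1 = np.polyfit(logc[:split], ratio[:split], 1)   # plateau region
m2, b2 = np.polyfit(logc[split:], ratio[split:], 1)   # micellization region

# The intersection of the two fitted lines gives log10(CMC).
log_cmc = (b2 - b1) / (m1 - m2)
cmc_mg_per_L = 10 ** log_cmc * 1000   # convert mg/mL to mg/L
print(f"Estimated CMC ~ {cmc_mg_per_L:.2f} mg/L")
```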
In vitro cellular uptake
To evaluate the cellular uptake of DOX-loaded PEtOz-SS-PCL micelles, flow cytometric analysis was used. C6 cells at a density of 5 × 10⁴ were seeded in 12-well tissue culture plates and cultured at 37 °C in a 5% CO2 humidified atmosphere with DMEM containing 10 vol% FBS for 24 h. The old medium was then replaced with fresh medium, and the cells were treated with DOX-loaded PEtOz-SS-PCL23, PEtOz-SS-PCL33 or PEtOz-SS-PCL43 nanoparticles, or with PBS, at a DOX concentration of 1 µg mL⁻¹ for 2 h. The cells were trypsinized, washed three times with cold PBS, and resuspended in 500 µL of cold PBS for flow cytometric analysis using a BD FACSCalibur flow cytometer.
In vitro drug release
The in vitro release of DOX from PEtOz-SS-PCL micelles was studied using a dialysis tube (MWCO 12000) incubated at 37 °C for 24 h in two different media, that is, PB (10 mM, pH 7.4) with 10 mM DTT and plain PB (10 mM, pH 7.4). To achieve sink conditions, the drug release studies were performed at a low drug-loading content (ca. 0.5 wt%), with 0.7 mL of DOX-loaded micelle solution dialyzed against 20 mL of the same medium. At the desired time intervals, 6 mL of release medium was withdrawn and replenished with an equal volume of fresh medium. The concentration of DOX was determined by fluorescence (FL 4600) measurements (excitation at 480 nm, emission recorded at 600 nm). To determine the amount of DOX released, calibration curves were run with DOX/PB buffer solutions at different DOX concentrations at pH 7.4. The release experiments were performed in triplicate, and the results are presented as the average ± SD.
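For illustration, the sketch below shows how the cumulative DOX release can be computed when 6 mL of the 20 mL release medium is withdrawn and replaced with fresh medium at each sampling time; the concentrations and the total loaded drug amount are placeholder values, not the measured data.

```python
# Minimal sketch (assumed variable names, placeholder numbers) of cumulative
# release when part of the medium is sampled and replaced at each time point.
import numpy as np

V_total, V_sample = 20.0, 6.0    # release medium volume and sampled volume (mL)
total_dox_ug = 35.0              # placeholder: total DOX loaded in the dialysis tube (ug)

# Placeholder DOX concentrations (ug/mL) in the release medium at each sampling
# time, obtained from the fluorescence calibration curve (ex. 480 nm / em. 600 nm).
conc = np.array([0.20, 0.45, 0.80, 1.05, 1.20])

released_ug = np.empty_like(conc)
for n, c_n in enumerate(conc):
    # Drug currently in the medium plus drug removed with all previous samples.
    released_ug[n] = c_n * V_total + conc[:n].sum() * V_sample

release_percent = 100.0 * released_ug / total_dox_ug
print(release_percent)
```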
Reduction-responsive size change of PEtOz-SS-PCL micelles
The reduction-induced size change of PEtOz-SS-PCL micelles was measured by DLS in PB buffer (pH 7.4) with or without a reductive agent (10 mM DTT) at 37 °C. Typically, PEtOz-SS-PCL43 micelles (1.5 mL) in PB buffer (pH 7.4, 10 mM) were first deoxygenated by bubbling with nitrogen; DTT was then added to a final concentration of 10 mM. The mixture was placed in a shaking bed at 200 rpm and 37 °C, and the size was measured by DLS at predetermined time points.
Cell viability assays
The cytotoxicity of PEtOz-SS-PCL23, PEtOz-SS-PCL33, PEtOz-SS-PCL43 and DOX was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Three thousand C6 cells were seeded in 96-well tissue culture plates and cultured at 37 °C in a 5% CO2-humidified atmosphere in 150 µL of DMEM containing 10 vol% FBS for 24 h. Then, free DOX and DOX-loaded PEtOz-SS-PCL micelles at different DOX concentrations were added and incubated for another 48 h. Afterwards, 10 µL of MTT reagent (5 mg mL⁻¹) was added to each well and incubated for an additional 4 h. The medium was carefully removed, and 150 µL of DMSO was added to each well for 15 min to dissolve the purple formazan. Finally, optical density measurements were obtained at a wavelength of 480 nm by spectrophotometric analysis.
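As an illustration of how such read-outs are typically processed, the hedged sketch below converts per-well optical densities to percent viability and fits a four-parameter logistic curve to estimate an IC50; all numbers are placeholders rather than the data of this study.

```python
# Hypothetical MTT analysis: optical densities -> percent viability -> IC50
# via a four-parameter logistic fit. Placeholder values only.
import numpy as np
from scipy.optimize import curve_fit

od_blank, od_control = 0.05, 1.20                           # medium-only and untreated wells
dox_conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100])          # ug/mL
od_treated = np.array([1.15, 1.05, 0.95, 0.70, 0.50, 0.25, 0.15])

viability = 100.0 * (od_treated - od_blank) / (od_control - od_blank)

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

popt, _ = curve_fit(four_pl, dox_conc, viability,
                    p0=[100.0, 0.0, 5.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {popt[2]:.2f} ug/mL")
```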
Mice
ICR mice (male, 18-20 g) were purchased from Beijing HFK Bioscience Co., Ltd. (Beijing, China). All animals received care in compliance with the guidelines in the Guide for the Care and Use of Laboratory Animals. All procedures were approved by the Xuzhou Medical University of China Animal Care and Use Committee.
Glioma-bearing ICR mice were prepared by intracranial injection (striatum, 1.8 mm lateral to the bregma and 3 mm deep) of 1 × 10⁵ C6-Luci cells suspended in 4 mL of L15 medium into male ICR mice (Li et al., 2014; An et al., 2015). Seven days after glioma xenografting, the mice were randomly divided into five groups (n = 10) and housed in a temperature-controlled room with regular alternating cycles of light and darkness.
In vivo distribution of DOX-loaded PEtOz-SS-PCL
Glioma-bearing ICR mice were first injected with freshly prepared luciferin substrate and imaged with the Xenogen IVIS Spectrum optical imaging device to confirm that the brain tumors had similar volumes at 7 days after glioma xenografting. Thereafter, the glioma-bearing ICR mice were injected intravenously with free DOX or DOX-loaded PEtOz-SS-PCL23, PEtOz-SS-PCL33 or PEtOz-SS-PCL43 micelles through the tail vein at a dose of 3 mg kg⁻¹ DOX per animal; PBS was injected as the control. At 4 h after administration, the mice were sacrificed, and the glioma-bearing brains as well as other principal organs (heart, liver, spleen, lung and kidney) were carefully excised and visualized with the in vivo real-time fluorescence imaging system. The excised glioma-bearing brains were then fixed with 4% paraformaldehyde for 72 h and further dehydrated in sucrose solution. Slices of 20 µm thickness were prepared and stained with DAPI for 10 min at room temperature. The slices were observed under a fluorescence microscope and photographed (Olympus, Japan).
The survival time and body weight of the mice were recorded. After the last treatment, the mice were euthanized and the hearts were collected, washed with saline, and fixed in 4% paraformaldehyde. Sections of the hearts were stained with hematoxylin and eosin, observed under a fluorescence microscope, and photographed (Olympus, Japan).
Statistical analysis
Statistical analysis was performed using Student's t-test, with p < .05 considered a significant difference. The experimental results are presented as mean ± SD in the figures.
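As a minimal illustration of this comparison, the sketch below applies an independent-samples Student's t-test to placeholder data and reports means ± SD together with the two-sided p value; the numbers do not correspond to any measurement in this study.

```python
# Minimal illustration of the Student's t-test used for group comparisons.
import numpy as np
from scipy import stats

group_a = np.array([45.2, 48.1, 50.3, 46.7, 49.0])   # placeholder values, arbitrary units
group_b = np.array([30.5, 28.9, 33.2, 31.0, 29.8])

t_stat, p_value = stats.ttest_ind(group_a, group_b)
for name, g in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: {g.mean():.1f} +/- {g.std(ddof=1):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f} (significant if p < 0.05)")
```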
Synthesis of PEtOz-SS-PCL
A series of novel amphiphilic PEtOz-SS-PCL block copolymers was synthesized through an exchange reaction between poly(2-ethyl-2-oxazoline) pyridyl disulfide (PEtOz-SS-Py) and mercapto-PCL (PCL-SH) in dichloromethane at a molar ratio of 1.2/1 (Figure S1, Table 1). Figure 1 shows both the hydrophilic PEtOz block (δ 1.10, δ 3.45) and the hydrophobic PCL block (δ 1.37, δ 1.65, δ 2.30, δ 4.05) in the 1H NMR spectra. The integral ratio of the signals at δ 1.10 and δ 4.05 indicates an equivalent coupling of PEtOz and PCL. The DPs of PEtOz and PCL were calculated from 1H NMR end-group analysis (Figure S2) (Zhang et al., 2010; Zhu et al., 2011; Yang et al., 2012). Moreover, GPC results showed that the resultant PEtOz-SS-PCL copolymers had narrow molecular weight distributions after the exchange reaction between PEtOz-SS-Py and PCL-SH, with PDIs of 1.27, 1.17 and 1.21, respectively (Figure S3). These results show that the diblock copolymer PEtOz-SS-PCL was synthesized successfully. The block copolymers were prepared by varying the length of the PCL block while keeping the hydrophilic PEtOz block fixed. The molar ratios of the initial monomer to initiator concentration were 43:1, 33:1 and 23:1, respectively, resulting in different lengths of the PCL block. The molecular weights and PDIs of the PEtOz-SS-PCL block copolymers were determined by GPC (Figure S4) and are summarized in Table 1.
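The hedged sketch below illustrates the kind of integral-ratio calculation underlying such an end-group/composition analysis; the integral values, proton counts per repeat unit, and the assumed PEtOz DP are placeholders chosen for illustration and do not reproduce the measured spectra.

```python
# Hypothetical 1H NMR composition analysis: estimate the PCL degree of
# polymerization (DP) relative to a fixed PEtOz block from integral ratios.
integral_etoz_ch3 = 3.00    # signal at ~1.10 ppm, 3 H per EtOz repeat unit (placeholder)
integral_pcl_och2 = 0.86    # signal at ~4.05 ppm, 2 H per CL repeat unit (placeholder)
dp_petoz = 100              # assumed, fixed hydrophilic block length (placeholder)

protons_per_etoz, protons_per_cl = 3, 2

# Moles of repeat units are proportional to (integral / protons per unit).
ratio_cl_to_etoz = (integral_pcl_och2 / protons_per_cl) / (integral_etoz_ch3 / protons_per_etoz)
dp_pcl = ratio_cl_to_etoz * dp_petoz
print(f"Estimated DP of PCL ~ {dp_pcl:.0f}")
```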
Characterization of PEtOz-SS-PCL micelles
TEM examination revealed that the PEtOz-SS-PCL43 micelles were dispersed as individual particles with a spherical shape (Figure 2(A)). DLS analysis showed that the average particle size of PEtOz-SS-PCL decreased with increasing degree of polymerization of PCL, which might be due to the increasing proportion of the hydrophobic chain. From Table S1, it can be seen that the sizes of PEtOz-SS-PCL23, PEtOz-SS-PCL33 and PEtOz-SS-PCL43 are 160.2 ± 1.4 nm, 140.6 ± 1.6 nm and 97.3 ± 1.8 nm, respectively, while the sizes of DOX-loaded PEtOz-SS-PCL23 and PEtOz-SS-PCL33 are 162.4 ± 0.1 nm and 137.8 ± 1.3 nm, respectively. Thus, loading DOX into PEtOz-SS-PCL23 and PEtOz-SS-PCL33 does not significantly increase the micelle size. However, the size of DOX-loaded PEtOz-SS-PCL43, 88.4 ± 2.7 nm, is smaller than that of the corresponding blank micelles. Strong attractive hydrophobic interactions between the encapsulated drug and the inner core cause the reduced micelle size (Ding et al., 2013; Shi et al., 2015). The physical properties and loading capacities of PEtOz-SS-PCL23, PEtOz-SS-PCL33 and PEtOz-SS-PCL43 are listed in Table S1. The cytotoxicity of PEtOz-SS-PCL23, PEtOz-SS-PCL33 and PEtOz-SS-PCL43 was evaluated with an MTT assay against C6 cells. As shown in Figure S5, cell viability remained greater than 90% at 1 mg mL⁻¹, demonstrating that the cytotoxicity of these micelles is fairly low. These results indicate that the polymer with a PCL DP of 43 gives the smallest micelle size, so PEtOz-SS-PCL43 was used for the following drug release experiments.
The CMC is an important property of a micelle. We used pyrene as a hydrophobic fluorescence probe to measure the CMC values (Figure S6). The CMCs of the PEtOz-SS-PCL copolymers were in the range of 5.18-8.29 mg L⁻¹, lower than those of micelles formed by amphiphilic polymers with similar molecular weights, such as PEG-PCL and PEG-SS-PCL (Table S1) (Sun et al., 2009). The low CMC values suggest that the micelles self-assembled from PEtOz-SS-PCL are stable upon dilution under physiological conditions, which is beneficial for drug delivery applications in vivo. The CMCs of the PEtOz-SS-PCL micelles decreased with increasing length of the hydrophobic PCL block, because micelles with a longer hydrophobic PCL block, which plays an important role in micelle stability, form more readily. We also studied the effect of size on the cellular uptake efficiency of PEtOz-SS-PCL23, PEtOz-SS-PCL33 and PEtOz-SS-PCL43 micelles by flow cytometry. As shown in Figure 2(B), PEtOz-SS-PCL43 micelles, with a size of around 80 nm, were taken up by tumor cells far more than the larger PEtOz-SS-PCL23 and PEtOz-SS-PCL33 micelles. This result indicates that the cellular uptake of PEtOz-SS-PCL micelles depends strongly on micelle size and that small micelles exhibit high uptake efficiency.
The in vitro release of DOX from DOX-loaded PEtOz-SS-PCL43 micelles was studied at 37 °C in PB buffer (pH 7.4) with or without 10 mM DTT. The disulfide bond located between the hydrophilic and hydrophobic compartments breaks under the reductive conditions (10 mM DTT) that mimic the intracellular cytoplasm (Figure 2(C)). The reduction-responsive drug-loaded PEtOz-SS-PCL43 micelle therefore has great potential for use as a drug delivery system for the treatment of tumors.
We also used different buffers and the reducing reagent DTT to investigate the reduction sensitivity of the PEtOz-SS-PCL micelles. The size change of the PEtOz-SS-PCL43 micelles was tracked by DLS (Figure S7). Here, PB buffer (10 mM, pH 7.4) was used to simulate the intracellular condition. The DLS results showed that the micelle size increased rapidly from about 100 nm to hundreds of nanometers within 4.0 h in the presence of 10 mM DTT, indicating that the PEtOz-SS-PCL micelle is sensitive to reductive conditions. This phenomenon is attributed to the reduction sensitivity conferred by the disulfide bond between the hydrophilic and hydrophobic parts of the PEtOz-SS-PCL polymer, which results in the disassembly of the micelle.
Cell viability assays
The in vitro cytotoxicity of DOX-loaded PEtOz-SS-PCL micelles against C6 cells was evaluated by the MTT assay. DOX-loaded PEtOz-SS-PCL43 micelles showed significantly enhanced cytotoxicity compared with DOX-loaded PEtOz-SS-PCL23 and DOX-loaded PEtOz-SS-PCL33 at all DOX concentrations studied (Figure 3). The IC50 of DOX-loaded PEtOz-SS-PCL43 micelles on C6 cells was 6.67 µg mL⁻¹, lower than those of DOX-loaded PEtOz-SS-PCL23 and DOX-loaded PEtOz-SS-PCL33, with IC50 values of 45.68 and 16.16 µg mL⁻¹, respectively. This suggests that the higher uptake efficiency of DOX-loaded PEtOz-SS-PCL43 compared with DOX-loaded PEtOz-SS-PCL23 and DOX-loaded PEtOz-SS-PCL33 provided higher cytotoxic activity toward the glioma cells. The difference in uptake efficiency of the drug-loaded micelles should be ascribed to their different sizes, which is also in line with the flow cytometry cellular uptake results. This also demonstrates that micelle size is a key factor for cell endocytosis: a smaller size is beneficial for cell uptake, after which the cargo is further released under the reductive conditions of the cytoplasm, resulting in different in vitro cytotoxicities. Clearly, the in vitro cytotoxicity of DOX-loaded PEtOz-SS-PCL micelles can be adjusted by tailoring the micelle size.
In vivo glioma distribution of PEtOz-SS-PCL
The in vivo glioma targeting and organ distribution of DOX-loaded PEtOz-SS-PCL were observed in orthotopic C6-Luci cell-bearing mice with the IVIS kinetic imaging system (Caliper Life Sciences, Hopkinton, MA). First, the brain tumor model was developed in ICR mice by stereotactic intracranial injection of ~1 × 10⁵ C6-Luci cells into the primary somatosensory cortex. At 7 days post implantation, the bioluminescence signals were analyzed by in vivo bioluminescence imaging. As shown in Figure 4(A), the in vivo images confirmed the existence of brain glioma. DOX-loaded PEtOz-SS-PCL and free DOX were then administered intravenously to the orthotopic glioma-bearing ICR mice, and the mice were examined after 4 h by an in vivo imaging system and fluorescence microscopy, respectively. The fluorescence intensity of free DOX in the brain was negligible, indicating that free DOX barely crossed the BBB in the mice. DOX-loaded PEtOz-SS-PCL micelles with different DPs of PCL gave stronger DOX fluorescence than free DOX (Figure 4(B,C)). Compared with mice treated with DOX-loaded PEtOz-SS-PCL23 and PEtOz-SS-PCL33, the strongest DOX fluorescence was found in the tumor region of mice treated with DOX-loaded PEtOz-SS-PCL43. These results indicate that PEtOz-SS-PCL43, which has the smallest nanosize, could effectively enter the glioma via the EPR effect. The tumor distribution of PEtOz-SS-PCL and free DOX was also confirmed in frozen tumor tissue sections observed under a fluorescence microscope (Figure 4(D)). Brain tumor tissue was identified by areas of hypercellularity, as evident from the DAPI-stained cell nuclei shown in blue (Figure 4(C)). The C6-Luci-bearing mice treated with PEtOz-SS-PCL43 showed more DOX fluorescence in the brain tumor than the other groups, indicating that PEtOz-SS-PCL43 could effectively deliver DOX to the brain tumor. The C6-Luci-bearing brains and major organs were then excised for ex vivo imaging to reveal the tissue distribution, and an even clearer result was found in the ex vivo brain imaging (Figure 4(E)). In addition, the amount of PEtOz-SS-PCL in the liver was lower than that in the free DOX group, suggesting lower liver toxicity, while the DOX fluorescence in the heart, spleen, lung and kidney was similar among the DOX-loaded PEtOz-SS-PCL23, PEtOz-SS-PCL33 and PEtOz-SS-PCL43 groups. These findings demonstrate that the PEtOz-SS-PCL43 group, with the smallest nanosize, could effectively transport DOX across the BBB and achieve the highest cellular uptake of DOX by the glioma.
We next evaluated the therapeutic efficacy of free DOX, DOX-loaded PEtOz-SS-PCL23, DOX-loaded PEtOz-SS-PCL33 and DOX-loaded PEtOz-SS-PCL43 (DOX dose: 3 mg kg⁻¹), administered by intravenous injection at day 11 in ICR mice bearing tumors established by stereotactic intracranial injection of ~1 × 10⁵ C6-GFP-Luci glioma cells into the cortex (Figure 5(A)). Tumor growth was measured in vivo using bioluminescence imaging. Tumors in the control group (PBS) grew rapidly (Figure 5(B)). By day 24, the tumor load in the PBS group, as reflected by the bioluminescence measurements, was 12.11-fold higher than at day 10. The tumor growth in mice treated with free DOX, DOX-loaded PEtOz-SS-PCL23, DOX-loaded PEtOz-SS-PCL33 and DOX-loaded PEtOz-SS-PCL43 was 1.18-fold, 2.85-fold, 3.13-fold and 0.99-fold that at day 10, respectively. From these results, DOX-loaded PEtOz-SS-PCL43 had the highest antiglioma activity, because it delivered more DOX to the glioma than DOX-loaded PEtOz-SS-PCL23 and DOX-loaded PEtOz-SS-PCL33, which was confirmed by the distribution of DOX fluorescence in the glioma.
To further estimate the antitumor efficacy, the body weight and overall survival of the glioma-bearing mice were assessed (Figure 5(C,D)). As shown in Figure 5(C), treatments with DOX-loaded PEtOz-SS-PCL23 and DOX-loaded PEtOz-SS-PCL33 did little to improve mouse survival, registering median survivals of 29 days and 27.5 days versus 23.5 days for the PBS-treated group. Although remarkable tumor inhibition was observed in the DOX-treated group, no benefit in median survival time emerged. Compared with the DOX-treated group (median survival, 31.5 days), the DOX-loaded PEtOz-SS-PCL43-treated group showed prolonged survival, with a median survival time of 45 days. The advantage of DOX-loaded PEtOz-SS-PCL43 was also reflected in the body weight change: the body weight of the DOX-loaded PEtOz-SS-PCL43 group decreased slowly, whereas the other groups showed a rapid decrease (Figure 5(D)). All of the above results demonstrate that DOX-loaded PEtOz-SS-PCL43 had superior therapeutic efficacy for glioma, owing to its smallest nanosize.
H and E staining analysis
DOX is a highly effective and widely used chemotherapeutic drug for treating various types of cancer; however, its effectiveness is limited by its cardiac toxicity (Cho et al., 2012; Guan et al., 2013; Subburaman et al., 2014). To further evaluate the cardiotoxicity of DOX and DOX-loaded PEtOz-SS-PCL treatments, histological analysis of the cardiac tissues was performed. As shown in Figure S8, histological examination did not reveal any myocardial lesions in the groups of mice treated with free DOX, DOX-loaded PEtOz-SS-PCL23, DOX-loaded PEtOz-SS-PCL33 or DOX-loaded PEtOz-SS-PCL43, suggesting that treatment with these formulations at the experimental dosage did not damage the hearts of the mice during the experimental period.
Conclusions
In summary, we have successfully synthesized reduction-responsive, size-controlled DOX-loaded PEtOz-SS-PCL micelles to encapsulate DOX for glioma therapy. The DOX-loaded PEtOz-SS-PCL43 micelles showed the smallest size and effectively entered the glioma in vivo. Moreover, treatment with DOX-loaded PEtOz-SS-PCL43 micelles significantly inhibited brain tumor growth in an ICR mouse orthotopic glioma model compared with the other groups. Therefore, PEtOz-SS-PCL43 micelles can potentially be applied as a safe and efficient drug delivery system for glioma treatment.
Disclosure statement
No potential conflict of interest was reported by the authors.
Reply on RC1
Some steps of the methodology have been revised to take into account the characteristics of each Tropical Cyclone (TC); thus, we have considered the climatological situation for each individual storm and compared it to the conditions when the TC occurred. This change allowed us to study the responses of each TC individually and more accurately, while at the same time separating the SST and Chl-a responses completely. The uncertainty surrounding the interpolated data was addressed in this revision. For this, we incorporated two types of analysis: 1) we showed the approximate errors associated with the analyzed data for various time periods surrounding TCs; 2) we used the two previously shown study cases (Nadine and Ophelia) as evaluation cases for non-interpolated data. Overall, the interpolated datasets appear to provide consistent data that delivered good results, showing neither large uncertainty (particularly for SST) nor poor agreement with non-interpolated data (particularly for Chl-a). Finally, some small but important changes were made in the results section, with the addition of an individual 6-hour observation analysis, which corroborated the analysis made in the original manuscript, and in the Nadine (2012) study case, which was not clear enough in the original version.
The reviewers' comments and suggestions were appreciated and will certainly improve the overall quality of this article. Several points of the manuscript have undergone major revisions to address the criticisms and suggestions made by the reviewers, as summarized above.
Overall, we are confident that these changes have helped to clarify some issues that were not sufficiently clear in the original manuscript. In this regard, the observations made by the reviewers were greatly appreciated and have certainly helped to improve the quality of the revised manuscript.
Answer to major comments:
Ekman pumping is an important component of the surface mixing, as shown in some of the literature presented in the manuscript (e.g., Prince, 1981). Ekman pumping is often computed using satellite data (wind and wind stress); however, it can be complicated to study this effect behind TCs using remote sensing data because of large gaps in the daily data caused by frequent cloud cover. To partially overcome this caveat, we have elected to produce an additional analysis, included in the Ophelia study case, that expands beyond our study region and explores the wind stress data provided by the NOAA CoastWatch dataset. This dataset is derived from wind measurements obtained from the Advanced Scatterometer (ASCAT) instrument onboard EUMETSAT's MetOp satellites (A and B). ASCAT has a near all-weather capacity (it is not affected by clouds), as it operates at a C-band frequency (5.255 GHz), thereby minimizing the number of missing values in predominantly clouded areas such as tropical cyclone paths.

We thank the reviewer for the very relevant point raised here. The CMEMS interpolated datasets used in this work aim to improve the low level of knowledge over areas with strong cloud cover; however, it is expected that the few data available are affected by the interpolation. The data providers cannot guarantee absolute success in this process, although the reliability is high (2019). Therefore, in the revised manuscript we will incorporate this information (Figure R1) into the analysis and take it into account in the discussion of the results.

We appreciate the reviewer's comments on this matter and agree that the methodology requires further explanation and clarification. At first, the window considered was the same for all TCs, which in retrospect is not the most adequate choice for this study. The nature of this methodology forced our algorithms to search before the storm for a mean situation and after the storm for a significant response, and then to produce a mean ideal window to study all considered TCs. Indeed, as it stands, it is not flexible enough to accommodate the differences among these many different storms, with diverse translation speeds and sizes, as pointed out by the reviewer. We have therefore implemented a major change in the methodology that better represents such differences between TCs. In particular, we now consider the climatological situation for each storm's time period and compare it to the observed situation when the TC occurred in the region. This new approach allows the study of different time periods in which the SST and Chl-a responses differ, as well as of different impacts depending on the TC's characteristics.
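To illustrate the wind-stress-based diagnostic mentioned in the first reply above, the hedged sketch below computes the Ekman pumping velocity from the curl of a gridded wind-stress field (for example, an ASCAT-derived product); the function and array names, the simple spherical grid handling, and the constants are assumptions for illustration rather than the exact processing used in the revised manuscript.

```python
# Sketch: Ekman pumping velocity w_Ek = curl(tau) / (rho0 * f) on a lat-lon grid.
import numpy as np

RHO0 = 1025.0        # seawater density, kg/m^3
OMEGA = 7.2921e-5    # Earth's rotation rate, rad/s

def ekman_pumping(tau_x, tau_y, lat, lon):
    """tau_x, tau_y: wind stress components (N/m^2) with shape (len(lat), len(lon));
    lat, lon: 1-D coordinate arrays in degrees. Returns w_Ek in m/s
    (positive = upwelling). Not valid very close to the equator, where f -> 0."""
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat))[:, None]     # Coriolis parameter
    dy = 111e3 * np.gradient(lat)                          # metres per grid step (meridional)
    dx = 111e3 * np.cos(np.deg2rad(lat))[:, None] * np.gradient(lon)[None, :]
    dtauy_dx = np.gradient(tau_y, axis=1) / dx
    dtaux_dy = np.gradient(tau_x, axis=0) / dy[:, None]
    curl_tau = dtauy_dx - dtaux_dy
    return curl_tau / (RHO0 * f)
```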
It is true that the difference in location was not taken into consideration, and we thank the reviewer for this important point. In fact, as discussed in the introduction, there is a noticeable meridional gradient of each variable in this region (warmer SSTs in the south and more biological activity in the north); this matter has been further explored and will be taken into consideration in the revised manuscript. The new methodology, described in detail in our answers to the 3rd and 5th major comments, allows this latitudinal dependence to be taken into account. Thus, we have now analyzed the response of each observation with respect to its latitude and longitude (see Fig. R2). The results were only significant (at the 95% statistical level) for the Chl-a response with respect to latitude. However, the relation is minimal (r = 0.135), and since the other responses were not significant, we decided not to include these results in the main manuscript but to mention them, since they are relevant.
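A minimal sketch of the per-observation correlation test described above is given below; the latitude and Chl-a response arrays are placeholders, not the actual observations used in the analysis.

```python
# Pearson correlation between the per-observation Chl-a response and latitude,
# with the two-sided p value used for the 95% significance check.
import numpy as np
from scipy import stats

lat = np.array([25.3, 28.1, 31.0, 34.6, 37.2, 40.5])            # degrees N (placeholders)
chla_response = np.array([0.02, 0.05, 0.03, 0.08, 0.06, 0.09])  # mg/m^3 anomaly (placeholders)

r, p = stats.pearsonr(lat, chla_response)
print(f"r = {r:.3f}, p = {p:.3f} (significant at the 95% level if p < 0.05)")
```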
We agree with the reviewer that the properties of TCs change a great deal during their lifetimes, as seen in the Hurricane Ophelia study case. In this regard, we have made an additional change to the revised methodology, which is now divided into full-TC and individual 6-hour-observation analyses. These two approaches differ in the way they are processed: the full-TC analysis eliminates any superposition of pixels (such as that seen in the Nadine study case) and allows us to analyze the area after the complete passage of the cyclone over the region; the individual-observation analysis does not offer this possibility, and the superposition needs to be accounted for in the discussion, but it allows the responses to be studied as a function of the observations' characteristics (intensity, translation speed, etc.). In conclusion, this change affects the results, since the revised figures include individual observations (see Fig. R3).
Answer to other points:
It is related to the much lower cyclonic activity observed in our study region compared with that observed in the rest of the North Atlantic basin, of which it is part.
We agree with the reviewer, and we will make an effort to shorten some of these sentences in the revised methodology.
We are not entirely sure we understood this point; it is, however, worth noting that Figure 7 (in the original manuscript) has undergone some changes to help clarify our results, following other reviewers' suggestions (see Fig. R4).
Figure labels (figures included as supplement):
Figure R1 (new Fig. S2): values of the associated uncertainty for Chl-a (top row) and SST (bottom row) for three critical moments of this analysis (before, during, and after TCs) and for a random sample from the dataset. Note the larger uncertainty scale for Chl-a.
Learning from errors? The impact of erroneous example elaboration on learning outcomes of medical statistics in Chinese medical students
Background Constructivism theory has suggested that constructing students' own meaning is essential to successful learning. The erroneous example can easily trigger learners' confusion and metacognition, which may "force" students to process the learning material and construct meaning deeply. However, some learners exhibit a low level of elaboration activity and spend little time on each example. Providing instructional scaffolding and elaboration training may be an efficient method for addressing this issue. The current study conducted a randomized controlled trial to examine the effectiveness of erroneous example elaboration training on learning outcomes, and the mediating effect of metacognitive load, for Chinese students in medical statistics during the COVID-19 pandemic. Methods Ninety-one third-year undergraduate medical students were randomly assigned to the training group (n = 47) and the control group (n = 44). Prerequisite course performance and learning motivation were collected as covariates. The mid-term exam and final exam were treated as the posttest and delayed test to ensure the robustness of the training effect. Metacognitive load was measured as a mediating variable to explain the relationship between the training and academic performance. Results The training significantly improved both posttest and delayed-test performance compared with no training (Fposttest = 26.65, p < 0.001, partial η2 = 0.23; Fdelayed test = 38.03, p < 0.001, partial η2 = 0.30). The variation trend in metacognitive load in the two groups was significantly different (F = 2.24, p < 0.05, partial η2 = 0.20), but metacognitive load could not explain the positive association between the treatment and academic performance (β = −0.06, se = 0.24, 95% CI −0.57 to 0.43). Conclusions Erroneous example learning and metacognitive demonstrations are effective for academic performance in the domain of medical statistics, but their underlying mechanism merits further study.
Introduction
Medical statistics is a compulsory course for medical students at all grade levels in China. It mainly focuses on summarizing, collecting, presenting, and interpreting medical practice data and using them to estimate the magnitude of associations and to test hypotheses. To learn medical statistics well, students should recall the content of textbooks, fully understand the principles of statistics, construct their own knowledge framework, and internalize what they have learned based on their own experience and insights [1]. Constructivism holds that learning is not a process of passive absorption, repeated practice and memory strengthening but rather a process of actively constructing meaning through individual-environment interaction (i.e., assimilation and accommodation) based on the existing knowledge and experience of the students [2]. Understanding the learning material and constructing students' own meaning are vital to statistical learning, as to the learning of any other discipline. Elaboration processes are essential for meaningful learning, since they allow learners to organize knowledge into a coherent structure and integrate new information with existing knowledge structures [3]. Students do not come to class as "empty vessels" waiting to be filled but instead approach learning material with significant prior knowledge. They need to interpret the new material in terms of their knowledge [1]. Thus, the instructional design of medical statistics should be improved by helping students construct their own meaning for what they are learning.
Worked-out example learning
Existing studies have suggested that the worked-out example is an efficient and effective instructional tool [4]. The worked-out example (consisting of a problem formulation, solution steps, and the final solution) has proven successful in various domains. Cognitive science suggests that seeing worked-out problems first makes the task easier and leads to greater understanding in less time [5]. Worked-out examples are problems that are completely worked out, showing all the steps of a solution. Using self-explanation, students decide why those steps are correct. Furthermore, when students can explain to themselves why and how the correct answer is obtained, they gain a richer mental image of the process in different problem situations [6]. Cognitive psychology provides evidence that using worked-out examples and delaying actual problem-solving practice benefits novice learners [7]. For instance, in an introductory statistics class, students who studied worked-out problems demonstrated better academic performance on different statistical concepts [8]. Additionally, learners' active self-explanations of worked-out examples lead not only to enhanced near transfer but also to better far transfer [9]. This shows that the worked-out example plays a vital role in learning transfer. Without worked-out examples, students do not understand formulas properly and thus cannot apply them.
However, the adoption of a worked-out example is no panacea. The benefits that learners can obtain from it are not as great as we might think. There is evidence that worked-out examples may be more suitable for novice learners [10]. The theoretical explanation is that, with worked-out examples, learners are more inclined to engage in shallow processing, which makes it challenging to learn deeply and to elaborate [11]. The extent to which learners benefit from a worked-out example depends heavily on how well they explain the example's solution to themselves [12]. Worked-out examples also vary in effectiveness depending on learner characteristics (especially prior knowledge) and on the learning outcomes considered [13]. Researchers have found that many learners are passive or superficial explainers; they exhibit a low level of elaboration activity and spend little time on each worked-out example. Passive and superficial example elaboration spontaneously occurred more frequently than deep elaboration [14]. This is probably because the solution steps and the final solution have already been provided to the students, which may easily give them the illusion of understanding. That is, the given steps and correct answers may be detrimental to students' metacognitive monitoring and thereby affect learning outcomes.
Erroneous example learning
Why do we not provide students with an erroneous example to learn from instead of a correct example? An erroneous example is a worked example that incorporates at least one incorrect solution step [15]. It contains a common, well-documented misconception in a particular domain, a few self-explanation prompts/hints, and the correct solution [16]. These components form a scaffolding so that learners can identify errors, correct the erroneous problem-solving steps, and better use their abilities to generate solutions and solve problems correctly [17]. Experienced teachers can determine which parts of the learning material are prone to mistakes and can then compile erroneous examples, allowing students to see these examples, find the mistakes, and explain and correct them [18]. In this way, erroneous examples push students to further deepen their understanding of the content, help learners consolidate the concepts, methods, and skills they have learned, and improve learners' problem-solving and application abilities [19]. In addition, these examples can easily trigger learners' confusion, which has been proven to precede successful learning [20] because states of uncertainty and confusion may "force" students to deeply process the learning material.
Learners are consistently assimilating new information into their prior knowledge during the complex learning task. Deep learning occurs when there is a discrepancy in the information stream, and the discrepancy is identified and corrected [21]. As the discrepancy-reduction model suggests, learners will allocate more study time to difficult learning items (i.e., larger discrepancy between the current perceived state and the learning goal) than easier ones [22]. However, confusion must be effectively resolved by the learner, as unresolved confusion may have adverse consequences for learning [21].
Although some researchers worry that studying erroneous examples might appear to risk reinforcing students' misconceptions or introducing an inaccurate understanding, exploring students' errors can play a critical pedagogical role in teaching discussions [23]. There is evidence that the hypothetical errors of others can foster reflection, helping students recognize and correct errors in their own work [20]. This result seems to contradict the belief that showing students incorrect examples may reinforce existing misconceptions or introduce new errors, especially when the teaching materials identify the errors in the examples.
Metacognitive load and elaboration
Similar to learning worked-out examples, students need to be prompted to identify, explain, and correct the errors during erroneous example learning, which requires a high level of metacognitive monitoring and regulation. According to cognitive load theory (CLT), when learners invest effort in the construction and storage of schemata (e.g., in the process of erroneous example elaboration), they undertake a high level of germane cognitive load (i.e., the load imposed by cognitive processes directly relevant for learning) [24]. However, they also need to invest effort in monitoring this learning activity, which was called "metacognitive load" by Valcke [25]. Schwonke [26] believes that metacognitive load may be directly related to learning activities and learners' interaction. In the process of learning erroneous examples, students have to consistently monitor the discrepancy between the learning materials (i.e., erroneous examples) and their prior knowledge, and then they regulate their understanding/ knowledge structure. Therefore, in the present study, students' metacognitive load can be used as a potential mediating variable to explain the impact of erroneous examples on learning outcomes. Of course, erroneous example learning may be subject to passive or superficial explanations as well, and many students benefit less from this learning method [12]. According to instructional scaffolding theory and constructivist theory, when students cannot use certain knowledge and skills on their own, they can acquire new knowledge and skills through interaction with teachers [27]. Thus, a training procedure that aims to improve elaboration quality (especially metacognitive load) could be implemented. The instructors would act as a model and demonstrate metacognitive elaboration (e.g., self-explanation prompts, think aloud) utilizing a simple erroneous example. Then, the students would apply the elaboration behaviors demonstrated by the model. In this activity, teachers guide the teaching and enable students to master, construct, and internalize the behavior refined by the model to perform higher-level metacognitive activities and to increase the metacognitive load of students. The ultimate goal is to transfer the responsibility of a learning item to the student through scaffolding (i.e., elaboration training), while support fades over time [28]. We designed learning materials and elaboration training procedures based on previous research [13] and anticipated that the students who received the training would exhibit a higher metacognitive load and achieve better academic performance.
Based on the literature mentioned above, the present study explored the effectiveness of erroneous example elaboration on Chinese students' learning in medical statistics and the mediating effect of metacognitive load. The state of metacognitive load was examined across conditions (i.e., without elaboration training vs. with elaboration training) over time. We proposed the following hypotheses: 1. The posttest performance of the experimental group (i.e., erroneous example learning with elaboration training) would be significantly higher than that of the control group. 2. The academic performance of the control group would decrease significantly from posttest to delayed test after erroneous example learning was withdrawn, whereas this difference would not be found in the experimental group. 3. Metacognitive load would explain the higher performance of the experimental group; namely, the mediating effect of metacognitive load on the relationship between erroneous example learning and academic performance would be significant.
Participants and design
Ninety-one third-year undergraduate medical students were enrolled in a medical statistics course. Because of the impact of COVID-19, this course was divided into two parts. All students received medical statistics (Part 1) online from March 2020 to July 2020. After the COVID-19 situation in China improved, they returned to school and continued to learn medical statistics (Part 2) from September 2020 to January 2021. Ethics approval was granted by the West China Hospital, Sichuan University Institutional Human Research Ethics Committees (Verified in 2019 - No. 489). All students gave their informed consent before study inclusion.
Medical statistics is a compulsory course for the participants. Its content includes types of data, descriptive statistics for categorical data and continuous data, probability distributions, parameter estimation, hypothesis testing for categorical data and continuous data (e.g., t-test, ANOVA, ANCOVA, and Mann-Whitney U test), measures of association (e.g., Pearson's correlation coefficient), clustering analysis, simple linear regression, multiple regression analysis, data visualization, and application of statistical software (SPSS). Medical statistics (Part 2) mainly focuses on the content beyond hypothesis testing for categorical data. The course schedule consisted of 16 classes (once per week), and each class lasted 90 minutes.
Forty-four students were randomly assigned to the control group, in which the students were given a series of erroneous example items to engage in self-reflection (i.e., elaboration without training) in the in-class exercise section. After the students received their erroneous example items, the instructors demonstrated one or two worked-out examples on the blackboard (different from the erroneous example students received) relevant to the knowledge component of a given course. This procedure ensured that the two groups received instructors' guidance before the students explained the erroneous examples themselves. The intervention group included 47 students who were also asked to study erroneous examples in the coursework section. However, the instructors would act as a model and demonstrate metacognitive elaboration (e.g., self-explanation prompts, think aloud) utilizing a simple erroneous example before the students attempted self-explanations themselves. Then, the students applied the elaboration behaviors demonstrated by the model.
Erroneous examples
As shown in Fig. 1, each erroneous example item comprised four parts: question, incorrect answer, explanation of the error, and the student's answer. Students were informed that the solution was incorrect, and they were asked to study these materials and explain the error. Incorrect solutions often contain one or more common misconceptions relevant to a specific knowledge component. Students needed to respond to a multiple-choice question where they explained the hypothetical student's error. This kind of question was designed to encourage self-explanation. Instructors provided feedback if the students had any questions.
Prerequisite course performance
Medical statistics (Part 1) was considered a prerequisite course and included in the model as one of the covariates.
Learning motivation
We designed 11 items to assess students' learning interest in the course as one of the covariates. Some of the learning interest items were adapted from Marsh et al.'s academic interest scale [29]. The items included "I enjoy working on statistical problems", "Medical statistics is one of the things that is important to me personally", "I would even give up some of my spare time to learn new topics in medical statistics", and "When I'm working on medical statistical problems, time sometimes seems to fly by". The internal consistency of the learning interest scale was 0.73.
Metacognitive load
Participants' subjective mental effort during the in-class exercises (i.e., erroneous example learning) was assessed on a 5-point Likert scale. It was recorded 10 times (once per week). After each in-class exercise, the participants rated the amount of mental effort they had expended in monitoring/regulating their cognition/explanation/understanding of the erroneous examples. We used this kind of self-rating measurement because it has been demonstrated that people are quite capable of giving a reasonably accurate numerical indication of their perceived mental burden [30]. Studies have also shown that a unidimensional scale can yield reliable measures that are sensitive to relatively small differences in cognitive load and are valid, reliable, and unintrusive [31].
Tests
To evaluate the effects of erroneous example elaboration with training, we recorded the participants' mid-term and final exam scores. All participants received erroneous example learning until the mid-term exam. Thereafter, erroneous example learning was not included in the curriculum. Thus, the prerequisite course performance, mid-term exam, and final exam were considered the pretest, posttest, and delayed test, respectively.
Procedures
At the beginning of the semester, we required all students to complete the learning interest scale, and their medical statistics (Part 1) performance was recorded as a pretest score. Before the mid-term exam, they learned erroneous examples during in-class exercises (once per week) for 10 weeks. In each week of erroneous example learning, students evaluated their metacognitive load as related to understanding or explaining the examples.
Forty-four students were randomly assigned to the control group (i.e., elaboration without training), and 47 students were randomly assigned to the experimental group (i.e., elaboration with training). The two groups shared the same instructor team, learning materials, classroom, and schedule except for the erroneous example learning method. After learning for 10 weeks, all students participated in the mid-term exam (considered posttest) and continued to learn the remaining chapters (but without erroneous example learning). At the end of the semester, the final exam score was considered the delayed test score.
Data analysis
First, an unpaired t-test was used to examine the differences in baseline scores (i.e., learning motivation and prerequisite course performance) between the two groups. Second, one-way ANCOVA was conducted to test for significant differences in the outcomes between the two groups using prerequisite course performance and learning motivation as covariates. Third, we conducted a paired-sample t-test to examine the robustness of the effect of elaboration training (i.e., the difference between the posttest and delayed test for each group). Fourth, repeated-measures ANOVA was used to test whether metacognitive load changed significantly as students worked through erroneous examples. Fifth, the mediating effect of metacognitive load was examined by setting the group (i.e., control vs. experimental group) as the independent variable, posttest performance as the dependent variable, and prerequisite course performance and learning motivation as covariates. All data analysis and data cleansing procedures were conducted using SPSS 20.0 and the PROCESS macro for SPSS [32].
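To make this analysis pipeline concrete, the sketch below expresses roughly the same sequence of tests in Python rather than SPSS. It is only an illustration, not the authors' actual procedure or syntax; the file name and column names ("students.csv", "group", "pretest", "motivation", "posttest", "delayed", "meta_load") are hypothetical placeholders for the variables described above.

```python
# Illustrative sketch (not the authors' SPSS/PROCESS syntax): the analysis
# pipeline described above, expressed with pandas, pingouin and statsmodels.
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical file, one row per student

# 1) Baseline equivalence: unpaired t-test on a covariate
print(pg.ttest(df.loc[df.group == "experimental", "pretest"],
               df.loc[df.group == "control", "pretest"]))

# 2) One-way ANCOVA on posttest scores with pretest and motivation as covariates
print(pg.ancova(data=df, dv="posttest", between="group",
                covar=["pretest", "motivation"]))

# 3) Robustness: paired t-test between posttest and delayed test within each group
for g, sub in df.groupby("group"):
    print(g, pg.ttest(sub["posttest"], sub["delayed"], paired=True))

# (Step 4, the repeated-measures ANOVA, would need the weekly load ratings
#  reshaped to long format and is omitted here for brevity.)

# 5) Simple mediation (group -> metacognitive load -> posttest), analogous to
#    PROCESS Model 4, via two regressions and the product of coefficients
df["grp"] = (df.group == "experimental").astype(int)
a = smf.ols("meta_load ~ grp + pretest + motivation", df).fit().params["grp"]
b = smf.ols("posttest ~ meta_load + grp + pretest + motivation", df).fit().params["meta_load"]
print("indirect effect (a*b):", a * b)
```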
As shown in Table 1, the prerequisite course performance and learning motivation were equivalent between the two groups. Table 2 presents the results of the one-way ANCOVA, with learning motivation and prerequisite course performance as covariates. The results showed that erroneous example elaboration training significantly enhanced academic performance (i.e., mid-term exam and final exam scores). The participants in the experimental group also reported a significantly higher metacognitive load than those in the control group.
We conducted a paired-sample t-test to examine the robustness of the effect of elaboration training (see Table 3). After erroneous example learning was withdrawn, the academic performance of the control group declined significantly from the mid-term exam to the final exam, albeit with a relatively small effect size (Cohen's d = 0.48). In the experimental group, no significant difference in academic performance between the mid-term exam and the final exam was found, suggesting that the effect of erroneous example elaboration training was robust.
Repeated-measures ANOVA was performed with metacognitive load as the within-subject factor, group as the between-subject factor, and learning motivation and prerequisite course performance as covariates. Both the main effect of group (F = 2.77, p < 0.01, partial η² = 0.24) and the interaction between metacognitive load and group (F = 2.24, p < 0.05, partial η² = 0.20) were significant, indicating that the trends in the two groups differed significantly. This was especially apparent after the fourth round (see Fig. 2), where the metacognitive load of the experimental group was much higher than that of the control group.
The mediation analysis results (Model 4 in the PROCESS macro) showed that the indirect effect of the group on academic performance through metacognitive load was not significant.
Discussion
This study examined the effectiveness of erroneous example elaboration training on Chinese students' learning outcomes in the domain of medical statistics. Metacognitive load was regarded as a possible mechanism to explain the positive effect of erroneous example elaboration training on academic performance. The main findings are as follows: 1). Erroneous example elaboration training significantly improved both posttest and delayed-test performance compared with no training; 2). The effect of treatment was robust; 3). The variation trend in metacognitive load between the two groups was significantly different; the metacognitive load of the experimental group was much higher than that of the control group after the fourth round; and 4). The mediating effect of metacognitive load on the association between group and academic performance was not found.
Elaboration training and erroneous examples learning
As anticipated, learners in the experimental group exhibited higher posttest and delayed-test performance than those in the control group. Additionally, the academic performance of the control group decreased significantly from the posttest to the delayed test after erroneous example learning was withdrawn, whereas this significant difference was not found in the experimental group, suggesting the robustness of the treatment effect. Consistent with our results, Stark et al. [33] implemented short elaboration training for bank apprentices who studied worked-out examples and found that participants with elaboration training exhibited both deeper elaboration and more active metacognitive elaboration. The enhanced elaboration activities further improved learning outcomes.
In the experimental group of the present study, students were provided not only with well-designed erroneous examples but also with instructors' demonstrations. The instructor modeled how to set subgoals (i.e., planning), think aloud (i.e., monitoring), provide self-explanation prompts (i.e., monitoring and evaluation), and self-regulate (i.e., regulation). These metacognitive components were presented to the experimental students, but not to the controls, as a well-structured "package". Students in the experimental group were therefore more likely to achieve better performance than students learning from erroneous examples alone, even with instructors providing feedback when required. After erroneous example learning was withdrawn (i.e., from week 10 to week 16), students in the two groups received the same instructional procedures, but the academic performance of the control group declined significantly. It is likely that, because students without elaboration training or demonstrations engaged in more passive and shallow elaboration activities, they did not form a solid knowledge structure during the first 10 weeks. The performance of the experimental group decreased as well but did not reach significance, possibly because the final exam covered more content than the mid-term exam. In addition, erroneous example learning seems to improve students' grades, because both groups' mid-term and final exam scores were higher than their scores in medical statistics (Part 1). This result is consistent with those of previous studies. For example, Zhang found that in learning medical diagnostic knowledge, erroneous examples significantly improved the diagnostic ability of medical students [19]. Researchers have found that learners can deepen their understanding and application of knowledge in the process of interpreting correct and incorrect information [34]. Moreover, feedback can promote student learning [35]. The instructional design of erroneous example learning in both groups of the current study included immediate feedback provided by instructors; most students' confusion could be resolved in a timely manner rather than remaining stuck in their knowledge structure. Through feedback, learners can know whether they corrected the erroneous example the very first time they interpreted it, which helps them grasp, understand, and consolidate correct knowledge in a timely manner [18]. Metcalfe reviewed behavioral and neurological research and found that mistakes can greatly facilitate new learning [36]. This is partly because the state of confusion focuses students' attention on discrepancies and signals a need to initiate intensive deliberation and problem-solving processes. It also influences knowledge restructuring when impasse resolution or misconception correction leads to the reorganization of an incomplete or faulty mental model [21]. Richey et al. also suggested that students learn more from erroneous examples than from a problem-solving condition in an intelligent tutoring system [13]. Erroneous examples enhance the memory and generation of correct answers in the future, promote active learning, arouse learners' attention, and inform learners of error-prone knowledge points [36]. Thus, teachers should be encouraged to be open to mistakes and to actively use erroneous example learning in instructional design to facilitate students' learning.
Metacognitive load is not the underlying mechanism
We initially expected that the difference in metacognitive load between the two groups might explain the higher academic performance of the experimental group. However, there was no mediating effect of metacognitive load on the association between group and academic performance. We did find that the metacognitive load trajectories of the two groups were significantly different: after the fourth round, the metacognitive load of the experimental group was much higher than that of the control group. That is, students in the experimental group invested more mental effort in monitoring and regulating their cognition of the erroneous examples. Perhaps the demonstrations given by the instructors in the experimental group were more likely to elicit students' metacognitive load than the procedure used in the control group. Although metacognitive effort increased, it does not seem to have translated into academic performance. First, this may be because the complex learning material (i.e., erroneous examples) and the demonstrations in the experimental group placed substantial cognitive and metacognitive demands on students. All these demands compete for limited mental resources; they may sometimes be beneficial, sometimes neutral, and occasionally detrimental to learning [26]. In the present study, these metacognitive demands appear to have had a neutral effect on learning. Second, perhaps metacognitive knowledge/beliefs and the regulation/control of cognitive actions are more predictive than metacognitive load. Metacognitive load was assessed with a self-report scale, which is questionable as a source of data because people have no direct access to their mental processes [37]. In summary, erroneous example learning and metacognitive demonstration are effective for improving academic performance, but the underlying mechanism deserves further study.
Limitations and future studies
Some limitations of the present study should be noted. First, we did not measure metacognitive knowledge/beliefs and regulation, which may constitute an important psychological mechanism. Second, the use of a self-report scale to measure metacognitive load is questionable; fine-grained data such as think-aloud protocols or log files from an intelligent tutoring system would be preferable. Third, the sample size in each group was relatively small; thus, we may not have had enough statistical power, which may have affected the robustness of the results. Finally, our study did not include a group that studied worked-out examples, so the current results cannot demonstrate that erroneous example learning is more effective than worked-out example learning.
Despite these limitations, our research provides empirical evidence for applying erroneous example learning in medical statistics and shows that erroneous example elaboration training is an effective instructional design. Future research can refine the specific training process to improve training effectiveness and develop effective long-term strategies, such as presenting both erroneous examples and worked-out examples in the same workbook and conducting step-by-step problem-solving exercises. Simply exposing students to incorrect examples may not be enough to improve learning, as students may not understand what makes the error wrong [38]. Thus, it may be necessary for students to have sufficient scaffolds when learning from erroneous examples, especially if they do not have in-depth prior knowledge [39,40]. Second, future research should include a control group that uses worked-out examples with elaboration training in order to compare the learning effects of the two example types (i.e., worked-out vs. erroneous examples), address possible shortcomings of erroneous examples, and further explore their advantages for learning. Third, future studies could collect qualitative data such as survey comments and interviews to further examine learners' metacognitive load. Applying erroneous examples across multiple disciplines and fields to improve the generalizability of their learning effects is also a promising direction. As Metcalfe [36] wrote, an unwarranted reluctance to engage with errors may have held back our education. Encouraging educators and students to be open to mistakes is an important step toward facilitating learning.
The Credit Risk and Its Measurement, Hedging and Monitoring
Credit risk or default risk involves the inability or unwillingness of a customer or counterparty to meet commitments in relation to lending, trading, hedging, settlement and other financial transactions. Credit risk is generally made up of transaction risk or default risk and portfolio risk. The portfolio risk in turn comprises intrinsic and concentration risk. The credit risk of a bank's portfolio depends on both external and internal factors. The external factors are the state of the economy, wide swings in commodity/equity prices, foreign exchange rates and interest rates, trade restrictions, economic sanctions, Government policies, etc. The internal factors are deficiencies in loan policies/administration, absence of prudential credit concentration limits, inadequately defined lending limits for Loan Officers/Credit Committees, deficiencies in appraisal of borrowers' financial position, excessive dependence on collateral and inadequate risk pricing, absence of a loan review mechanism and post-sanction surveillance, etc. This paper points out the measurement, hedging and monitoring of credit risk.
Introduction
Credit risk management is part of comprehensive bank management and of the internal control system. Credit risk can be considered one of the major risks because it is associated with every active trade. Banks generally maintain a risk management strategy that incorporates the principles of the risk management process, including risk identification, monitoring and measurement. The aim of credit risk management is to maintain the efficiency of business activities and the continuity of the business.
Credit risk is the risk of loss arising when a debtor does not meet its obligations under the conditions of the contract and thus causes a loss to the creditor. These obligations arise from lending activities, trade and investment activities, and the payment and settlement of securities traded on the bank's own and foreign account. (Jílek, 2000) A counterparty may fail to honour its undertaking, repaying the due principal and interest only partially or not on time. Credit risk is part of most balance sheet assets and of a series of off-balance sheet transactions (bank acceptances or bank guarantees). (Kašparovská, 2006) Credit risk includes the default risk of the debtor, of the guarantor, or of derivative counterparties. This risk is present in all sectors of the financial market, but it is most important in banks, arising mainly from credit activities and off-balance sheet activities, such as guarantees. Credit risk also arises by entering into derivative transactions, securities lending, repurchase transactions and negotiation. For derivative transactions, an analysis of the creditworthiness of counterparties is conducted and its changes are monitored.
Measurement of the Credit Risk
It is necessary to measure credit risk. The purpose of credit risk measurement is the quantification of potential losses from credit operations. The amount of losses is never known with certainty; therefore, it has to be estimated. There are two basic approaches to defining credit losses and thus to quantifying credit risk.
The methods based on the absolute position in Credit risk
This approach is also known as "default-mode". Each borrower may be found at the end of the risk horizon in only two states -default or success. Credit risk then arises from default of the debtor.
Credit risk measurement through discrete models is typical for homogeneous portfolios (mainly banks' exposures to small retail clients with standardized credit products). Known methods using discrete models include CreditRisk+, the KMV model and CreditPortfolioView (Vlachý, 2006). These methods show the volume of balance sheet assets that is exposed to credit risk. When a loan is extended to a client, the credit risk, or potential loss, is represented by the entire amount of the loan together with accrued interest and fees, possibly corrected for the existence of quality collateral. Under this approach, the bank does not create reserves and adjusting entries for the extended loans. Reserves start to be formed only when the client breaches the terms of the loan agreement, as an expression of a possible credit loss.
The methods based on the expected rate of default on credit claims
This approach is also known as "market-to-market". The debtor may be located in any from n-located rating grades including the failure in the end of the risk horizon. In this approach, the credit risk arises from the debtor transition to a lower rating grade.
This approach uses continuous models to measure credit risk. Unlike discrete models, which allow only two possible states of the client (default or success), it allows multiple states that the debtor can occupy. This approach is more suitable for non-homogeneous portfolios such as loans to large companies. Individual risk categories are usually determined on the basis of external credit ratings. Credit migration is then the probability of transition from one category to another.
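To make the migration idea concrete, the toy calculation below propagates a single exposure through a one-year rating transition matrix, giving the probability of ending the horizon in each grade, including default. Every probability in the matrix is invented purely for illustration and does not come from the paper.

```python
# Toy sketch of the migration ("mark-to-market") view: a one-year rating
# transition matrix gives the probability of ending the horizon in each grade,
# including default. All probabilities are invented for illustration.
import numpy as np

grades = ["A", "B", "C", "Default"]

# Rows: current grade; columns: grade after one year (each row sums to 1)
transition = np.array([
    [0.90, 0.07, 0.02, 0.01],   # from A
    [0.05, 0.85, 0.07, 0.03],   # from B
    [0.01, 0.10, 0.79, 0.10],   # from C
    [0.00, 0.00, 0.00, 1.00],   # default is absorbing
])

start = np.array([0.0, 1.0, 0.0, 0.0])      # a single B-rated exposure

one_year = start @ transition                # distribution after one year
three_years = start @ np.linalg.matrix_power(transition, 3)

for grade, p1, p3 in zip(grades, one_year, three_years):
    print(f"{grade:>7}: 1y {p1:.3f}  3y {p3:.3f}")
```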
The differences between these two approaches to credit risk measurement are evident. Methods based on the absolute position take an optimistic view of credit risk: they assume that the loan will be repaid properly and on time, and reserves and adjusting entries are created only at the moment a problem arises. In contrast, methods based on the expected rate of default are more realistic. Based on the assessment of the client's creditworthiness, each loan is assigned a risk weight of default and the bank begins to form reserves and adjusting entries immediately. Individual risk weights are based on historical data and represent the relationship between the risk of default and the credit rating. In practice, methods based on the expected rate of default are used more often because they more faithfully reflect the credit risk to which the bank is exposed.
These methods estimate not only the amount of expected losses but also the probability of the loss. The total risk amount (the expected loss) is equal to the product of the probability of default and the amount of loss given default. Each loan is assigned to the appropriate risk category and carries its own risk weighting.
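As a minimal numerical illustration of that relationship, the following sketch computes the expected loss of a small portfolio as the product of a rating-dependent probability of default, a loss-given-default ratio and the exposure. All ratings, probabilities and amounts are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch: expected-loss calculation for a small loan portfolio.
# All ratings, default probabilities (PD) and loss-given-default (LGD) values
# below are hypothetical placeholders, not figures from the paper.

# Hypothetical one-year default probabilities per rating grade
PD_BY_RATING = {"A": 0.002, "B": 0.01, "C": 0.05, "D": 0.15}

def expected_loss(exposure, rating, lgd):
    """Expected loss = PD * LGD * exposure for a single loan."""
    pd_ = PD_BY_RATING[rating]
    return pd_ * lgd * exposure

# Each loan: (exposure at default, rating grade, loss given default)
portfolio = [
    (1_000_000, "A", 0.45),
    (  500_000, "B", 0.60),
    (  250_000, "C", 0.70),
]

per_loan = [expected_loss(ead, rating, lgd) for ead, rating, lgd in portfolio]
total_el = sum(per_loan)

for (ead, rating, lgd), el in zip(portfolio, per_loan):
    print(f"rating {rating}: exposure {ead:>9,} -> expected loss {el:,.0f}")
print(f"portfolio expected loss: {total_el:,.0f}")
```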
Intracardiac foreign body caused by cement leakage as a late complication of percutaneous vertebroplasty
To the Editor,
Percutaneous vertebroplasty (PVP) is a simple, convenient, and minimally invasive procedure for the management of back pain and spinal instability associated with osteoporotic compression fractures and other osteolytic spinal lesions [1]. Although very rare, cement leakage into the spinal canal or the vascular system has been reported as a troublesome late complication. In this report, we present a case of a foreign body in the heart revealed by transthoracic echocardiography and removed by open heart surgery.
A 75-year-old female patient was admitted for evaluation of progressively worsening dyspnea for 2 months. There was no prior medical history of dyspnea or intermittent palpitation, as she had been fairly active without difficulty until 2 months prior to admission. On examination, her vital signs were blood pressure 110/70 mmHg, heart rate 148 beats/min, respiratory rate 20 breaths/min, and body temperature 37.3℃. Physical examinations were unremarkable. Electrocardiography revealed atrial flutter with rapid ventricular response, whereas it had shown normal sinus rhythm 4 years prior to admission. Chest radiography showed an increased cardiothoracic ratio with mild pulmonary vascular congestion; in addition, radiographic high density was noted in the third lumbar vertebral body (Fig. 1A). With respect to her past medical history, she had undergone PVP at the level of the third and fourth lumbar spine 5 years previously for chronic back pain and had been asymptomatic since that time.
Figure 1. (A) Chest radiography shows the high density (arrows) of the 3rd lumbar vertebral body. (B) Coronal view of the chest computed tomographic scan shows linear high attenuating material (arrow heads) in the right atrium.
Transthoracic echocardiography exhibited severe global decreased wall motion of the left ventricle (LV) and poor systolic function (ejection fraction [EF], 27%), with a rapid heart rate (136 beats/min), a normal LV end-diastolic dimension of 4.6 cm, and a dilated left atrium (LA) of 4.6 cm. Moderate-to-severe tricuspid insufficiency (pulmonary artery systolic pressure [PASP], 57 mmHg) was noted, while there was no evidence of LA thrombus or pericardial effusion. Moreover, a calcified linear structure (approximately 6 cm), which was also confirmed by chest computed tomography (CT) (Fig. 1B), was found in the right atrium (RA) and right ventricle (RV). It was anchored in the RA adjacent to the inferior vena cava opening, passed through the tricuspid valve, and reached around the posterior wall of the RV outflow tract (Fig. 2). As a result of malcoaptation of the tricuspid valve caused by the linear structure passing through the tricuspid opening, a laterally directed eccentric jet of moderate-to-severe tricuspid insufficiency was demonstrated. With regard to the increased pulmonary artery pressure, no pulmonary complications of foreign body embolism could be found on chest CT.
Figure 2. (A) In the subcostal view, the foreign body (arrow heads) is attached to the right atrium (RA) near the opening site of the inferior vena cava. (B) Parasternal short axis view reveals that the echogenic linear structure (arrow heads) in the RA passed through ...
The patient had commenced diuretics with furosemide (increased to 80 mg daily) and β-blockers with carvedilol (up to 12.5 mg twice daily) for dyspnea and atrial flutter. The symptoms of chest discomfort and dyspnea seemed to be related at least in part to the foreign body in the heart. We considered the foreign body in the RA and RV to be a potential source of pulmonary thromboembolism or infarction in the near future and thus recommended surgical removal, even if the etiology of the clinical symptoms was not entirely correlated with the foreign body. Surgical findings revealed that the 6 cm long linear intracardiac foreign body was a calcified and fragile material (Fig. 3), and that it was attached to the confluence site of the inferior vena cava and RA, and reached to the RV. The foreign body was excised at its attachment, preserving the tricuspid valve.
Figure 3. (A) Operation photograph showing a linear material (arrowheads) in the right ventricle and right atrium. (B) Photograph of gross specimens showing cement materials that were removed from the right atrium and ventricle; the foreign body was broken into two pieces. ...
On follow-up echocardiography, systolic function was not much improved (EF 33%); however, the severity of tricuspid regurgitation was decreased from moderate to mild. The patient subsequently became free from dyspnea and chest discomfort, while atrial flutter remained.
After discharge, she visited the outpatient clinic regularly for management of heart failure.
PVP is an effective, minimally invasive procedure used mainly for the treatment of vertebral fractures in osteoporosis and metastasis. During the procedure, polymethylmethacrylate is injected into the lesion of the vertebral body, and organizes within a short time. Complications after PVP include bleeding at the puncture site, inaccurate needle placement, pain exacerbation, local infection, leakage of polymethylmethacrylate cement into the spinal canal or paravertebral tissues, perivertebral venous leakage, and pulmonary embolism [2]. There is always a risk of cement migration into the vena cava, which may result in pulmonary embolism. Vasconcelos et al. [3] have reported an incidence of 16.6% for minor passage of cement into perivertebral veins, including one case in which a minute amount of cement reached the inferior vena cava. Other cases have reported multiple cardiac perforations after PVP [4].
Usually, symptoms or signs of cement leakage complications occur during, immediately or within several months after the procedure. However, in the present case, the foreign body could not enter the pulmonary circulation because of the length and rigid nature of the material; otherwise, there would have been catastrophic complications. Thus, we speculated that the pathological process of heart failure progressed gradually, taking 5 years for the clinical manifestation of dyspnea to become apparent.
As regards the cause of heart failure, there was a possibility of acute exacerbation of chronic heart failure, and some explanations seem possible. Other than the conventional risk factors, such as old age, hypertension and diabetes, the shortening of ejection time or diastolic relaxation time in rapid heart rate could cause heart failure, such as tachycardia-induced heart failure [5], as is frequently seen in patients with atrial flutter or fibrillation. Although the foreign body might have increased tricuspid insufficiency, it was not the only cause of the heart failure. In other words, we do not know the cause of the aggravation of dyspnea. However, in this case, the symptom improved after heart rate control. The foreign body could increase PASP and tricuspid insufficiency severity. High pulmonary artery pressure can be caused by left heart failure. The foreign body was not solely responsible for dyspnea and could not have been an immediate cause of dyspnea. When the cause of heart failure is unknown, the symptom may be attributed to tricuspid insufficiency exacerbated by a foreign body, although pharmacological treatments such as diuretics and digoxin are used in heart failure. A definite relationship between the foreign body and atrial flutter with tricuspid insufficiency leading to heart failure could not be demonstrated in the present case. Although the foreign body was found incidentally, it might have been the source of pulmonary thromboembolism, valvular heart disease, or cardiac perforation in the near future. Because of the jamming caused by the linear structure in the tricuspid valve, we assumed that the heart failure with atrial flutter in our patient could be partly attributed to the foreign body; this is supported by the patient's clinical course after removal of the foreign body. Thus, given the deleterious effects of a foreign body on cardiovascular complications, surgical removal of the foreign body should be performed.
Here, we report a foreign body in the RA and RV complicating PVP 5 years previously. In this case, we exerted effort to prevent complications arising due to the foreign body. It is important to consider the possibility of late manifestation of complications; a high index of suspicion is also required in patients who have a cardiac foreign body, especially those with a history of PVP.
Ten years of asthma admissions to adult critical care units in England and Wales
Objectives To describe the patient demographics, outcomes and trends of admissions with acute severe asthma admitted to adult critical care units in England and Wales. Design 10-year, retrospective analysis of a national audit database. Setting Secondary care: adult, general critical care units in the UK. Participants 830 808 admissions to adult, general critical care units. Primary and secondary outcome measures Demographic data including age and sex, whether the patient was invasively ventilated or not, length of stay (LOS; both in the critical care unit and acute hospital), survival (both critical care unit and acute hospital) and time trends across the 10-year period. Results Over the 10-year period, there were 11 948 (1.4% of total) admissions with asthma to adult critical care units in England and Wales. Among them 67.5% were female and 32.5% were male (RR F:M 2.1; 95% CI 2.0 to 2.1). Median LOS in the critical care unit was 1.8 days (IQR 0.9–3.8). Median LOS in the acute hospital was 7 days (IQR 4–14). Critical care unit survival rate was 95.5%. Survival at discharge from hospital was 93.3%. There was an increase in admissions to adult critical care units by an average of 4.7% (95% CI 2.8 to 6.7)/year. Conclusions Acute asthma represents a modest burden of work for adult critical care units in England and Wales. Demographic patterns for admission to critical care unit mirror those of severe asthma in the general adult community. The number of critical care admissions with asthma are rising, although we were unable to discern whether this represents a true increase in the incidence of acute asthma or asthma severity.
As the authors acknowledge, the main limitation lies in the diagnosis of asthma itself. The authors base their data on the diagnosis made by the physician who is actually referring the patient for acute hospital care. Indeed this is a limitation in virtually all studies of asthma epidemiology, and in this respect the authors supply the best possible data for answering their research question. However, I have one clear comment. When studying adult asthma, patients under 20 years of age should have been excluded. Moreover, the likelihood of a COPD diagnosis instead of an asthma diagnosis increases with age, and therefore it would have been logical to omit the patients older than 60 years of age. Probably the main results and conclusions would not differ, but in that case they would have been based on a group of patients who are adult and less likely to have COPD. The authors should at least provide the data and conclusions based on this age-specific population.
Another limitation (in this case not extensively mentioned by the authors) is that the severity of asthma and/or acute symptom presentation are unknown. Hospital admissions are normally dependent on the acute presentation of symptoms, the severity of asthma, the availability of hospital beds at that specific moment and the availability of informal care at home for the patient. An increase in admissions could be due to all these factors and most likely to a combination of them. It would be good to add this limitation to the Discussion section. In general the introduction is somewhat light on description of the critical care units, more background would be helpful. It would be better to refer to costs of asthma, proportion of costs for acute exacerbations in the UK or Europe rather than just specifically for America.
REVIEWER
Methods: More detail on the denominator required. How many units are now operating and covered by ICNARC (60-70% of how many)? While a reference has been given for ICNARC methods a little more detail should be given, in particular the coding scheme used.
The statistics are inadequately described. Needs more information on the methods used for the Rate Ratios. Why were the data log-transformed? I assume this is because the data are skewed but this must be more explicitly stated before judgement on the suitability of the methods. Reference
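For context, an average annual percentage change in admission counts of the kind reported in the abstract is commonly estimated with a log-linear (Poisson) regression of yearly counts, which directly yields a rate ratio per year. The sketch below illustrates that generic approach with made-up counts; it is not necessarily the method the authors used, and the numbers are not the ICNARC data.

```python
# Illustrative sketch only: a log-linear (Poisson) regression of yearly
# admission counts yields a rate ratio per year and hence an average annual
# percentage change. The counts below are placeholders, not the ICNARC data.
import numpy as np
import statsmodels.api as sm

years = np.arange(2002, 2012)                     # 10 hypothetical years
counts = np.array([950, 1010, 1060, 1080, 1150,   # hypothetical admissions
                   1190, 1260, 1300, 1380, 1450])

X = sm.add_constant(years - years[0])             # year index as the predictor
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

rate_ratio = np.exp(model.params[1])              # multiplicative change per year
ci_low, ci_high = np.exp(model.conf_int()[1])     # 95% CI on the same scale

print(f"rate ratio per year: {rate_ratio:.3f} ({ci_low:.3f} to {ci_high:.3f})")
print(f"average annual increase: {(rate_ratio - 1) * 100:.1f}%")
```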
RESULTS & CONCLUSIONS
Results: 1st sentence - I assume this 11,948 is 1.4% of the total admissions (for all conditions) to adult critical care units? This is not clear. It could be clarified with a better basic demographics table, which would make this an easier paper to read and would lead on to Table 2, which is difficult to interpret (it is unclear what the denominator is for each cell). A good template for this is Table 1 in the Gupta et al 2004 paper (ref 22), although with less detail as this paper is a more limited analysis. Table 2 - the results text states this shows rates while the title on the table states the data are numbers of admissions?
GENERAL COMMENTS
In many cases there was simply not enough information provided to judge the methods/statistics used. This paper does not attempt more detailed analysis or attempt to answer questions on the causes of its findings (as, for example, the Gupta et al paper does) but appears to be a first description of the data available on adult asthma in ICNARC. Therefore, as a first description, this paper should aim to provide more demographic detail on the dataset. This paper is light on analysis, and what has been done and why needs to be more clearly explained.
VERSION 1 -AUTHOR RESPONSE
Reviewer 1
1. As the authors acknowledge the main limitation lies in the diagnosis asthma itself. The authors base their data on the diagnosis made by the physician who is actually referring the patient for acute hospital care. Indeed this is a limitation in virtually all studies of asthma epidemiology and in this respect the authors supply the best possible data for answering their research question. However, I have one clear comment. When studying adult asthma, patients under 20 years of age should have been excluded. Moreover, the likeliness of a COPD diagnosis instead of an asthma diagnosis increases with age and therefore it would have been logical to omit the patients older than 60 years of age. Probably the main results and conclusions would not differ, but in that case they would have been based on a group of patients who are adult and have less likely COPD. The authors should at least provide the data and conclusions based on this age-specific population.
Unlike most routine health data sources, ICNARC data are validated by trained data collectors and the admitting critical care doctors. Therefore we can be as secure as possible that the diagnosis of asthma is correct, even at the extremes of age. Nonetheless, we recognise that diagnostic and coding errors are a possibility, particularly in the young and the elderly. Sensitivity analyses in which we restrict the data by age would be one way of investigating this issue, but this is unfortunately difficult with the particular dataset since we have only been provided with aggregate data. This is because of concerns regarding small numbers in some cells and the associated risk of disclosing identities. We have therefore now reflected on this issue in some detail when considering the potential strengths and limitations of this work in the revised Discussion section of the paper.
2. Another limitation (in this case not extensively mentioned by the authors) is that the severity of asthma and/or acute symptom presentation are unknown. Hospital admissions are normally dependent on the acute presentation of symptoms, the severity of asthma, the availability of hospital beds at that specific moment and the availability of informal care at home for the patient. An increase in admissions could be due to all these factors and most likely to a combination of them. It would be good to add this limitation to the Discussion section. We have added a paragraph in the revised Discussion section in which we now reflect on this concern.
Reviewer 2 1. Introduction: In general the introduction is somewhat light on description of the critical care units, more background would be helpful. It would be better to refer to costs of asthma, proportion of costs for acute exacerbations in the UK or Europe rather than just specifically for America. The Introduction has been revised to provide international readers with more details about critical care units in the UK. Furthermore, we now also include data and supporting references for UK estimates for the costs of asthma.
2. Methods: More detail on the denominator required. How many units are now operating and covered by ICNARC (60-70% of how many)? While a reference has been given for ICNARC methods a little more detail should be given, in particular the coding scheme used. We have revised the Methods to include more details on the denominator and have also expanded the Discussion to reflect on some of the challenges with unequivocally establishing what constitutes a critical care unit in the UK.
3. The statistics are inadequately described. Needs more information on the methods used for the Rate Ratios. Why were the data log-transformed? I assume this is because the data are skewed but this must be more explicitly stated before judgement on the suitability of the methods.
We have included more detail on the statistical methods used and the reasons for doing so in the manuscript. Expenditures, United States, 1998-1999 to 2008-2009
8. In many cases it was simply there was not enough information provided to judge the methods/statistics used. This paper does not attempt more detailed analysis or attempt to answer questions on the causes for its findings (as for example the Gupta et al paper) but appears to be a first description of the data available on adult asthma in ICNARC. Therefore as a first description this paper should aim to provide more demographic detail on the dataset. This paper is light on analysis and what has been done and why needs to be more clearly explained.
As noted above and in the revised manuscript, ICNARC supply much of the data in aggregated form. The opportunity to undertake further analyses was therefore confined to the limited raw data that were provided. In revising the paper, we have tried to be as explicit as possible about the methods ICNARC use for data collection and processing.
REVIEWER
Dr Rebecca Ghosh, Research Associate, Small Area Health Statistics Unit (SAHSU), MRC-HPA Centre for Environment and Health, Imperial College London, UK. I declare that I have no competing interests.
REVIEW RETURNED
15-Aug-2013 -The reviewer completed the checklist but made no further comments.
Expression of the gene encoding blood coagulation factor VIII without domain B in E. coli bacterial expression system
In this article, we have demonstrated the feasibility of generating an active form of recombinant blood coagulation factor VIII using an E. coli bacterial expression system as a potential treatment for hemophilia type A. Factor VIII (FVIII), an essential blood coagulation protein, is a key component of the fluid phase blood coagulation system. So far, all available recombinant FVIII formulations have been produced using eukaryotic expression systems. Mammalian cells can produce catalytically active proteins with all the necessary posttranslational modifications. However, cultivating such cells is time-consuming and highly expensive, and the amount of the obtained product is usually low. In contrast to eukaryotic cells, bacterial culture is inexpensive and allows the acquisition of large quantities of recombinant proteins in a short time. With this study, we aimed to obtain recombinant blood coagulation factor VIII using the E. coli bacterial expression system, a method not previously explored for this purpose. Our research encompasses the synthesis of blood coagulation factor VIII and its expression in a prokaryotic system. To achieve this, we constructed a prokaryotic expression vector containing a synthetic factor VIII gene, which was then used for the transformation of an E. coli bacterial strain. The protein expression was confirmed by mass spectrometry, and we assessed the stability of the gene construct while determining the optimal growth conditions. The production of blood coagulation factor VIII by the E. coli bacterial strain was carried out on a quarter-technical scale. We established the conditions for isolation, denaturation, and renaturation of the protein, and subsequently confirmed the activity of FVIII.
Introduction
Coagulation factor VIII (antihemophilic factor A) is a glycoprotein synthesized mainly in hepatocytes, but also in the kidneys, endothelial cells, and lymphatic tissue. It is one of the largest coagulation factors, with a molecular weight of 293 kDa. Factor VIII circulates in the bloodstream bound to von Willebrand factor in a noncovalent complex. This association was first described by Vehar et al. (1984) and Fang and Wang (2007). The active form of factor VIII (FVIIIa) functions as a nonenzymatic cofactor for the prothrombinase and tenase complexes in the intrinsic coagulation pathway. It enhances the activation of factor X by activated factor IX (FIXa) in the presence of phospholipids and calcium ions. The lack of this protein, which is characteristic of hemophilia type A, leads to a severe blood coagulation disorder. Factor VIII is composed of 2332 amino acids and consists of six domains, namely A1-A2-B-A3-C1-C2 (Kurachi and Davie, 1982).
In the bloodstream, proteolytic processes, specifically furin protease, cleave the factor VIII protein into two chains: a heavy chain (HC) of 200 kDa (A1-A2-B) and a light chain (LC) of 80 kDa (A3-C1-C2). These chains are connected by a covalent bond. Limited proteolysis of the B chain produces a heterogeneous population of active factor VIII forms with molecular weights ranging from 90 to 200 kDa. The smallest of these forms, composed of an HC of 90 kDa and an LC of 80 kDa, represents the active coagulation factor VIII. Notably, the resulting active form lacks the glycosylation sites of the B domain, specifically amino acids Arg740 to Glu1649 (Stoilova-McPhie et al., 2014). Within the B domain, the N-terminal region's Ser743 is connected to the C-terminal's Glu1638, forming a specific site known as SQ comprising 14 amino acids (SFSQNPPVLKRHQR). This site is located between domains A2 and A3. The presence of this site enables intracellular cleavage of the 170 kDa single chain (SC) and the formation of the 80-90 kDa active complex. It is worth noting that the presence of specific amino acids at positions −1 and −4 relative to the Glu1649 site in SQ allows proteolytic cleavage by furin protease.
In its inactive or minimally active state, factor VIII serves as a cofactor in the blood coagulation process. Activation as a cofactor occurs only after proteolytic cleavage at the SQ site (Thompson, 2003; Ngo et al., 2008). Figure 1 illustrates the three-dimensional structure of B domain-deleted coagulation factor VIII. The gene responsible for encoding coagulation factor VIII is located on the X chromosome (Xq28) in humans. When a mutation occurs in this gene, it leads to a congenital bleeding disorder known as hemophilia A. It is important to note that this mutation primarily arises in male germ cells. The effect of this mutation is either the reduced or absent synthesis of factor VIII or the production of an abnormal protein (Hong et al., 2007). Treatment of bleedings in the course of hemophilia and related disorders consists of supplementation of the missing coagulation factor, i.e., its substitution (Marchesini et al., 2021).
DNA manipulation, transformation, and sequencing
DNA restriction, ligation, and gel electrophoresis were performed using standard techniques (Sambrook et al., 1989). All bacterial cell transformations with plasmid DNA were carried out through electroporation using 1 mm cuvettes (BTX) and the MicroPulser™ electroporator (BioRad). Electrocompetent E. coli cells were prepared following the standard techniques (Sambrook et al., 1989).
Plasmid DNA was isolated using the Plasmid Mini Isolation Kit (A&A Biotechnology), following the manufacturer's instructions. Restriction enzymes, ligase, and DNA ladder were purchased from Roche, New England Biolabs, and Fermentas, respectively, and their usage followed the instructions provided by the manufacturers. The amplification of DNA was performed using the Biotools DNA Polymerase (Biotools B&M Labs) on a PTC-100 cycler from MJ Research, in accordance with the manufacturer's guidelines. For protein molecular weight determination, prestained markers were used, namely the Multicolor Broad Range Protein Ladder (ThermoFisher) and the Full Range Rainbow (Amersham). DNA sequencing was conducted at Genomed SA (Warsaw, Poland), utilizing their commercially available sequencing facility. The primers used in this study were synthesized at the Institute of Biochemistry and Biophysics, Polish Academy of Sciences. PCR products were separated on agarose gels and purified using the Gel-out Kit (A&A Biotechnology), following the manufacturer's instructions. The Chromogenix Coamatic® Factor VIII Kit was purchased from DiaPharma Group, Inc.
Host strains
The laboratory strains of E. coli, namely NM522, DH5α, and E. coli Z, are derived from E. coli K12, a Gram-negative bacterium that belongs to the Enterobacteriaceae family. These strains exhibit characteristic colony morphology on TSA, forming round, flat, shiny colonies with smooth borders. In liquid LB medium, they grow uniformly as suspended cultures. Furthermore, these strains are capable of growth in liquid MM medium when supplemented with thiamine (2 μg/ml) and L-proline (at concentrations ranging from 50 to 200 μg/ml). The optimal growth conditions for these strains include a temperature of 37°C and a pH range of 6.5-7.5. Table 1 provides an overview of the host strains and their respective genotypes.
DNA origin
PCR amplification in this study utilized vectors that contained fragments of the factor VIII gene as a source of DNA. Table 2 provides information on clone IDs, collections, descriptions, and origins of the plasmids used in the experiment. For the synthesis of the factor VIII gene, primers and synthetic oligonucleotides were purchased from the Institute of Biochemistry and Biophysics, Polish Academy of Sciences.
Plasmid vectors
The gene fragment of factor VIII was synthesized (GenScript USA Inc., US). Subsequently, it was cloned into the cloning vectors pBluescript II SK(−) and pUC19 using standard techniques (Sambrook et al., 1989). The plasmid vector pBluescript II (GenBank Accession No. ATCC 87047) carries the ColE1 replicon and the ampicillin resistance gene (ampR). On the other hand, plasmid pUC (GenBank Accession No. ATCC 37254) also contains the ColE1 replicon and the ampR gene. The plasmid pDB (Hartman and Mendelovitz, 1996), containing the deoP1P2 promoter (Valentin-Hansen, 1982; 1984), the transcription terminator trpA (Pharmacia Biotech), and the tetracycline resistance gene (tetR), was used for the construction of the prokaryotic expression system.
Construction of the gene encoding blood coagulation factor VIII without the B domain
To construct a synthetic gene for human coagulation factor VIII, we designed specific primers and synthetic oligonucleotides. These were utilized in the synthesis of factor VIII gene fragments. Given the large size of the blood coagulation factor VIII gene, which consists of 4320 base pairs, we devised three cloning vectors: pBluescript II SK(−) (referred to as pBluescript I), pUC19 (referred to as pUC II), and pUC19 (referred to as pUC III). In the primer names, the letter F denotes sense and R denotes antisense; the restriction sites are shown in bold. Figure 2 illustrates the cloning strategy employed for plasmid pBluescript I, highlighting the restriction sites and factor VIII gene fragments, including the gene fragment from the vector IOH10704 (Table 2), as well as gene fragments A, B, and C. Fragments A, B, and C were constructed using the DNA Assembly Method (Sambrook et al., 1989). This method involved the chemical synthesis of shorter fragments, followed by hybridization and ligation to form the longer molecule. Subsequently, PCR amplification with specific primers was performed to amplify the correct DNA strands (Nowak et al., 2015).
Table 3 presents the primers used for the synthesis of the factor VIII gene segment referred to as Fragment 1. Figure 3 depicts the cloning plasmid pUC II, illustrating the restriction sites and fragments of the factor VIII gene. These include the gene fragments from the vectors F82385511 and F81629998 (Table 2), as well as gene fragments D, E, and F. Fragments D, E, and F were constructed using the DNA Assembly Method (Sambrook et al., 1989).
Table 4 provides the primers employed for the synthesis of the factor VIII gene segment known as Fragment 2. Figure 4 illustrates the cloning plasmid pUC III, displaying the restriction sites and fragments of the factor VIII gene. This includes the gene fragment from the vector F84641352 (Table 2), the synthetic oligonucleotide OligoF8P, and gene fragment G. Fragment G was constructed using the DNA Assembly Method (Sambrook et al., 1989). Table 5 presents the primers used for the synthesis of the factor VIII gene segment referred to as Fragment 3. The final plasmid, pBluescript + Fragment 1 + Fragment 2 + Fragment 3, was constructed by cloning Fragment 2 from the cloning vector pUC II and Fragment 3 from the cloning vector pUC III into the cloning vector pBluescript I + Fragment 1 (Fig. 5).
All procedures throughout the study were conducted using standard techniques (Sambrook et al., 1989). To change the restriction sites (SalI, ClaI) back into the original sequence of the blood coagulation factor VIII, site-directed mutagenesis was performed using the QuikChange Site-Directed Kit (ThermoFisher). Finally, the sequence of the entire plasmid was confirmed, and the resulting construct was named pBluescriptFVIII.
Construction of the prokaryotic expression vector pDBFVIII
The pDBFVIII prokaryotic expression vector was constructed using the gene encoding factor VIII obtained from the pBluescriptFVIII plasmid and the pDB expression vector. To introduce the restriction sites (SacII, XhoI) into the pDB vector, we utilized the QuikChange Site-Directed Kit (ThermoFisher) for performing site-directed mutagenesis. The first mutagenesis reaction was performed to remove the XhoI restriction site located in the pDB transcription terminator trpA. Subsequently, the SacII and XhoI restriction sites were added. The resulting plasmid was then cleaved using the SacII and XhoI enzymes. The 4320 bp fragment of the FVIII gene, obtained by digesting the corresponding region from the recombinant plasmid pBluescriptFVIII with the same enzymes, was inserted into the cleaved pDB vector in a clockwise orientation. To restore the original transcription terminator sequence trpA, another round of site-directed mutagenesis was performed using the QuikChange Site-Directed Kit from ThermoFisher. Finally, the complete plasmid sequence was verified to ensure its accuracy, and the resulting construct was designated as pDBFVIII (Fig. 6). The constructed prokaryotic expression plasmid carried a tetracycline resistance gene, conferring resistance to tetracycline antibiotics. Transcription initiation was controlled by the deoP1P2 promoter derived from E. coli.
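The site-engineering steps above amount to checking where, and how many times, each recognition sequence occurs in the vector. The sketch below illustrates such a check in Python for the SacII (CCGCGG) and XhoI (CTCGAG) recognition sequences; the example plasmid string and the function names are hypothetical and are not taken from the actual pDB or pDBFVIII sequences, which are not reproduced in this paper.

```python
# Hedged sketch: locate restriction sites on a (hypothetical) plasmid sequence.
# Recognition sequences: SacII = CCGCGG, XhoI = CTCGAG. The example sequence is
# made up; the real pDB/pDBFVIII sequences are not given in the text.

RECOGNITION_SITES = {"SacII": "CCGCGG", "XhoI": "CTCGAG"}

def find_sites(sequence: str, site: str) -> list[int]:
    """Return 0-based start positions of every occurrence of `site` in `sequence`."""
    sequence = sequence.upper()
    positions, start = [], 0
    while (idx := sequence.find(site, start)) != -1:
        positions.append(idx)
        start = idx + 1
    return positions

if __name__ == "__main__":
    plasmid = "ATGCCGCGGTTACTCGAGGGCCGCGGAA"  # toy sequence, not the real vector
    for enzyme, site in RECOGNITION_SITES.items():
        hits = find_sites(plasmid, site)
        label = "unique" if len(hits) == 1 else f"{len(hits)} sites"
        print(f"{enzyme} ({site}): {label} at {hits}")
```

In practice, a cloning step like the one described requires the two sites to be unique in both the vector and the insert, which is exactly the condition such a scan verifies.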
Cell culture
The experimental work involved both laboratory-scale cultures in flasks and quarter-technical scale cultures in bioreactors. At the laboratory scale, the E. coli Z strain carrying the pDBFVIII plasmid derivatives was aerobically cultured at 37°C in LB or MM medium supplemented with tetracycline (100 μg/ml) until the optical density (OD) at 600 nm (OD600) reached a range of 0.5-1.5.
For the quarter-technical scale cultures, bacterial cells were obtained from a glycerol stock stored at −70°C. The E. coli Z strain containing the pDBFVIII plasmid with the factor VIII gene was initially grown for 12 h at 37°C in shaking flasks containing 50 ml of MM medium supplemented with a 50% glucose solution and tetracycline (100 μg/ml) until the OD600 reached approximately 1.0. These flasks (4 × 50 ml) served as the inoculum for the Bioflo 310 (New Brunswick Scientific) 7.5 l bench-top bioreactor with a culture volume of 4 l, as well as the Bioflo 415 (New Brunswick Scientific) 15 l bench-top bioreactor with an 8 l culture volume.
During the fermentation process, no antibiotics were added to the media. The cells were grown for 16-17 h at 37°C with an aeration rate of 5 l/min. To maintain the glucose concentration within the range of 70-120 mg/dl during the exponential growth phase, a 50% glucose solution was added. The pH was controlled at around 7 throughout the entire run by adding 16% NH₄OH solution. Once the OD600 reached approximately 25-37, the glucose feeding rate was automatically limited until the glucose concentration in the medium decreased to 0 g/dl. Subsequently, glucose feeding was controlled by the pH-stat method using a cascade system. Glucose was automatically added by the controller whenever the pH exceeded the set point of around pH 7 (with a dead band of 0.01). The glucose level was maintained within the concentration range of 30-40 mg/dl using the pH-stat approach. After induction, the culture was grown for an additional 4-7 h, reaching the stationary phase of growth.
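The pH-stat feeding logic described above is simple to state in code: dose glucose only when the measured pH drifts above the set point by more than the dead band. The following sketch is a minimal illustration of that rule; the pulse volume, function names, and pump interface are assumptions made for the example and do not describe the actual Bioflo controller cascade.

```python
# Minimal sketch of a dead-band pH-stat glucose feed, as described above.
# All names, thresholds, and the pump interface are illustrative assumptions.

PH_SETPOINT = 7.0      # target pH
DEAD_BAND = 0.01       # no action while pH <= setpoint + dead band
PULSE_ML = 5.0         # hypothetical glucose pulse volume per actuation

def ph_stat_step(ph_reading: float, add_glucose_pulse) -> bool:
    """Add a glucose pulse only when pH drifts above the set point plus the dead band.

    Rising pH signals carbon limitation, so feeding glucose pulls the pH back
    toward the set point.
    """
    if ph_reading > PH_SETPOINT + DEAD_BAND:
        add_glucose_pulse(PULSE_ML)
        return True
    return False

if __name__ == "__main__":
    dosed = []
    ph_stat_step(7.02, dosed.append)   # above the dead band -> feed
    ph_stat_step(7.005, dosed.append)  # inside the dead band -> no action
    print(dosed)  # [5.0]
```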
Due to the minimal change in the culture volume (below 5%), the calculations for the batch fermentation were performed using a mathematical model based on Monod kinetics,

dX/dt = μX,

where X is the number or mass of cells (mass/volume), t is time, and μ is the specific growth rate constant (1/time).
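For orientation, the closed-form consequence of this balance is the standard exponential-growth solution and the usual doubling-time relation. The step below is a textbook identity rather than a derivation given in the paper, but it is consistent with the growth rates and generation times reported for the two bioreactor runs further on.

```latex
\frac{dX}{dt} = \mu X
\;\Longrightarrow\;
X(t) = X_0\, e^{\mu t},
\qquad
t_d = \frac{\ln 2}{\mu}.
% With mu = 0.2 1/h this gives t_d = ln(2)/0.2 ~ 3.5 h, and with mu = 0.35 1/h
% it gives t_d ~ 2.0 h, matching the generation times quoted below.
```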
Isolation, dissolution, and renaturation of inclusion bodies
The procedure for obtaining the recombinant blood coagulation factor VIII in the form of inclusion bodies involved several steps. First, the E. coli Z cells expressing the protein were harvested through centrifugation at 9000 × g for 15 min at 4°C. The pelleted cells were then resuspended in a solution containing 100 mM Tris-HCl (pH 7.5), 500 mM NaCl, 10 mM EDTA, 0.35 mg/ml lysozyme, and 0.5 mM β-mercaptoethanol. Additionally, 1% PMSF (protease inhibitor) was added to prevent protein degradation. The bacterial suspension was gently mixed for 30 min at 20°C and subsequently lysed by sonication. Afterward, the lysate was centrifuged at 18 000 × g for 20 min at 4°C.
The resulting pellet, containing the inclusion bodies, was washed twice with a solution of 50 mM Tris-HCl (pH 7.5), 500 mM NaCl, and 1% Triton X-100 to remove bacterial proteins. Each wash step was followed by centrifugation at 18 000 × g for 20 min. Finally, the pellet was washed again with a buffer solution of 50 mM Tris-HCl (pH 7.5) and 500 mM NaCl.
To dissolve the inclusion bodies and extract the recombinant factor VIII, the pellet was suspended in a solution consisting of 8 M urea, 5 mM β-mercaptoethanol, and 50 mM phosphate buffer (pH 12). The suspension was gently stirred for 45 min at room temperature to facilitate the dissolution process. Subsequently, the suspension was centrifuged at 18 000 × g for 15 min at 4°C to remove any remaining insoluble debris. The factor VIII protein was then permitted to undergo folding over a subsequent 16 h period, accompanied by vigorous stirring and aeration at 8°C, within a renaturation buffer that consisted of 100 mM NaCl and 50 mM Tris (pH 10). The volume of the renaturation buffer utilized was 20 times greater than that of the renatured sample.
Protein determination by Western blot
For SDS-PAGE electrophoresis, a volume of 55 μl of lysate obtained from E. coli that had overexpressed the protein was loaded. Electrophoresis was carried out for 24 h to determine the protein size of 160 kDa, using a 10% developer gel (Fig. 7). Subsequently, the proteins were transferred from the gel to a polyvinylidene difluoride membrane using a transfer apparatus. The membrane was then blocked using a blocking buffer.
For the detection of the coagulation factor VIII A2 domain, primary murine antibodies (Pierce) were diluted 1 : 1000 in the blocking buffer and applied to the membrane for incubation. Following a 1 h incubation period, excess unbound primary antibodies were removed by washing the membrane. In the subsequent step, horseradish peroxidase enzyme-conjugated secondary rabbit anti-mouse IgG antibodies (Pierce) were added. These secondary antibodies were diluted 1 : 1000 in the blocking buffer. Finally, a photographic plate was developed, revealing the analyzed coagulation factor VIII protein (Fig. 8).
Protein identification with 4800 Plus MALDI TOF/TOF
MALDI TOF/TOF spectra were obtained using a reflector mode on a 4800 Plus Analyzer from Applied Biosystems Inc. The matrix used for analysis was α-cyano-4-hydroxycinnamic acid. External calibration was performed using a 4700 proteomics analyzer calibration mixture provided by Applied Biosystems. The acquired spectra were processed using Data Explorer Software, Version 4.9, from Applied Biosystems.
For peptide identification, the Mascot search engine from Matrix Science Inc. was utilized. The search was conducted against the Swiss-Prot and National Library of Medicine sequence databases. The protein bands corresponding to three forms of factor VIII (SC, HC, and LC) were identified based on the protein marker and confirmed by Western blot analysis. Following the procedure described by Sączyńska et al. (2018), the protein bands were excised from the gel, washed, and subjected to a reduction in the presence of dithiothreitol. Subsequently, alkylation with iodoacetamide was performed. The samples were then incubated with a trypsin buffer (Promega) at 37°C for 18 h. Before MS analysis, the samples were dried and dissolved in 10 μl of 0.1% trifluoroacetic acid.
Determination of factor VIII activity by chromogenic method
The activity level of factor VIII without domain B was determined using a chromogenic method, as there are differences in binding to phospholipids between native factor VIII and recombinant factor VIII without domain B. The Chromogenix Coamatic® Factor VIII Kit, following the manufacturer's instructions from DiaPharma Group, Inc., was employed for the chromogenic FVIII activity assay.
The assay consisted of two stages (DiaPharma Group, Inc.). In the first stage, the recombinant factor VIII without domain B, containing an unknown amount of functional FVIII, was added to a reaction mixture composed of thrombin, FIXa, FX, calcium, and phospholipid. This resulted in the rapid production of FVIIIa, which, in turn, collaborated with FIXa to activate FX. The termination of the reaction allowed for the assumption that the production of FXa was proportional to the amount of functional FVIII present in the sample.
In the second stage of the assay, the measurement of FXa was carried out by utilizing a specific peptide nitroanilide substrate cleaved by FXa. This cleavage generated p-nitroaniline, resulting in the development of color. The absorbance of the color at 405 nm was measured using photometry. The intensity of the color produced was directly proportional to the quantity of functional FVIII present in the sample, based on a standard curve. To calibrate the standard curves of the factor VIII activity assays, the 7th International Standard Factor VIII from NIBSC (National Institute for Biological Standards and Control) was used as the factor VIII standard (Moser and Funk, 2014).
Expression of factor VIII in E. coli prokaryotic system
To achieve overexpression of the factor VIII protein, we designed the prokaryotic expression vector pDB, which contained a synthetic factor VIII gene. This vector was then used for the transformation of the E. coli Z bacterial strain. In laboratory-scale experiments, we evaluated the expression of the recombinant protein in two different media, namely LB and MM, supplemented with tetracycline at a concentration of 100 μg/ml. In the initial stage, the cultures were grown until they reached an optical density (OD600) of approximately 1. In the case of cultures grown in MM+Tet medium, the expected OD600 of 1 was attained after 12 h, whereas in cultures grown in LB+Tet medium, it was achieved after 7.5 h.
In the subsequent stage, the cultures were further grown to attain the maximum OD. After 24 h, the maximum OD600 was reached, measuring approximately 4.50 for the MM+Tet medium and 5.19 for the LB+Tet medium. Subsequently, all four cultures were subjected to centrifugation, and the inclusion bodies were isolated. These inclusion bodies were then separated using SDS electrophoresis on a 12% polyacrylamide gel, allowing us to evaluate the expression level of the FVIII gene. This evaluation took into account the type of medium used, the OD of the culture, and the amount of associated bacterial proteins (Fig. 7). Furthermore, the expression of the FVIII gene was confirmed through Western blot analysis, using primary murine antibodies targeting the coagulation factor VIII A2 domain in the SC of FVIII (Fig. 8).
Protein identification with mass spectrometry
The confirmation of protein identity was achieved using MALDI-TOF/TOF mass spectrometry. The mass spectrometry analysis identified amino acid sequences based on the peptides obtained, aligning with the amino acid sequence of coagulation factor VIII. The identified amino acid sequences are depicted in Figure 9.
Bioreactor high cell density batch fermentation
To achieve higher cell densities, the cultivation process was scaled up. Initially, a 7.5 l bench-top bioreactor was utilized, with a culture volume of 4 l. Subsequently, the scale was further increased by employing a 15 l bench-top bioreactor, with a culture volume of 8 l.
Batch fermentation in 7.5 L bench-top bioreactor
The cultivation process in the 4 l culture volume bioreactor began with an initial OD600 (OD at 600 nm) of 0.13 and a glucose concentration of 90 mg/dl. Over the first 5 h, the bacterial culture utilized the glucose present in the medium for its growth. Subsequently, glucose was supplemented to maintain a concentration of around 60 mg/dl. The culture exhibited a particularly long lag phase lasting for 5 h. After this lag phase, the acceleration phase commenced, and after 11 h, the culture entered the exponential growth phase, which lasted for the following 7 h. The culture was considered complete when the OD reached its maximum value of 37. Several parameters of the cultivation process were monitored over time, including OD, real-time estimation of biomass (r_x), dissolved oxygen (DO), and the rotational speed of the agitator. These parameters are presented graphically in the diagrams (Fig. 10).
During the batch fermentation process in the 4 l culture volume, r_x was determined to be 1.67 × 10¹³ cells/h (1 OD600 = 8 × 10⁸ cells/ml). The specific growth rate of the culture was calculated to be 0.2 1/h, with a corresponding generation time of 3.5 h. Following the fermentation of the E. coli Z strain with the pDBFVIII plasmid, approximately 160 g of wet biomass was obtained. From this biomass, 28 g of inclusion bodies were isolated, accounting for approximately 17.5% of the total biomass. The isolated inclusion bodies were subsequently dissolved and subjected to a renaturation process, as described in the Materials and Methods section.
Batch fermentation in 15 L bench-top bioreactor
In the second cell culture, the cultivation was conducted in an 8 l volume of LB medium. The initial OD600 was measured at 0.20, and the glucose concentration in the medium was 90 mg/dl. Similar to the previous culture, during the initial 3 h, the bacterial culture utilized the glucose from the medium for growth. Subsequently, glucose was added to maintain a concentration of around 70 mg/dl. The lag phase for this culture also lasted 5 h, similar to the smaller bioreactor. Following the lag phase, an acceleration phase of approximately 10 h was observed. This was followed by an 8 h exponential growth phase until the OD reached its maximum value of 25. The parameters of the cultivation process, including OD, r_x, DO, and the rotational speed of the agitator, were monitored and presented graphically in the diagrams (Fig. 10).
During the batch fermentation process in the 8 l culture volume, r_x was determined to be 3.13 × 10¹³ cells/h (1 OD600 = 8 × 10⁸ cells/ml). The specific growth rate of the culture was calculated to be 0.35 1/h, with a corresponding generation time of 2.0 h. Following the fermentation of the E. coli Z strain with the pDBFVIII plasmid, approximately 238 g of wet biomass was obtained. From this biomass, 49.9 g of inclusion bodies were isolated, accounting for approximately 20.9% of the total biomass. The isolated inclusion bodies were subsequently dissolved and subjected to a renaturation process, following the protocol described in the Materials and Methods section. To analyze the protein expression levels of the FVIII gene, samples obtained from the batch fermentations and the laboratory-scale culture were subjected to SDS electrophoresis in a 12% polyacrylamide gel (Fig. 11).
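The growth-rate and cell-count figures above follow from two simple relations: μ = ln(OD₂/OD₁)/Δt during exponential growth, and total cells = OD600 × 8 × 10⁸ cells/ml × culture volume. The sketch below packages these as small helper functions; the OD readings in the example are illustrative placeholders rather than measurements from the runs, although the conversion factor is the one stated in the text.

```python
import math

CELLS_PER_ML_PER_OD = 8e8  # conversion used in the text: 1 OD600 = 8 x 10^8 cells/ml

def specific_growth_rate(od_t1: float, od_t2: float, dt_hours: float) -> float:
    """Specific growth rate mu (1/h) from two OD600 readings taken dt_hours apart,
    assuming exponential growth between the two measurements."""
    return math.log(od_t2 / od_t1) / dt_hours

def generation_time(mu: float) -> float:
    """Doubling (generation) time in hours for a given specific growth rate."""
    return math.log(2) / mu

def cell_count(od600: float, volume_ml: float) -> float:
    """Total cell number in the vessel from OD600 and culture volume."""
    return od600 * CELLS_PER_ML_PER_OD * volume_ml

if __name__ == "__main__":
    # Hypothetical readings during the exponential phase of an 8 l run;
    # the time points are illustrative, not taken from the paper.
    mu = specific_growth_rate(od_t1=2.0, od_t2=8.0, dt_hours=4.0)
    print(f"mu = {mu:.2f} 1/h, generation time = {generation_time(mu):.1f} h")
    print(f"cells at OD600 = 25 in 8 l: {cell_count(25, 8000):.2e}")
```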
Based on the findings, it has been demonstrated that it is feasible to scale up the production process of the recombinant blood coagulation factor. The initial laboratory-scale culture was successfully followed by fermentation in a 4 l culture volume bioreactor, and eventually scaled up to fermentation in an 8 l culture volume bioreactor.
Determination of factor VIII activity by chromogenic method
To assess the activity of recombinant blood coagulation factor VIII, a chromogenic method was employed. This method measures the biological activity of factor VIII as a cofactor for the activation of factor X by active factor IX (factor IXa) in the presence of calcium ions and phospholipids. Before measuring the activity level of factor VIII, a standard curve was established. This involved preparing dilutions of the FVIII standard and measuring the corresponding absorbance values. Table 6 presents the dilutions used and the corresponding activity level values expressed in International Units (IU/ml).
Based on the dilutions, a standard curve was determined, described by an equation in which x is the activity level in IU/ml and y is the absorbance A.
The relationship between the activity of the FVIII standard and the absorbance value is shown in Figure 12.
After the renaturation process, the sample containing recombinant factor VIII protein at a concentration of 2.35 mg/ml was diluted by adding 25 μl of the sample into 2000 μl of the dilution solution. The absorbance of the diluted sample was then measured at a wavelength of 405 nm. Using the standard curve equation, the level of coagulation factor VIII activity was calculated based on the absorbance value. This activity measurement was performed for samples obtained from the laboratory scale, the 7.5 l bench-top bioreactor (4 l culture volume), and the 15 l bench-top bioreactor (8 l culture volume). Each sample was measured three times. The average activity level value from all measurements was determined to be 0.04 (Table 7). To assess the level of error, the standard deviation was calculated, resulting in a value of 0.000258736. This indicates an error range for the measured activity of approximately ± 0.0008.
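The calculation described above (invert the standard curve, then average the triplicate readings and report their spread) can be sketched as follows. The slope, intercept, and absorbance values in this example are hypothetical placeholders, since the fitted coefficients of the standard curve are not reproduced in the text; only the workflow is illustrated.

```python
import statistics

# Hypothetical linear standard curve y = a*x + b (absorbance vs. activity in IU/ml).
# The coefficients below are placeholders, not the fitted values from Table 6 / Fig. 12.
A_SLOPE = 10.0      # absorbance units per IU/ml (assumed)
B_INTERCEPT = 0.05  # blank absorbance (assumed)

def activity_from_absorbance(absorbance: float) -> float:
    """Invert the standard curve to get FVIII activity (IU/ml) from absorbance at 405 nm."""
    return (absorbance - B_INTERCEPT) / A_SLOPE

if __name__ == "__main__":
    # Triplicate absorbance readings for one sample (illustrative values).
    readings = [0.45, 0.46, 0.44]
    activities = [activity_from_absorbance(a) for a in readings]
    print(f"mean activity = {statistics.mean(activities):.3f} IU/ml")
    print(f"standard deviation = {statistics.stdev(activities):.5f} IU/ml")
```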
Discussion
This study presents a method for obtaining recombinant factor VIII using a prokaryotic expression system. The protein's antibody-binding properties were confirmed through Western blot analysis, and its activity was determined using the chromogenic method.
Hemophilia A is primarily treated with FVIII substitutive therapy. Early treatments involved lyophilized factor VIII concentrates derived from human plasma, which became available in the late 1960s (Pool et al., 1964). However, these concentrates were associated with serious side effects for patients. Because they pooled plasma from multiple donors, they served as a source of hepatitis B virus and, later, hepatitis C virus (from 1989) (Fletcher et al., 1989; Tobler et al., 1997; Rougemont, 2023). Moreover, in the early 1980s, 60-80% of hemophilia patients became infected with human immunodeficiency virus (HIV) through these lyophilized concentrates (Curran et al., 1983; Evatt et al., 2006; Rougemont, 2023). A significant breakthrough in hemophilia treatment came with the discovery of the human factor IX and factor VIII genes in 1982 and 1984, respectively (Lusher et al., 1993). Shortly after, Wood et al. (1984) demonstrated that mammalian cells (BHK - baby hamster kidney cells, and CHO - Chinese hamster ovary cells) transfected with human factor VIII cDNA could successfully synthesize the factor. Recombinant factor VIII produced through genetic engineering technology became commercially available in the early 1990s (Bray et al., 1994).
In the 1990s, the availability of recombinant factor VIII raised expectations that it would replace human plasma-derived concentrates in hemophilia treatment (Coppola et al., 2010; Orlova et al., 2013; Lieuw, 2017). However, the adoption of recombinant factor VIII varies across countries. For example, Sweden and Ireland have reached 100% utilization of recombinant factor VIII among affected patients. In the United States, approximately 93% of patients receive recombinant factor VIII, while in Germany, the rate is around 72%. In Hungary, it is 60%, and in Poland, the utilization of recombinant factor VIII remains low at only 8% (World Federation of Hemophilia, 2021). Indeed, there have been significant advancements in the production of recombinant coagulation factor VIII, and different generations of recombinant factor VIII pharmaceuticals have been developed (Kenneth et al., 2017). It is worth noting that all currently available recombinant factor VIII formulations have been produced using mammalian cell lines such as CHO, BHK, and human embryonic kidney cells (Lucas et al., 1996; Fussengger et al., 1999; Sandberg et al., 2012; Casademunt et al., 2012; Orlova et al., 2013; Valentino et al., 2014; Winge et al., 2015). However, culturing mammalian cells is time-consuming and very expensive, and the amount of the product obtained is usually low.
In contrast to eukaryotic cells, a bacterial culture is inexpensive and allows the acquisition of large quantities of recombinant proteins in a short time (Rosano and Ceccarelli, 2014).
The findings presented in this paper highlight the successful production of an active form of recombinant blood coagulation factor VIII using a cost-effective prokaryotic expression system instead of the conventional eukaryotic-based system (Mazurkiewicz-Pisarek et al., 2016). This approach offers several advantages, including reduced manufacturing costs, shorter production time, improved product availability, and, most importantly, elimination of the risk of infection associated with using plasma-derived products. The methods and results described in this study provide fundamental information for further research aimed at obtaining functional recombinant factor VIII in E. coli bacterial strains. While there are existing recombinant full-length factor VIII variants available for research or FDA approval, as well as studies reporting the production of recombinant factor VIII, to the authors' knowledge, only two papers describe the production of recombinant factor VIII domains rather than the full-length FVIII factor. One of these papers reports on the production method of obtaining a recombinant A2 domain using insect cells and its activity confirmation via ELISA (Srivastawa et al., 2013), while another one describes the production of GST-tag or His-tag conjugated A1, A2, A2, and C domains using E. coli (Choi et al., 2015).
The advantage of our research lies in the successful production of an active, full-length recombinant factor VIII without the B domain and without the use of any tags. This achievement opens up new possibilities for the development of a novel drug for the treatment of hemophilia A. Furthermore, future research in this field should focus on optimizing the production process to increase the activity of the obtained protein.
Conclusions
In this study, we present a novel method for obtaining recombinant FVIII using a prokaryotic expression system. Our experimental work successfully demonstrated the feasibility of constructing a synthetic gene for human coagulation factor VIII within the prokaryotic system. We described the process of constructing the prokaryotic expression vector, which involved cloning the synthetic gene encoding factor VIII. Expression from the vector was confirmed through mass spectrometry analysis.
Constructing such a large prokaryotic expression vector (8428 bp) and transforming it into the E. coli bacterial strain posed significant challenges. However, we were able to overcome these challenges and successfully obtain recombinant coagulation factor VIII in the form of inclusion bodies. We developed a method to isolate these inclusion bodies and determined the conditions for protein dissolution and renaturation while maintaining its activity.
As a result, we have devised effective scale-up strategies for a bioprocess intended to achieve the overproduction of recombinant factor VIII on a quarter-technical scale. The successful development of a biotechnological method for producing recombinant blood coagulation factor VIII has validated our hypothesis that it is indeed feasible to obtain an active protein using an unexplored prokaryotic expression system.
In comparison to production in the eukaryotic system or by synthesis, the proposed biotechnological method of obtaining active recombinant factor VIII has the potential to significantly reduce production costs and increase production volume.
Fig. 2 .
Fig. 2. Construction scheme for the cloning plasmid pBluescript I with restriction sites and fragments of the factor VIII gene: gene fragment from vector IOH10704, gene fragments A-C; site-directed mutagenesis was necessary in order to change restriction sites into the original sequence of blood coagulation factor VIII (Vector NTI AdvanceTM10)
Fig. 4 .
Fig. 4. Construction scheme for the cloning plasmid pUC III with restriction sites and fragments of the factor VIII gene: gene fragment from vector F84641352, synthetic oligonucleotide oligoF8P, gene fragment G; site-directed mutagenesis was necessary in order to change restriction sites into the original sequence of the blood coagulation factor VIII; AmpR -ampicillin resistance gene (Vector NTI AdvanceTM10)
Fig. 5 .
Fig. 5. Construction scheme for the cloning plasmid pBluescript + Fragment 1 + Fragment 2 + Fragment 3; site-directed mutagenesis was necessary in order to change restriction sites into the original sequence of the blood coagulation factor VIII; AmpR -ampicillin resistance gene, ori -origin of replication (Vector NTI AdvanceTM10)
Fig. 6 .
Fig. 6. Construction scheme for the prokaryotic expression vector pDBFVIII; the pDBFVIII plasmid contains a factor VIII gene under the control of the deoP1P2 promoter; FVIII - factor VIII gene, TcR - tetracycline resistance gene (Vector NTI Advance™ 10)
Fig. 9 .
Fig. 9. The amino acid sequence of coagulation factor VIII with marked peptides; the amino acid sequences identified by mass spectrometry of peptides according to the amino acid sequence of coagulation factor VIII are marked with red color; SQN - proteolytic cleavage site for cutting the single chain form (SC) into the heavy chain (HC) and light chain (LC)
Fig. 10 .
Fig. 10. Batch bioreactor fermentation for the E. coli Z strain with the pDBFVIII plasmid; A, B - cell growth curves for 4 and 8 l, respectively; C, D - real-time estimation of biomass (r_x) for 4 and 8 l, respectively; E, F - profiles of dissolved oxygen (DO) and stir speed for 4 and 8 l, respectively; R² - standard deviation
Fig. 11. Expression level of the FVIII gene as revealed by SDS electrophoresis in a 12% polyacrylamide gel; M - protein marker: Multicolor Broad Range Protein Ladder (ThermoFisher), 1 - batch fermentation in 8 l culture volume, 2 - batch fermentation in 4 l culture volume, 3 - laboratory-scale culture; SC - single chain, HC - heavy chain, and LC - light chain
Fig. 12 .
Fig. 12.Standard curve of the relationship between the activity of the FVIII standard and the absorbance value
Table 1 .
Host strains and their genotypes
Table 2 .
Clone IDs, collections, description, and origin of the plasmids. One listed entry reads: IMAGE: 1629998 3N, similar to gb: M14113 COAGULATION FACTOR VIII PRECURSOR (HUMAN); contains Alu repetitive element; mRNA sequence; Invitrogen
Table 3 .
Primers for the construction of the cloning plasmid pBluescript I
Table 4 .
Primers for the construction of the cloning plasmid pUC II. The names of primers contain the letters: F - sense or R - antisense; the restriction sites are in bold
Table 5 .
Primers for the construction of the cloning plasmid pUC III
Table 6 .
Dilutions of the FVIII factor standard according to activity level values in International Units IU/ml
Table 7 .
Absorbance values obtained for the samples and the protein activity levels calculated from the standard curve equation
Neurofibromatosis type 1 and lymphocytic hypophysitis: Single trigger and double shots?
Neurofibromatosis type 1 (NF1) is an autosomal dominantly inherited disorder caused by loss of function of the neurofibromin gene located on chromosome 17. This in turn causes the upregulation of the p21 ras oncogene and the proliferation of melanocytes, endoneural fibroblasts, and Schwann cells (which constitute the neural crest cells).1 The clinical manifestations of the disease are various skeletal, neurological, and dermatological malfunctions. We report here a case of NF1 with lymphocytic hypophysitis and its pathogenetic possibilities.
Introduction
Neurofibromatosis type 1 (NF1) is an autosomal dominantly inherited disorder caused by loss of function of the neurofibromin gene located on chromosome 17. This in turn causes the upregulation of the p21 ras oncogene and the proliferation of melanocytes, endoneural fibroblasts, and Schwann cells (which constitute the neural crest cells). 1 The clinical manifestations of the disease are various skeletal, neurological, and dermatological malfunctions. We report here a case of NF1 with lymphocytic hypophysitis and its pathogenetic possibilities.
Case report
An 18-year-old female presented to the emergency medicine department with persistent vomiting and giddiness of one week's duration. Her medical history revealed a previous diagnosis of NF1 and resection of a right-sided optic glioma around two years back. Her periods were regular. There was no history of any drug intake (including steroids). Physical examination revealed stable vitals, a pulse rate of 86 per minute, and blood pressure of 108/78 mm Hg without postural fall. Her face was deformed with a neurofibroma. There were café au lait macules on her trunk (Fig. 1) and freckling in the axillae. She was admitted to the acute medical ward and was commenced on intravenous fluids and antiemetics. Her initial biochemical and hematological investigations were within normal limits, but the ESR level was 45 mm/hr. An MRI scan showed an extracranial temporoparietal neurofibroma and nodular thickening (3.2 mm; normal <2 mm) with contrast enhancement of the infundibulum and the gland, suggestive of lymphocytic hypophysitis (Fig. 2). Subsequent hormonal estimations revealed a very low random serum cortisol (0.46 μg/dl), elevated serum prolactin (57.12 IU/l), and normal TSH, gonadotropins, and free T3 and T4. She was negative for a pregnancy test, and anti-thyroid peroxidase antibody (anti-TPO) and ANA estimations. Her serum calcium and ACE (angiotensin-converting enzyme) levels were normal. Chest X-ray and abdominal ultrasound did not reveal any abnormality. A Mantoux test done prior to commencing steroids was negative. She was commenced on steroid therapy, which was subsequently tapered to a maintenance dose of 20 milligrams per day. The patient is currently stable and is on a daily dose of physiological replacement hydrocortisone therapy.
Discussion
Lymphocytic hypophysitis (infundibular neurohypophysitis) is an autoimmune inflammation affecting the infundibular stalk and the anterior and posterior lobes of the pituitary. Around 80% of the affected patients have circulating anti-arginine vasopressin antibodies. In the current case, very low cortisol levels, elevated prolactin, an enhanced pituitary stalk, and high ESR were suggestive of the diagnosis. Sarcoidosis was one of the differential diagnoses considered. A dramatic improvement in symptoms with treatment was observed, and they did not recur on rapid tapering of hydrocortisone to physiological doses. In addition, the absence of leptomeningeal or systemic involvement ruled out the possibility of sarcoidosis. A pituitary biopsy was necessary to confirm the diagnosis, but it was withheld, as the patient showed symptomatic improvement.
There is literature evidence on the conjoint occurrence of autoimmune conditions with NF. Yaccin et al. have reported a case of Hashimoto's thyroiditis and vitiligo in a patient with NF1. 2 Graves' disease and connective tissue diseases have also been previously diagnosed in patients with NF1. 3,4
Fig. 1: Café au lait spot on the abdomen
Fig. 2: The axial view of the MRI scan showing the thickening of the pituitary stalk
The pathogenesis of autoimmunity in NF1 is considered to be related to the malfunctioning neurofibromin gene. Suppression of Fas ligand expression by the defective neurofibromin causes a lack of apoptosis of CD4+ T lymphocytes and the subsequent development of autoimmunity. 5,6 To the best of our knowledge, this is the first case of coexistence of NF1 and lymphocytic hypophysitis. This report further strengthens the possibility of a dual effect of the defective neurofibromin gene, supporting its role in autoimmune diseases. Although not conducting a biopsy is a limitation of the case, there is enough evidence to conclude the diagnosis.
Hyperpigmented Torpedo Maculopathy with Pseudo-Lacuna: A 5-Year Follow-Up
Purpose The aim of the study was to describe a case of globally hyperpigmented torpedo maculopathy that also contained a novel central lesion resembling a ‘pseudo-lacuna’. We compare the morphology of the lesion after 5 years of follow-up. Case Presentation An asymptomatic 10-year-old Caucasian male was referred by his optometrist after having found a hyperpigmented lesion on routine dilated examination in 2010. Color fundus photography OS from October 2015 showed a 1.74 × 0.67 mm hyperpigmented oval-shaped lesion temporal to the macula. Since June 2010, the hyperpigmented torpedo lesion appeared to have assumed a more ovoid shape and increased in size in the vertical axis. Centrally, there was a small pearlescent-colored pseudo-lacuna lesion that seemed to also have significantly increased in size since June 2010. Enhanced depth imaging optical coherence tomography of this pseudo-lacuna showed retinal pigment epithelium clumping and migration. Fundus autofluorescence revealed reduced autofluorescence of the torpedo lesion and marked hyperautofluorescence of the pseudo-lacuna. Fluorescein angiography shows no neovascular disease or leakage. Conclusion Torpedo maculopathy has been described previously as a hypopigmented, nonprogressive lesion of unknown etiology. The findings of global hyperpigmentation, pseudo-lacuna formation, and morphologic changes over time in this lesion challenge these classically held descriptions, and necessitate long-term follow-up with multimodal imaging.
Introduction
Torpedo maculopathy is a rare, typically hypopigmented, oval-shaped lesion of the retinal pigment epithelium (RPE) that usually presents as an incidental finding upon examination. It was first described in 1992 by Roseman and Gass [1] in a 12-year-old boy. The term 'torpedo maculopathy' was later introduced by Daily [2] in 1993, given its characteristic torpedo-like tip that points towards the fovea.
The vast majority of reported cases of torpedo maculopathy are visually asymptomatic, although some have shown a corresponding scotoma [3]. The natural history of the lesion is not well studied, and the longest follow-up to date is 5 years [4]. Past authors have suggested that torpedo maculopathy is nonprogressive in both appearance and function [3,5]; however, some recent studies indicate gradual structural and functional changes over time [4,6]. The origins of this lesion remain unclear although previous reports have suggested abnormal nerve fiber layer, faulty development of the choroid, malformation of the long posterior ciliary neurovascular bundle, or a persistent defect in RPE development at the fetal temporal bulge [3,7,8].
In this report, we describe color fundus photography, fundus autofluorescence (FAF), fluorescein angiography (FA), and enhanced depth imaging optical coherence tomography (EDI-OCT) findings in a 15-year-old male patient with torpedo maculopathy. To our knowledge, we are identifying the first case of torpedo maculopathy that is globally hyperpigmented, as well as describing a novel central lesion within the torpedo resembling a 'pseudo-lacuna'. Also important to our study is a fundus photograph from 5 years ago, showing significant morphological changes over time. These multimodal findings further clarify the etiology of this rare lesion, and challenge the classic description of torpedo maculopathy as a nonprogressive and primarily hypopigmented lesion.
Case Report
A 10-year-old Caucasian male was referred by his optometrist due to a 'chorioretinal scar' found incidentally in the left eye during a routine dilated examination in 2010. The patient had no ocular complaints and his best-corrected visual acuity was 20/20 OU. The patient had no significant past medical or ocular history, including exposure to retinotoxic drugs or infectious processes such as toxoplasmosis. There was no known family ocular history. Observation was elected and a fundus photograph was taken at this time.
The patient returned for a routine follow-up visit in 2015. He remained asymptomatic and there was no change in vision and no new medical history. Color fundus photography from October 2015 ( fig. 1b) shows a well-defined, ovoid-shaped, hyperpigmented lesion that is 1.74 × 0.67 mm in size and temporal to the macula. It is surrounded by a halo of patchy hypopigmentation. Color fundus photography from June 2010 ( fig. 1a) shows the same lesion 5 years ago.
Over these 5 years, the hyperpigmented torpedo lesion appears to have assumed a more ovoid shape and increased in size in the vertical axis. There also seems to be a change in the area of the surrounding hypopigmentation, most notably an increased amount of hypopigmentation superonasal to the lesion. The appearance of a small, hypopigmented, almost pearlescent-colored spot becomes more apparent in the 2015 image (arrow in fig. 1b). The same hypopigmented spot can also be seen in the photo from 2010, although it was much smaller at that time ( fig. 1a).
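A rough way to quantify the growth described above is to approximate the lesion as an ellipse and compare areas between visits. The sketch below does this for the 2015 measurement of 1.74 × 0.67 mm; the 2010 dimensions are not reported, so the values used for that visit are placeholders chosen only to illustrate the comparison.

```python
import math

def ellipse_area_mm2(major_mm: float, minor_mm: float) -> float:
    """Approximate a torpedo lesion as an ellipse and return its area in mm^2."""
    return math.pi * (major_mm / 2) * (minor_mm / 2)

if __name__ == "__main__":
    # 2015 measurement reported above: 1.74 x 0.67 mm.
    area_2015 = ellipse_area_mm2(1.74, 0.67)
    # The 2010 dimensions are not given in the report; these numbers are
    # placeholders chosen only to illustrate the comparison.
    area_2010 = ellipse_area_mm2(1.60, 0.55)
    print(f"2015 area ~ {area_2015:.2f} mm^2")
    print(f"2010 area (hypothetical) ~ {area_2010:.2f} mm^2")
    print(f"relative change ~ {100 * (area_2015 - area_2010) / area_2010:.0f} %")
```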
EDI-OCT of the lesion from 2015 demonstrates RPE clumping and migration that corresponds to the pseudo-lacuna ( fig. 2a). There is also disruption of the ellipsoid zone and mild thinning of the outer retina, which is overlying a subretinal cleft ( fig. 2b). The inner retinal layers appear normal and well organized. The choroid beneath the lesion demonstrates hyporeflectivity and normal choroidal thickness.
FAF reveals globally reduced autofluorescence of the hyperpigmented torpedo lesion with a surrounding rim of hyperautofluorescence. The pseudo-lacuna lesion demonstrates an area of markedly increased autofluorescence ( fig. 3).
Early and late phases of FA show a hypofluorescent lesion with a rim of hyperfluorescence (fig. 4a, b). The area of the pseudo-lacuna in the center of the lesion also remains hypofluorescent in both phases (arrow in fig. 4a, b). There is no leakage on the angiogram.
Given the lack of symptoms and the absence of neovascular activity or leakage on the OCT and FA, it was decided to continue to observe the patient on an annual basis. The patient was given an Amsler grid and instructed to call if he experienced any new vision changes.
Discussion
Several of the clinical and OCT findings in this case were consistent with previous cases [4-6, 9, 10]. The clinical findings include having an ovoid shape longer in the horizontal axis, a location temporal to the fovea, absence of foveal involvement, unilaterality, and a lack of visual symptoms. Additionally, on OCT, this lesion displays thinning of the outer retinal layer, a normal inner retinal structure, and a subretinal cleft. Given this, we believe we have identified a true torpedo maculopathy lesion. Differential diagnoses include congenital nevus, choroidal melanoma, hamartoma of the RPE, congenital retinal pigment epithelial hypertrophy (CHRPE), inflammatory causes (toxoplasmosis, etc.), and trauma.
Traditionally, torpedo maculopathy has been described as a mostly hypopigmented or nonpigmented lesion, with some containing a variably hyperpigmented tail [5,[8][9][10]. Golchet et al. [5] indicated that pigmentation may be variable, with some lesions showing significant hyperpigmentation within the hypopigmented lesion. Villegas et al. [11] used hypopigmentation as a characteristic to distinguish it from similar lesions of the retina such as CHRPE. However, in contrast to the previous classical descriptions, the torpedo lesion in this case has a unique, globally hyperpigmented appearance.
Although most cases of torpedo maculopathy in the literature have referred to it as nonprogressive lesion, the lesion in this particular case noticeably changes in size and shape over a 5-year follow-up period. The only other study with a 5-year follow-up period also reported a size increase over 5 years [4]. Wong et al. [6] further categorized torpedo lesions into two different categories and proposed the possibility of the lesions undergoing structural and functional changes over time.
Perhaps the most striking difference over the 5-year follow-up period is the progression of a pseudo-lacuna within the torpedo lesion. The lacuna in this case is unlike the characteristic lacuna belonging to CHRPE. In CHRPE, the lacunae are associated with RPE atrophy and loss with a resultant increase in optical transmission on OCT [12]. In figure 2a, we see RPE thickening, as opposed to atrophy, and decreased optical transmission through the pseudolacuna lesion. Additionally, the pseudo-lacuna lesion exhibits hyperautofluorescence on FAF ( fig. 3) and blockage hypofluorescence on FA (fig. 4). The absence of leakage on FA confirms that there is no underlying exudative or neovascular process involved. Interestingly, these findings are in contrast to those found by Golchet et al. [5] in their case series of torpedo maculopathy. In that series, the FA in all cases revealed generalized transmission hyperfluorescence, and the only FAF done exhibited global hypoautofluorescence. These findings led the authors to conclude that there is thinned and nonfunctional RPE in torpedo lesions. The pseudo-lacuna within the current torpedo lesion may simply represent accumulated debris due to dysfunctional (but not absent) RPE -somewhat similar to a vitelliform-like lesion. However, its characteristics actually appear to be more closely related to a variant type of subretinal drusenoid deposits (VTD) as described by Lee and Ham [13]. This is especially interesting given that VTD are also found in the perifoveal region, are discrete lesions, exhibit hyperautofluorescence, and do not appear related to macular degeneration [13]. Although it is currently unknown if there is any progression of VTD, other related types of deposits are well-known to undergo remarkable dynamism over time [14].
EDI-OCT enables superior visualization and measurement of the choroid compared to standard spectral domain OCT [15]. Teitelbaum et al. [8] postulated that abnormal choroidal development in the macula could result in torpedo maculopathy. In this case, the EDI-OCT did not reveal any abnormality in choroidal thickness directly beneath the torpedo lesion. This further supports the theory that torpedo maculopathy may be primarily due to defective RPE, specifically to an RPE developmental defect in the fetal temporal bulge [7]. However, the decreased optical transmission made it difficult to make any qualitative observations of the choroid underlying the lesion. Future studies utilizing en face OCT angiography of the choroid may enhance the understanding of its potential role in this condition.
In this unique case, the presence of global hyperpigmentation, pseudo-lacuna formation, and morphologic changes over time challenge some of the classically held descriptions about torpedo maculopathy. These findings suggest that a torpedo lesion may be more dynamic than previously believed, and necessitate long-term follow-up with multimodal imaging.
Statement of Ethics
Informed consent was obtained from the patient prior to the study.
Disclosure Statement
The authors have no conflicts of interest to disclose.
The Stokes paradox in inhomogeneous elastostatics
We prove that the displacement problem of inhomogeneous elastostatics in a two-dimensional exterior Lipschitz domain has a unique solution $u$ with finite Dirichlet integral, vanishing uniformly at infinity, if and only if the boundary datum satisfies a suitable compatibility condition (Stokes' paradox). Moreover, we prove that it is unique under the sharp condition $u=o(\log r)$ and decays uniformly at infinity with a rate depending on the elasticities. In particular, if these last ones tend to a homogeneous state at large distance, then $u=O(r^{-\alpha})$, for every $\alpha<1$.
Introduction
Let Ω be an exterior Lipschitz domain of R². The displacement problem of plane elastostatics in exterior domains is to find a solution to the equations

div C[∇u] = 0 in Ω,   u = û on ∂Ω,   lim_{r→+∞} u(x) = 0,   (1.1)

where u is the (unknown) displacement field, û is an (assigned) boundary displacement, and C ≡ [C_ijhk] is the (assigned) elasticity tensor, i.e., a map from Ω × Lin → Sym, linear on Sym and vanishing on Ω × Skw. We shall assume C to be symmetric, i.e., C_ijhk = C_hkij, and positive definite, i.e.,

µ₀|E|² ≤ E · C[E] ≤ µₑ|E|²,   ∀ E ∈ Sym, a.e. in Ω.   (1.2)
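The positive-definiteness assumption (1.2) is what makes the elastic energy coercive over fields vanishing on the boundary. The block below sketches the standard computation, combining (1.2) with the first Korn inequality invoked later in the paper; it is a textbook step written out for orientation, not a passage reproduced from the paper.

```latex
\int_\Omega \nabla u \cdot C[\nabla u]
   = \int_\Omega \hat\nabla u \cdot C[\hat\nabla u]
   \;\ge\; \mu_0 \int_\Omega |\hat\nabla u|^2
   \;\ge\; \frac{\mu_0}{2} \int_\Omega |\nabla u|^2 .
% The first equality uses that C vanishes on skew tensors and takes values in Sym,
% so only the symmetric part \hat\nabla u of the gradient contributes; the last
% inequality is the first Korn inequality
% \|\nabla u\|_{L^2}^2 \le 2\|\hat\nabla u\|_{L^2}^2 for fields vanishing on the boundary.
```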
A weak solution to (1.1) is a weak solution to (1.1) 1 which satisfies the boundary condition in the sense of the trace in Sobolev's spaces and tends to zero at infinity in a generalized sense. If u ∈ W 1,q loc (Ω) is a weak solution to (1.1) the traction field on the boundary s(u) = C[∇u]n exists as a well defined field of W −1/q,q (∂Ω) and for q = 2 the following generalized work and energy relation [9] holds for every large R, where with abuse of notation by Σ u · s(u) we mean the value of the functional s(u) ∈ W −1/2,2 (Σ) at u ∈ W 1/2,2 (Σ) and n is the unit outward (with respect to Ω) normal to ∂Ω. It will be clear from the context when we shall refer to an ordinary integral or to a functional. It is a routine to show that under assumption (1.2), (1.1) 1,2 has a unique solution u ∈ D 1,2 (Ω), we shall call D-solution (for the notation see at the end of this section). Moreover, it exhibits more regularity provided C, ∂Ω andû are more regular. In particular, the following well-known theorem holds [8], [12]. Theorem 1.1. Let Ω be an exterior Lipschitz domain of R 2 and let C satisfy (1.2) 1 . If u ∈ W 1/2,2 (∂Ω), then (1.1) 1,2 has a unique D-solution u which is locally Hölder continuous in 3 ? For constant C (homogeneous elasticity) the situation is well understood (see, e.g., [16], [17]), at least in its negative information. Indeed, a solution to (1.1) 1,2 is expressed by a simple layer potential is the simple layer with density ψ such that , with Φ 0 ∈ Lin and Φ : R 2 \ {o} → Lin homogeneous of degree zero, is the fundamental solution to equations (1.1) (see, e.g., [10]). The space C = {ψ ∈ L 2 (∂Ω) : v[ψ] |∂Ω = constant} has dimension two and if {ψ 1 , ψ 2 } is a basis of C, then { ∂Ω ψ 1 , ∂Ω ψ 2 } is a basis of R 2 ; (1.3) assures that u − κ = O(r −1 ), where the constant vector κ is determined by the relation Hence it follows Theorem 1.2. Let Ω be an exterior Lipschitz domain of R 2 and let C be constant and strongly elliptic. Ifû ∈ W 1/2,2 (∂Ω), then (1.1) has a unique D-solution, analytic in Ω, if and only if Moreover, u is unique in the class }. An immediate consequence of (1.4) is nonexistence of a solution to (1.1) corresponding to a constant boundary data. This phenomenon for the Stokes' equations is popular as Stokes paradox and goes back to the pioneering work of G.G. Stokes (1851) on the study of the (slow) translational motions of a ball in an incompressible viscous fluid of viscosity µ (see [7] and Ch. V of [6]). Clearly, as it stands, Stokes' paradox can be read only as a negative result, unless we are not able to find an analytic expression of the densities of C. As far as we know, this is possible only for the ellipse of equation f (ξ) = 1. Indeed, in this case it is known that C = spn {e 1 /|∇f |, e 2 /|∇f |} (see, e.g., [19]) and Theorem 1. The situation is not so clear in inhomogeneous elasticity. In fact, in such a case it is not known whether u converges at infinity and even the definition of the space C needs to be clearified.
The purpose of this paper is to show that results similar to those stated in Theorem 1.2 hold in inhomogeneous elasticity, at least in its negative meaning.
By M we shall denote the linear space of variational solutions to div C[∇h] = 0 in Ω (subject to appropriate side conditions). We say that C is regular at infinity if there is a constant elasticity tensor C_0 such that |C(x) − C_0| → 0 as r → +∞. The following theorem holds (Theorem 1.4): (ii) if û ∈ W^{1/2,2}(∂Ω), then system (1.1) has a unique D-solution u if and only if û satisfies the compatibility condition (1.7); (iii) u is unique in the class (1.5) and, modulo a field h ∈ M, in a larger class; moreover, there is a positive α, depending on the elasticities, such that the decay estimate (1.8), of the form u = O(r^{-α}), holds. If C is regular at infinity, then (1.8) holds for all α < 1.
Also, for a more particular tensor C we prove the following (Theorem 1.5): let Ω be an exterior Lipschitz domain of R² and let C : Ω × Lin → Lin satisfy the stronger assumption (1.9); then a variational solution to the corresponding system is unique, for every positive ǫ, in a class defined by a growth condition of order ǫ at infinity, up to the constant vector u_0 determined by the data.

Notation. Unless otherwise specified, we will essentially use the notation of the classical monograph [9] of M.E. Gurtin. In indicial notation, (div C[∇u])_i = ∂_j(C_ijhk ∂_k u_h). Lin is the space of second-order tensors (linear maps from R² into itself), and Sym, Skw are the spaces of the symmetric and skew elements of Lin, respectively. As is customary, if E ∈ Lin and v ∈ R², Ev is the vector with components E_ij v_j, and the symmetric and skew parts of ∇u are denoted as usual.
H¹(R²) denotes the Hardy space. As usual, if f(x) and φ(r) are functions defined in a neighborhood of infinity ∁S_{R_0}, then f(x) = o(φ(r)) and f(x) = O(φ(r)) mean, respectively, that lim_{r→+∞}(f/φ) = 0 and that f/φ is bounded in ∁S_{R_0}. To alleviate notation, we do not distinguish between scalar, vector and second-order tensor space functions; c will denote a positive constant whose numerical value is not essential to our purposes; also, we let c(ǫ) denote a positive function of ǫ > 0 such that lim_{ǫ→0⁺} c(ǫ) = 0.
Preliminary results
Let us collect the main tools we shall need to prove Theorems 1.4 and 1.5 and which are of some interest in themselves. By I we shall denote the exterior of a ball S_{R_0} ⋑ ∁Ω.

Lemma 2.1 ([12]). Let u ∈ D^{1,q}(I), q ∈ (1, +∞). If q > 2, then u/r ∈ L^q(I); if q < 2, then there is a constant vector u_0 such that u − u_0 vanishes at infinity in a generalized (integral) sense. Moreover, if u ∈ D^{1,q}(I) for all q in a neighborhood of 2, then u = u_0 + o(1).
Proof. Assume first that u is regular. A simple computation, combined with Schwarz's, Cauchy's and Wirtinger's inequalities and elementary calculus, yields the required estimate, and (2.1) then follows by a simple integration. The above argument extends to a variational solution by a classical approximation argument (see, e.g., footnote (1) in [13]).
Remark 2.1. If u is a variational solution to (1.1)_1 vanishing on ∂Ω and such that ∫_∂Ω s(u) = 0, then, by repeating the steps in the proof of Lemma 2.2, an analogous estimate follows. The next lemma concerns a right-hand side f with compact support and provides, for large R, the estimate (2.6). Proof. By a simple application of Cauchy's inequality, (2.8) implies the required bound; hence (2.6) follows from the properties of the function g_R.
Remark 2.2. Under the stronger assumption that u is a D-solution, we can repeat the previous argument to obtain, instead of (2.6), an analogous inequality valid for R sufficiently large. In such a case, instead of the function g_R we have to consider the function defined in (2.10), and the thesis follows similarly.

Proof. Let η_R̄ be the function defined in (2.10). For large R̄ the field η_R̄ u is a variational solution to an auxiliary system; let v_1 and v_2 be the variational solutions to the corresponding systems. A simple computation and the first Korn inequality, together with Schwarz's inequality and the fact that, by (2.11), ∫_{T_R̄} C_ijhk ∂_k u_h ∂_j η_R̄ = 0, yield (2.20). Hence, taking into account (2.12) and letting R → +∞, we obtain ∇u ∈ L²(Ω). Let us now consider the function (2.7). Multiplying (1.1)_1 scalarly by g_R u and integrating by parts, we get
From (2.11) it follows that
∫_{T_R} C[∇u]e_r = 0, so that, by applying Schwarz's and Poincaré's inequalities, we obtain (2.22). Therefore, (2.13) follows from (2.22) by letting R → +∞ and taking into account the properties of g_R and the fact that ∇u ∈ L²(Ω).
Remark 2.3.
In the previous lemma we proved, in particular, that a variational solution which satisfies (2.11) and (2.12) is a D-solution. Another sufficient condition for having a D-solution is to assume (2.11) and u ∈ D^{1,q}(∁S_{R_0}), for some R_0 sufficiently large and for some q ∈ (2, 4/(2−γ)). Indeed, by reasoning as in (2.21) and applying Hölder's inequality, we obtain a bound from which ∇u ∈ L²(Ω) follows on letting R → +∞.
Proof. As in the proof of Lemma 2.2, it is sufficient to assume u regular. Multiplying (1.1)_1 by the function (2.7) and integrating over Ω, we obtain (2.11), taking Schwarz's inequality into account and letting R → +∞. A standard computation yields an estimate valid for ρ ≫ R_0. Hence, using (2.4), Schwarz's inequality and Wirtinger's inequality, and proceeding as in the proof of Lemma 2.2, (2.24) yields (2.25). Since an elementary calculus bound holds, (2.23) follows from (2.25) by a simple integration.
Proof. Let η_R̄ be the function (2.10) for large R̄. The field v = η_R̄ u is a variational solution to a divergence-form system; consider the associated functional equation (2.26), and choose C_0ijhk = µ_e δ_ih δ_jk.
There is ǫ > 0 such that (2.26) is a contraction in D^{1,q}, q ∈ (2 − ǫ, 2 + ǫ). If C is regular at infinity then, choosing R̄ as large as we wish, we can make |C(x) − C_0| arbitrarily small and, as a consequence, ‖Q[v]‖_{D^{1,q}} ≤ β‖v‖_{D^{1,q}} for every positive β, and this is sufficient to conclude the proof.
Extend C to the whole of R² by setting C = C̄ in ∁Ω (say), with C̄ constant and positive definite. Clearly, the new elasticity tensor (which we denote by the same symbol) satisfies (1.2) (almost everywhere) in R².
The Hölder regularity of variational solutions to (1.1)_1 is sufficient to prove the unique existence of a fundamental (or Green) function G(x, y) to (1.1)_1 in R² (see [2], [4], [11], [18]), which satisfies (1.1)_1 in x [resp. in y] in every domain not containing y [resp. x]. Moreover, G(x, y) = G^⊤(y, x) and, for f ∈ H¹(R²), the field obtained by integrating G against f is the unique variational solution to the corresponding nonhomogeneous system. G(x, ·) belongs to the John-Nirenberg space BMO(R²) (see, e.g., [8]) and has a logarithmic singularity at x and at infinity. Set w(x) = G(x, o)e, with e a constant vector. Let us show that ∇w ∉ L²(∁S_{R_0}), while ∇w ∈ L^q(∁S_{R_0}) for all q in a right neighborhood of 2. Indeed, if w ∈ D^{1,2}(∁S_{R_0}), then, by applying (2.9) and Hölder's inequality together with (2.23), choosing q > 4/γ, letting ρ → 0 and taking into account that w ∈ L^q_loc(R²), we arrive at the contradiction ∇w = 0. The field v = η_{R_0} w is a solution to (2.27), where η_R and f are defined by (2.10) and (2.16), respectively. By well-known estimates [18] and (2.6), for large R we obtain a bound valid for q ∈ (2, q̄), with q̄ > 2 depending on µ_0 and with a constant c_f depending on f. Hence, letting R → +∞ and bearing in mind the behavior of w at large distance, it follows that ∇w ∈ L^q(∁S_{R_0}). Collecting the above results, we can say that the fundamental function satisfies: (ı) G(x, y) ∉ D^{1,2}(∁S_R(x)) for any R > 0; (ıı) G(x, y) ∈ D^{1,q}(∁S_{R_0}(x)) for all q ∈ (2, q̄), with q̄ > 2 depending on µ_0.

(ii) Multiply (1.1)_1 scalarly by g_R h, with h ∈ M. Integrating by parts we obtain (3.1). Choosing s (< 2) very close to 2 and letting R → +∞ in (3.1), by virtue of Lemmas 2.1 and 2.6 and the properties of G, we see that u_0 = 0 if and only if û satisfies (1.7).
Now let C satisfy (1.6) and let u′, u″ be the variational solutions to the corresponding auxiliary systems. Applying Poincaré's and Caccioppoli's inequalities, and taking the relevant estimates into account, the desired conclusion follows [1] (see also [15]).

Proof of Theorem 1.5. If C satisfies the stronger assumption (1.9), then by the argument in [14] one shows that a variational solution to div C[∇u] = 0 in S_R(x) satisfies a decay estimate for every ρ ∈ (0, R], and the lemmas hold with γ replaced by 2/√L. Hence the desired results follow by repeating the steps in the proof of Theorem 1.4.
A Counter-example
The following slight modification of a famous counter-example by E. De Giorgi [3] shows that the uniqueness class in Theorem 1.5 and the rates of decay are sharp.
Network structure of depression symptomology in participants with and without depressive disorder: the population-based Health 2000–2011 study
Purpose Putative causal relations among depressive symptoms in the form of network structures have been of recent interest, with prior studies suggesting that high connectivity of the symptom network may drive the disease process. We examined in detail the network structure of depressive symptoms among participants with and without depressive disorders (DD; consisting of major depressive disorder (MDD) and dysthymia) at two time points. Methods Participants were from the nationally representative Health 2000 and Health 2011 surveys. In 2000 and 2011, there were 5998 healthy participants (DD−) and 595 participants with a DD diagnosis (DD+). Depressive symptoms were measured using the 13-item version of the Beck Depression Inventory (BDI). Fused Graphical Lasso was used to estimate network structures, and mixed graphical models were used to assess network connectivity and symptom centrality. Network community structure was examined using the walktrap algorithm and minimum spanning trees (MST). Symptom centrality was evaluated with expected influence and participation coefficients. Results Overall connectivity did not differ between networks from participants with and without DD, but a simpler community structure was observed among those with DD compared to those without DD. Exploratory analyses revealed small differences between the samples in the order of one centrality estimate, the participation coefficient. Conclusions Community structure, but not overall connectivity of the symptom network, may be different for people with DD compared to people without DD. This difference may be of importance when estimating the overall connectivity differences between groups with and without mental disorders. Electronic supplementary material The online version of this article (10.1007/s00127-020-01843-7) contains supplementary material, which is available to authorized users.
Introduction
Depressive disorders (DD), including major depressive disorder (MDD) and dysthymia, are highly prevalent mental disorders with high comorbidity with other mental disorders. Although they have been under systematic investigation for decades, depressive disorders remain poorly understood, and treatment efficacy has been modest [1]. It has been traditionally assumed that depressive symptoms arise from common pathogenic pathways. Recently, this common cause-approach has been challenged [2][3][4] by research showing that different depressive symptoms are associated with different risk factors [5], different patterns of comorbidity [6], and are associated with different levels of impairment [3]. Consistent with the above evidence of differential relations between symptoms and varying outcomes, depression symptomology has been conceptualized as a dynamic network, suggesting that depressive disorders are an emergent property that derives from mutual interactions among symptoms in a causal system [7]. The model assumes that depression is a complex dynamic system where individuals suffering from depression have a different architecture of symptom relations than those who experience depressive symptoms but have not passed the threshold of clinical diagnosis. The architecture of symptoms that characterizes those with a high risk of depression may form an emergent state: 'depression'. Such a state can be sustained via vicious circles, and can be difficult to escape [8].
This network theory of depression is grounded in theories in clinical psychology. For instance, cognitive behavioral therapy focuses on negative feedback loops potentially leading to more severe emotional problems [9]. Although the network approach has generated much interest [4,8,[10][11][12], numerous questions have remained open. We introduce three especially relevant topics below. First, one of the important features which are discussed as potentially differentiating the symptom networks of depressed people and others is connectivity [7], i.e. the amount and strength of relations among symptoms. People more vulnerable to develop depression have been suggested to have a denser symptom network and overall stronger ties between symptoms than those who are less vulnerable. In clinical samples, this may mean that more densely connected networks in patients with MDD would also predict less probable recovery [8]. However, the literature on the topic is very limited, and empirical evidence is mixed: one previous study has supported this notion [13], and a second one has not [14].
Second, many network studies have examined what symptoms are the most central (i.e. interconnected) in MDD symptom networks, because such symptoms have been speculated to be promising targets for intervention [10]. Most of the studies so far have used clinical samples [12,[15][16][17], and few have used community samples analyzing also the sub-threshold symptoms [10,18]. Interestingly, results are mixed, and do not seem to replicate well across studies. For example, whereas in a large clinical sample, Fried et al. [12] found that sad mood and energy loss were the most central symptoms, a time-series study conducted by Bringmann and colleagues [11] concluded that loss of pleasure was the most central symptom. Contrary to these findings, in a sample of 5952 Han Chinese women with recurrent MDD, psychomotor changes, hopelessness and decreased self-confidence were found to be the most central symptoms and among the least central was loss of interest [16]. Jones and others [17] concluded that concentration impairment, sadness, and fatigue were the most central nodes among individuals with obsessive-compulsive disorder with comorbid depression. These differences might be explained by variability in the samples, designs, and depression inventories used [12].
In addition, of the three most widely used centrality measures, i.e., closeness, betweenness and node strength [19], closeness and betweenness have been suggested to be difficult to interpret in psychological networks, because they are based on assumptions that do not hold when studying associations between variables [20]. The most suitable centrality measure may thus be strength centrality, which measures the weighted number of connections of a focal node and thereby the degree to which it is involved in the network. Moreover, "expected influence" indices, which distinguish between positive and negative edges, may be more suitable for evaluating centrality in networks with various community structures [21]. Similarly, indicators such as the participation coefficient, which describes how a node's connections are distributed across communities, may be useful when comparing symptom networks with different community structures. It has also repeatedly been shown that measures of depression severity often fail to show unidimensionality or measurement invariance over time [4], which makes cliques highly likely in depression data, given the mathematical equivalence between factor and network models [22]. Furthermore, the most commonly used centrality measures are often calculated without considering the effects that differences in local structures, such as communities within the networks [23], have on centrality estimates. This could have contributed to the inconsistent centrality results across publications in the literature.
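To make the distinction between these indices concrete, the following minimal Python sketch (our illustration, not the authors' R workflow) computes node strength, one-step expected influence, and the participation coefficient of Guimerà and Amaral from a small signed, weighted adjacency matrix; the toy matrix and community assignment are invented for illustration only.

```python
import numpy as np

# Toy signed, weighted adjacency matrix (symmetric, zero diagonal) for four symptoms.
# Values and the community assignment below are purely hypothetical.
W = np.array([
    [ 0.0,  0.4, -0.2,  0.0],
    [ 0.4,  0.0,  0.3,  0.1],
    [-0.2,  0.3,  0.0,  0.5],
    [ 0.0,  0.1,  0.5,  0.0],
])
communities = np.array([0, 0, 1, 1])

strength = np.abs(W).sum(axis=1)      # sum of absolute edge weights per node
expected_influence = W.sum(axis=1)    # one-step expected influence keeps edge signs

# Participation coefficient: 1 - sum_c (k_ic / k_i)^2, computed on absolute weights.
participation = np.ones(len(W))
for c in np.unique(communities):
    k_c = np.abs(W[:, communities == c]).sum(axis=1)
    participation -= (k_c / strength) ** 2

print(strength)             # e.g. node 0: 0.6
print(expected_influence)   # e.g. node 0: 0.2 (the negative edge lowers it)
print(participation)        # values closer to 1 = links spread across communities
```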
Third, many previous studies have estimated depressive symptom networks (and other symptom networks) without considering the fact that symptom networks may include many locally connected structures, referred to as communities or cliques. To the best of our knowledge, so far only a few studies have investigated the community structure of depressive symptom networks [16,24]. This is a gap in the literature, as connectivity is a global measure: different community structures can lead to the same connectivity; for an illustration of this effect, see Fig. 1. On the left, there is a network of nine nodes structured in three communities of three nodes each, fully and strongly connected within communities; all edge weights are 0.99, but there are no edges from any community to any other. The overall connectivity, with 9 present and 27 absent edges, is approximately 9. On the right-hand side is a fully connected network of nine nodes forming only one community, with much smaller edge weights of 0.25. This network has the same overall connectivity as network 1 (9), although the architecture, and the conclusions we potentially draw about the connections between nodes, are probably different (for details of the example, see the Online Supplement Appendix).
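A short numpy sketch of the Fig. 1 example (our own reconstruction, not the authors' code or supplementary material) shows how the two architectures end up with essentially the same global strength despite very different community structures; the edge weights follow the values quoted above.

```python
import numpy as np

n = 9

# Network 1: three communities of three nodes, fully connected within
# communities (edge weight 0.99), no edges between communities.
W1 = np.zeros((n, n))
for block in (range(0, 3), range(3, 6), range(6, 9)):
    for i in block:
        for j in block:
            if i != j:
                W1[i, j] = 0.99

# Network 2: all nine nodes fully connected in a single community,
# with much weaker edges (weight 0.25).
W2 = np.full((n, n), 0.25)
np.fill_diagonal(W2, 0.0)

def global_strength(W):
    """Overall connectivity: sum of edge weights over unique node pairs."""
    return np.triu(W, k=1).sum()

print(global_strength(W1))  # 9 edges  * 0.99 = 8.91 (~9)
print(global_strength(W2))  # 36 edges * 0.25 = 9.00
```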
In sum, although the number of studies analyzing depressive symptom networks has increased rapidly, some crucial questions remain open. First of all, results conflict as to whether individuals with depression have higher or lower symptom connectivity, and whether high connectivity is actually "good or bad" [8]. Second, it remains unclear which of the symptoms are more central than others in the depression symptom network. Third, it is also not clear whether there are differences in the community structures of the symptom networks between individuals with and without depression, and whether these differences affect conclusions about the connectivity and centrality of individual symptoms.
To address these open questions, the present study examined self-reported depressive symptom networks using data from the nationally representative Health 2000-2011 surveys in Finland. The specific aims were to examine whether those with depressive disorder (DD) and those without differed (A) in the overall connectivity of the symptom networks, (B) in the centrality of the symptoms, and (C) in the community structures of the symptom networks, using metrics that take the community structure and local connectivity into account (expected influence step 2 and participation coefficients), in order to find out whether there are differences between the groups that network theory has traditionally not analyzed.
Sample
The data were derived from two data collection phases of the multidisciplinary epidemiological survey "The Health 2000-2011", which was carried out in Finland in 2000-2001 and in 2010-2011. As described in detail elsewhere [25], in 2000 a nationally representative sample was drawn among adults aged 30 years or over living in mainland Finland. Two-stage clustered sampling of the 15 largest towns and 65 health districts in Finland was used, and individuals over 80 years were oversampled (2:1). In addition, a young adults' sample of individuals aged between 18 and 29 years was collected using a shortened version of the study protocol. In 2011, all participants who were alive, living in Finland, and had not refused to participate were invited to take part in the new data collection wave [26]. In addition, participants from the young adults' sample of Health 2000 were included.
In Health 2000, a total of 7419 participants (93% of the 7977 subjects alive on the first day of the first phase of the survey) participated in one or more phases of the study. Of these, 6354 participated in the clinical examination, which included, e.g., the Composite International Diagnostic Interview (CIDI), which was reliably performed for 6005 participants (75% of the original sample). In Health 2011, a total of 6740 participants (67% of those invited) participated in at least one phase of the study. Of these participants, 4729 participated in the health examination.
The present study was restricted to those participants who had participated in the CIDI and responded to the BDI-13 questionnaire in 2000 and/or 2011. This resulted in a total of 5998 participants without depressive disorder (DD) and 595 with depressive disorder. Participants with other mental disorders in 2000 or in 2011 were excluded. Mental disorders were assessed with the CIDI [27], which uses operationalised criteria for DSM-IV diagnoses and allows an estimation of DSM-IV diagnoses for mental disorders. In the present study, a computerized version of the CIDI was used [27]. The translation of the CIDI items into Finnish was based on the original English items of the CIDI and was made pairwise by psychiatric professionals. The process included consensus meetings, expert opinions, an authorized translator's review, and pilot testing with both informed test participants and unselected real participants.
The CIDI interview has been found to be a valid and reliable instrument [28,29]. The interviews were performed to determine the 12-month prevalence of depressive (dysthymia or major depressive disorder, MDD), anxiety and alcohol use disorders. The interviewers were non-psychiatric health professionals who were trained in conducting CIDI interviews. Trainers were psychiatrists or physicians trained by a WHO authorized trainer. The Kappa values for the two interviews were 0.88 (95% CI 0.64-1.0, observer agreement 94%) for major depressive disorder, and 0.88 (95% CI 0.64-1.0, observer agreement 98%) for dysthymia [30]. In depressive disorders, the CIDI interview differentiates also between dysthymic disorder and MDD. Furthermore, the most recent timing (or appearance) of each symptom was also recorded (time frame of depression), allowing for estimates about when the diagnostic criteria were fulfilled most recently. In the current study, the variable for psychiatric diagnosis was coded as DD (includes MDD with or without dysthymia and dysthymia) and no DD (or other mental disorders).
Depressive symptoms
Depressive symptoms were assessed using the Beck Depression Inventory (BDI) [31]. In 2000, the 21-item version was used, and in 2011 the 13-item version [32]. In the current study, we used those 13 items of the BDI that were measured at both time points (Fig. 2).
Statistical methods
We estimated network models, community structures and graph-theoretical measures in multiple steps. All statistical analyses and the statistical packages used are explained in detail in the online supplement. First, we estimated the network structures of depression symptoms in two sub-groups based on the CIDI, (1) DD− and (2) DD+, using the Fused Graphical Lasso (FGL). Second, we assessed the predictability of the individual symptoms (how much of the variance of each symptom is explained by the other nodes in the network) using Mixed Graphical Models. After calculating the predictability results, we included these parameters in the FGL networks. We used the R package "qgraph" [33] to plot the networks. Third, we compared the connectivity of the networks between the DD− and DD+ groups using the "NetworkComparisonTest" (NCT) R package [34]. Fourth, we evaluated the community structure of the symptom networks in both groups using the walktrap algorithm [35] and the "igraph" package [36] (for robustness analyses via the spinglass algorithm, see the online supplement). Sub-network structures of depressive symptoms in the different groups were examined using minimum spanning trees (MST) [37].
Fifth, we calculated node strength, which was our primary centrality measure, and also estimated the "expected influence" centrality index as well as the participation coefficient for each node. Correlations between the strength centrality measures and expected influence were calculated to evaluate the overall similarity between the groups. Sixth, we tested the parameter accuracy of the edges and centrality estimates in the symptom networks using the R package "bootnet", via a bootstrap sampling procedure with 1000 iterations. We evaluated the stability of the strength centrality metrics using the correlation stability (CS) coefficient by repeatedly correlating centrality metrics of the original data set with those calculated from subsamples including progressively fewer participants. The CS coefficient represents the maximum proportion of participants that can be dropped while maintaining a 95% probability that the correlation between centrality metrics from the full data set and the subset data is at least 0.7, and it should be above 0.5. As additional sensitivity analyses, we bootstrapped centrality scores (1000 samples) to estimate the uncertainty in the correlation between the centrality scores of the DD− and DD+ groups and examined the community structures in more detail (for details see the online supplementary appendix).
All analyses were conducted using R 3.5.1 (R Core Team 2018).
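For readers who do not use R, a rough Python analogue of a single-group network estimation is sketched below; it uses scikit-learn's GraphicalLassoCV rather than the Fused Graphical Lasso employed in the paper, and the simulated data merely stand in for the 13 BDI items, so it illustrates the general approach rather than reproducing the authors' pipeline.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))    # stand-in for 13 BDI items from 500 respondents
X[:, 1] += 0.6 * X[:, 0]          # inject one dependency so that an edge appears

model = GraphicalLassoCV().fit(X)
P = model.precision_

# Partial correlations derived from the estimated precision matrix.
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

global_strength = np.triu(np.abs(partial_corr), k=1).sum()
node_strength = np.abs(partial_corr).sum(axis=1)
print(round(global_strength, 3))
print(node_strength.round(3))
```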
Descriptive statistics
There were 5998 DD− participants and 595 DD+ participants with data for either or both measurement points. Differences in the average symptom levels over time points were all significant between the DD− and DD+ groups; the mean of all symptoms was 0.19 in the DD− group and 0.55 in the DD+ group (difference = −0.35, 95% CI [−0.40, −0.33]). The greatest differences were found for sadness (means 0.15/0.72) and guilty feelings (means 0.22/0.72), and the smallest differences were found for change in appetite (means 0.07/0.22) and self-dislike (means 0.08/0.36). The means, standard deviations and zero-order correlation matrices of the individual symptoms are presented in the Online Supplement (Supplement Figs. 1 and 2). The Spearman correlation between the symptom profiles was 0.80, suggesting rather strong similarity between the groups.
Network structure
The visualizations of the FGL networks for the DD− and DD+ groups are presented in Fig. 2. The predictability (amount of variance of each symptom explained by all the other symptoms) is illustrated by the percentage of shaded area in the pie. Depressive symptoms descriptively explained a larger proportion of the variance of the other symptoms in DD+ participants (mean explained variance 41%) than in DD− participants (mean explained variance 31%). This finding translates into somewhat stronger associations between symptoms in participants with DD than in those without (average edge weight 0.07 in the DD+ group vs 0.06 in the DD− group). The internal consistency (Cronbach's alpha) was also slightly higher in the DD+ (0.89) than in the DD− (0.84) group. When comparing the networks across groups, the Network Comparison Test revealed no significant differences regarding network structure (M = 0.12; p = 0.77) or network connectivity (global strength) (difference 0.08, p = 0.87), with connectivity estimates of 5.4 for the DD+ group and 5.5 for the DD− group. Overall similarity was evaluated by calculating the correlations between the edge weights across networks for each pair of networks (Supplement Figs. 3 and 4). The Spearman correlation was 0.65, also indicating rather strong similarity.
We also identified some differences in the community structures of the networks between the groups (Fig. 3). In the DD− group, the walktrap algorithm suggested four different communities, but only three communities were suggested in the DD+ group (results remained the same when rerunning the algorithm ten times with random seeds). Minimum spanning trees supported the less uniform structure of symptoms in the DD− group compared to the DD+ group, although the most central nodes were partly the same in both groups (Fig. 4). The nodes closer to the center of the tree (i.e. nodes that feature more edges) are the most central. Loss of pleasure, past failure, and indecisiveness were the most central symptoms in the DD− group; in the DD+ group they were loss of pleasure, self-dislike, and loss of energy.
The centrality estimates (node strength and expected influence) and participation coefficients are shown in Fig. 5. Loss of pleasure, sadness, loss of energy, and self-dislike had the greatest strength centrality in both the DD+ and DD− groups, and they were also more central than 50% or more of the other nodes in the network (bootstrapped difference tests for strength centrality are presented in Supplement Figs. 5-8). The strength centrality profiles were very similar in the DD− and DD+ groups, with a correlation of 0.85 suggesting strong similarity across groups. The expected influence profiles were also similar (r = 0.89), and again loss of pleasure, self-dislike, and sadness in particular were high, being even higher in the DD+ group. The participation coefficients suggested that loss of energy and self-dislike were central symptoms in the DD+ group (Fig. 4), with a correlation of r = 0.67 across groups. The CS coefficients indicated a stable order of strength centrality estimation, with values of 0.67 (DD+) and 0.75 (DD−).
Sensitivity analyses regarding the centrality indices confirmed the findings reported above (see the online supplement for details). Sensitivity analyses related to community structure showed that the community structure in participants with DD was relatively stable and the three-community solution was the most common (Supplement Fig. 9). Among the DD− participants, the network was clearly less stable (Supplement Fig. 10).
Discussion
The present study examined depressive symptom networks using data from a nationally representative general population sample. Results showed that there were no differences in the overall connectivity of symptom networks between participants with and without DD (major depressive disorder and dysthymia). Whereas simpler community structure was observed among those participants with DD, the differences in centrality measures between participants with and without DD were relatively small.
Our findings regarding the overall network connectivity were somewhat unexpected and not consistent with all prior work. Specifically, some studies showed increased network connectivity in participants with depression [38,39]. The network theory, supported by prior studies using both empirical and simulated data, has suggested that network connectivity may be a key feature leading to attractor states with a large number of active symptoms and thus to clinical depression [8]. Strong connections between symptoms indicate that symptoms more easily affect each other and thus maintain and trigger negative systemic states. Our findings, suggesting that there were no differences in connectivity between groups of people with and without DD, do not provide strong support for the inferences of the theory. However, these findings are in line with an intervention study in which stronger symptom connectivity was not associated with treatment prognosis [14]. They are also in line with another study in which the connectivity of depressive symptoms was found to increase during antidepressant treatment in a very large clinical trial (the STAR*D study) while the overall severity of depression decreased [4].
In the present study, fewer communities, and thus a simpler community structure, were found in participants with DD. This was unexpected given previous work that found that decreases in depressive symptoms across time were associated with structures that became less multifactorial (i.e. increasingly more unidimensional) [4]. In particular, our finding that the community structure was less stable among participants without depression (see the online supplement for more details) warrants further investigation.
The most central symptoms in the depressive symptom network of participants with DD were (a) loss of pleasure, (b) self-dislike, (c) sadness and (d) loss of energy. In the present study, some less frequently used centrality measures, which take the community structure into account more efficiently, were used. However, only minor differences compared to the strength centrality measures (correlations ranging from 0.59 to 0.89) were found. Based on all indicators used in this study, loss of energy and loss of pleasure were consistently central in those with DD, and the differences between those without and with DD were relatively small. Similar results have been previously reported, suggesting that sadness [12] or loss of pleasure [11] would be the most central symptoms in MDD. However, other studies have found different symptoms to be most central [16,17], indicating that central symptoms might differ across samples. It is also important to note that symptoms of sadness and/or anhedonia were required for a depression diagnosis in this sample, which may bias centrality statistics.
Although it is tempting to assume that the most central symptoms also have a strong causal role in the network, empirical investigations into the matter are scarce. Rodebaugh and co-workers [40] examined whether the centrality of symptoms in a network constructed from cross-sectional data predicted the association between change in a given node and change in other symptoms across treatment, in the same data and also in another dataset. They found that centrality predicted which nodes were more strongly associated with change above and beyond other predictors, but that the prediction was restricted to the specific network and data in which the centrality was determined. Thus, higher centrality was associated with a stronger association with change across the entire symptom network, but only for the specific data in which the centrality measures were determined. There are multiple problems in interpreting central symptoms as the most influential (the most central symptom may be just an end point, or just the one with the greatest variability; see https://psych-networks.com/how-to-not-interpret-centrality-values-in-network-structures/), and recently the whole basis of measuring centrality in psychological networks, which do not have the same features (serial flow of connections) as social networks, has been challenged [20]. In the present study, we tried to overcome some of these problems by using centrality measures that are not based on shortest path measures (strength centrality) and by taking into account the community structure within the network (participation coefficient) [41].
Recently, some work has criticized the application of centrality metrics derived from social network analysis to psychological data [20]. This may be especially problematic if centrality measures are considered, as they often seem to be, as measures of symptom importance. These metrics assume that there are no qualitative differences between nodes, which is a contentious assumption. In psychological networks, especially symptom networks, it is difficult to accept that suicidal thoughts would be as important as changes in appetite, and thus focusing only on the connections in psychological networks to find the most central node would be problematic. It is also possible that observed differences in central symptoms between groups are a result of sampling variability changing the absolute rank order of symptoms without there necessarily being any differences in the centrality of the symptoms [42]. Given that prior research was partly based on small samples and lacked investigations of whether the most central symptom was substantially or significantly more central than other symptoms (e.g. via the centrality difference test [43]), this raises doubts as to how meaningful the reported centrality differences in the literature are, and we hope that the comparatively large sample size of the present study adds to the literature in that regard.
Strengths and limitations
The main strengths of the current study are a population-based sample, which is representative of the Finnish general adult population, and the use of the CIDI to identify participants with DD during the last 12 months. Some limitations need to be taken into account when the current findings are interpreted. The original sample of the Finnish Health 2000 survey included 8028 subjects, of whom 6005 (75%) were interviewed with the CIDI. It has been shown that participants who did not take part had more depressive symptoms than those who did, indicating that they were more likely to suffer from DD. However, the aim of the current study was not to estimate the prevalence of DD, and the CIDI has been found to have acceptable psychometric properties [44]. Second, cohort effects could bias or confound our results, although we do not think this is highly likely, because there were no differences in the levels of depressive symptoms or DD prevalence between the two time points [45]. Third, the analytical design was cross-sectional, preventing us from making any inferences about the direction of the associations or the development of the network structures. For example, participants who were not diagnosed with depression could be in remission. Fourth, we used regularized models, which makes groups with different sample sizes difficult to compare (see the supplement for analyses in which we subsampled participants to obtain equal sample sizes). Fifth, we mainly relied on community detection results based on the walktrap algorithm (see the supplement for results based on the spinglass algorithm). Sixth, of all possible depressive symptoms, our investigation is limited to those included in the BDI-13, and thus other important symptoms may be missing. Finally, although the depression diagnosis was based on a structured interview (M-CIDI) and not on BDI scores, Berkson's bias could potentially influence our results [46].
Conclusions
To conclude, we found that community structure, but not overall connectivity or symptom centrality, of the symptom network may be different between participants with and without DD. This difference could be important when estimating the overall connectivity differences in symptoms between groups with and without mental disorders.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
Compounded Disturbance Chronology Modulates the Resilience of Soil Microbial Communities and N-Cycle Related Functions
There is growing interest in overcoming the uncertainty related to the cumulative impacts of multiple disturbances of different nature in all ecosystems. With global change leading to acute environmental disturbances, recent studies have demonstrated a significant increase in the possible number of interactions between disturbances that can generate complex, non-additive effects on ecosystem functioning. However, how the chronology of disturbances affects ecosystem functioning is unknown, even though there is increasing evidence that community assembly history dictates ecosystem functioning. Here, we experimentally examined the importance of disturbance chronology in modulating the resilience of soil microbial communities and N-cycle related functions. We studied the impact of 3-way combinations of global change related disturbances on total bacterial diversity and composition, on the abundance of N-cycle related guilds and on N-cycle related activities in soil microcosms. The model pulse disturbances, i.e., short-term ceasing disturbances, studied here were heat, freeze-thaw and anaerobic cycles. We determined that repeated disturbances of the same nature can lead either to resilience or to shifts in N-cycle related functions concomitant with diversity loss. When considering disturbances of different nature, we demonstrated that the chronology of compounded disturbances impacting an ecosystem determines the aggregated impact on ecosystem properties and functions. Thus, after 3 weeks the impact of the 'anoxia/heat/freeze-thaw' sequence was almost two times stronger than that of the 'heat/anoxia/freeze-thaw' sequence. Finally, we showed that about 29% of the observed variance in ecosystem aggregated impact caused by series of disturbances could be attributed to changes in the microbial community composition measured by weighted UniFrac distances. This indicates that surveying changes in bacterial community composition can help predict the strength of the impact of compounded disturbances on N-related functions and properties.
INTRODUCTION
All ecosystems are exposed to increasing natural and human disturbances and therefore understanding the effects of disturbances is fundamental. Most studies addressing ecosystem stability have included only one of numerous potential disturbances. For instance, the responses of ecosystems, plants and soil microbial communities to elevated CO 2 or to changes in precipitation patterns have been the subject of major research efforts (Easterling et al., 2000). A rapidly growing number of studies have taken a more comprehensive approach to investigating interactive and cumulative effects between multiple co-occurring disturbances such as elevated CO 2 and warming (Crain et al., 2008;Wu et al., 2011;Liang and Balser, 2012). However, while there is some generality in our understanding of the effects of single or simultaneous disturbances, the consequences of temporal series of disturbances for ecosystem functioning and stability are unclear. As the research front on the effects of human-driven environmental changes on ecosystems advances, it is now a timely challenge to anticipate the dynamics of the ecosystems response to multiple sequential disturbances (O'Gorman et al., 2012).
Disturbances are a strong driver of ecosystem dynamics over time. Whether the disturbance is of single or multiple nature, it is the ecosystem response to the series of disturbances that shapes its adaptive trajectory over time and hence the stability of its functioning. Because of their central role in Earth's biogeochemical cycles (Falkowski et al., 2008), the effects of repeated disturbances of same nature, e.g., pollution or drought episodes on microbial communities have been investigated in several studies, often in relation to pollution induced community tolerance after exposure to heavy metals (Wakelin et al., 2014;Azarbad et al., 2016). These studies either revealed increased ecosystem adaptability (Bouskill et al., 2013) or ecosystem disruption (Shade et al., 2012), highlighting that history of disturbance regimes may play a role in responses of microbial communities toward new disturbances. However, ecosystems are also facing sequences of disturbances of multiple natures and above mentioned studies appear to be of limited value for predicting ecosystem response to a new or any type of disturbance. Series of disturbances of different nature have been considered when assessing whether a first disturbance could mediate the ecosystem response to a subsequent second disturbance (Crain et al., 2008;Darling and Côte, 2008;Philippot et al., 2008;Jackson et al., 2016;Jurburg et al., 2017). Although no clear overall pattern of response for the stability of soil microbial communities has emerged from such studies (Griffiths and Philippot, 2013), this approach could undeniably contribute to improve understanding of soil functioning. However, studies assessing the importance of the sequence in a succession of disturbances (i.e., disturbance chronology) are still lacking.
Here, we examine whether the responses of soil microbial community structure and function differ when subjected to the same series of disturbances but in different chronological order. Indeed, a disturbance B occurring after a disturbance A may very well displace the ecosystem to a different position on the adaptive trajectory, compared to its position if it had been exposed to these two disturbances in the reverse order. Therefore, we hypothesize that the disturbance chronology would affect ecosystem stability in terms of resistance (i.e., the ability to withstand a disturbance) and resilience (i.e., the capacity to recover after being disturbed). We studied the multiple functional responses of a soil system to sequences of three disturbances using heat, freeze-thaw and anaerobic cycles as model pulse disturbances, i.e., short-term ceasing disturbances. We focused on nitrogen cycling as an ecosystem function because nitrogen is the major nutrient limiting primary production in terrestrial ecosystems (LeBauer and Treseder, 2008). Among Earth-system processes, the nitrogen cycle is also one which was pushed by human activities outside critical thresholds representing the safe operating space (Rockström et al., 2009).
Soil Sampling and Experimental Design
Soil samples were collected from the Epoisses site in France (47°30′22.1832″N, 4°10′26.4648″E) during autumn 2014. During summer 2014, 12 days with temperatures > 30 °C were counted, but the average summer temperature that year was relatively close to the long-term average for that area. Winter and summer were rainier than average that year, while spring was drier than average. Given the clayey nature of the soil, an increase in precipitation compared to normal might have caused anoxic conditions in the field. Soil properties were: clay, 43.6%; sand, 14.3%; silt, 33.1%; organic carbon, 0.14%; organic nitrogen, 0.12%; and pH 5.6. At each sampling site, soil was collected from four locations ca. 20 m apart from one another, by pooling 5 soil cores (20 cm depth) from a 1 m × 1 m area at each location. All following steps were conducted keeping the four replicate samples independent. All soils were sieved to 4 mm. A total of 144 plasma flasks filled with 100 g of soil were then closed with sterile lids allowing gas exchange between the atmosphere of the flask and ambient air. The microcosms were incubated at 20 °C and regularly opened in a sterile laminar flow hood when adjusting the water holding capacity (WHC) to between 60 and 80%.
Disturbance Sequences
We considered three qualitatively different disturbances: freeze-thaw (−20 °C; F), heat-drought (42 °C; H) and anoxia (A) cycles. These disturbances were chosen as model disturbances of interest because they have been used in multiple studies in microbial ecology (Sharma et al., 2006; Yergeau and Kowalchuk, 2008; Griffiths and Philippot, 2013). Soil microcosms (n = 120) were subjected to disturbance cycles consisting of two periods of 30 h of a given disturbance separated by a 40 h interval at 20 °C. For the heat-drought disturbance, microcosms were placed in an incubator at 42 °C; for the freeze-thaw (−20 °C) disturbance, microcosms were placed in a −20 °C cold room; and for the anoxia disturbance, oxygen was removed from the microcosms using a gas pump. Each disturbance cycle was then followed by 3 weeks of incubation at 20 °C with the WHC maintained between 60 and 80% for all microcosms (Supplementary Figure S1). Note that soil humidity was monitored over the course of the experiment, allowing us to control the WHC whatever the disturbance regime of the microcosms. Three cycles of disturbances were performed with either repeated disturbances of the same nature or compounded disturbances of different nature in every possible order (n = 4). Control microcosms were incubated under the same conditions without being exposed to disturbance cycles, resulting in a total of 140 microcosms. We destructively sampled microcosms at day 9 (T0: before any disturbance), day 36 (T1: 3 weeks after the 1st disturbance cycle), day 64 (T2: 3 weeks after the 2nd disturbance cycle), day 92 (T3: 3 weeks after the 3rd disturbance cycle) and day 148 (T4: 10 weeks after the 3rd disturbance cycle). Note that measurements over time are independent because different microcosms were sampled at each time point.
Nitrogen Pools
Mineral nitrogen pools (NO 3 − and NH 4 + ) present in soil were extracted using 50 ml of KCl 1M that was added to ca. 10 g fresh soil, shaken vigorously (80 rpm for 1 h at room temperature), filtered and kept frozen until quantification according to ISO standard 14256-2 (Calderón et al., 2017). Quantification was performed using at least two blanks in each series by colorimetry in a BPC global 240 photometer.
Potential Nitrification Activity (PNA)
Potential nitrification activity (PNA) was determined according to ISO 15685. Briefly, 1.4 mM ammonium sulfate was added to 10 g of fresh-weight soil supplemented with 500 mM sodium chlorate to block the oxidation of nitrite. Ammonium oxidation rates were determined in each sample by measuring the accumulated nitrite every 2 h over 6 h via a colorimetric assay (Kandeler, 1995).
Potential Denitrification Activity (PDA) and Potential N 2 O Emissions
Potential denitrification activity (PDA) (N 2 O + N 2 ) and potential nitrous oxide emissions (N 2 O) were measured using the acetylene inhibition technique (Yoshinari and Knowles, 1976). For each sample, 10 g of fresh weight soil was wetted with 20 ml of distilled water and was amended with a final concentration of 3 mM KNO3, 1.5 mM succinate, 1 mM glucose, and 3 mM acetate. To determine the potential denitrification activity, acetylene was added to reach 0.1 atm partial pressure followed by 30 min incubation at 25 • C and agitation (175 rpm). Gas samples were taken every 30 min for 150 min (Pell et al., 1996). The N 2 O concentrations were determined using a gas chromatograph (Trace GC Ultra, Thermo Scientific) equipped with an EC-detector.
Quantification of Microbial Communities
DNA was extracted from 250 mg dry-weight soil samples according to ISO standard 11063 "Soil quality - Method to directly extract DNA from soil samples" (Petric et al., 2011). Total bacterial communities were quantified using 16S rRNA primer-based qPCR assays (Muyzer et al., 1993; Ochsenreiter et al., 2003). Quantification of the bacterial and archaeal ammonia-oxidizers (AOB and AOA, respectively) was performed according to Leininger et al. (2006) and Tourna et al. (2008), whereas quantification of denitrifiers was performed according to Henry et al. (2004, 2006) and Jones et al. (2013). For this purpose, the genes encoding catalytic enzymes of ammonia oxidation (bacterial and archaeal amoA), of nitrite reduction (nirK and nirS) and of nitrous oxide reduction (nosZI and nosZII) were used as molecular markers. Although not covering the full genetic diversity of each group, the nirS and nirK primer sets used still allow for a comparative analysis of the relative abundance of each across the different soil samples by sampling a standard subset of each group for which denitrification functionality is verified (Penton et al., 2013). Reactions were carried out in a ViiA7 (Life Technologies, United States). Quantification was based on the increasing fluorescence intensity of the SYBR Green dye during amplification. The real-time PCR assays were carried out in a 15 µl reaction volume containing SYBR green PCR Master Mix (Absolute Blue QPCR SYBR Green Low Rox Mix, Thermo, France), 1 µM of each primer, 250 ng of T4 gene 32 (QBiogene, France) and 0.5 ng of DNA, as previously described. Three independent replicates were used for each real-time PCR assay. Standard curves were obtained using serial dilutions of linearized plasmids containing appropriate cloned target genes from bacterial strains or environmental clones. PCR efficiency for the different assays ranged from 70 to 99%. No-template controls gave null or negligible values. The presence of PCR inhibitors in DNA extracted from soil was estimated by mixing a known amount of standard DNA with soil DNA extract prior to qPCR. No inhibition was detected in any case. qPCR data are presented as the number of copies of a given gene per ng DNA.
Assessment of Microbial Community Composition and Diversity
A 2-step PCR approach was used for amplification of the V3-V4 hypervariable region of the 16S rRNA gene according to Berry et al. (2011). The first step was run on three subsamples that were subsequently pooled. It consisted of 20 µM of the forward primer 515F (5′-GTGCCAGCMGCCGCGGTAA-3′), 20 µM of the reverse primer 806R (5′-GGACTACHVGGGTWTCTAAT-3′) (Eurogentec, Seraing, Belgium), together with 10X buffer with MgSO4 (Promega), 1 U Pfu DNA polymerase, 20 µM dNTPs (MP Biomedicals, Europe), 250 ng T4 gp32 bacteriophage (MP Biomedicals, Europe) and 50 ng DNA template in a final volume of 25 µL. Reaction conditions were as follows: 2 min at 95 °C followed by 20 cycles of 30 s at 95 °C, 30 s at 53 °C and 60 s at 72 °C on an MJ Research PTC-200 Thermal Cycler (Bio-Rad, CA, United States). In the second step, 1 µL of the pooled PCR products of the first step was amplified in triplicate in a 10-cycle PCR using the forward primers preceded by 10-base-pair-long barcodes, the sequencing key and the forward sequencing adapter; the reverse primers were preceded by the sequencing key and the reverse sequencing adapter only. The final PCR products were pooled and extracted from a 2% agarose gel with the QIAEX II kit (Qiagen, France) and finally quantified using the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen, Cergy-Pontoise, France). Pyrosequencing was performed by the Genoscreen sequencing service (Lille, France) on a Roche 454 FLX Genome Sequencer using Titanium chemistry (Roche Diagnostics).
Bioinformatic Analysis of the 16S rRNA Amplicons
The sequences obtained were analyzed using the QIIME pipeline software (Caporaso et al., 2010b). Sequences of poor quality (score < 25 on a 50 base pair sliding window) or shorter than 230 base pairs were removed. Reference-based chimera detection was performed using the greengenes representative set of 16S rRNA sequences, and 1,297,290 quality-filtered reads were clustered into Operational Taxonomic Units (OTUs) at 97% similarity using USEARCH (Edgar, 2010). Representative sequences for each OTU (6394 OTUs retrieved) were then aligned using PyNAST (Caporaso et al., 2010a) and their taxonomy assigned using the greengenes database (http://greengenes.lbl.gov/Download/Sequence_Data/Greengenes_format/). No chloroplast or mitochondrial OTUs were retrieved in our dataset. A phylogenetic tree was then constructed using FastTree (Price et al., 2010). Raw sequences were deposited at the NCBI under the accession number SRP117152. The process of raw sequence submission was greatly simplified by using the make.sra command of the Mothur software (Schloss et al., 2009). Diversity metrics, i.e., Faith's Phylogenetic Diversity (Faith, 1992), richness (observed species) and evenness (Simpson's reciprocal index), describing the structure of microbial communities were calculated based on OTU tables that were rarefied to 2200 sequences per sample, corresponding to the minimum number of sequences in a given sample, from which singletons were removed. Unweighted and weighted UniFrac distance matrices (Lozupone et al., 2010) were also computed to detect global variations in the composition of microbial communities. Principal Coordinates Analyses (PCoA) were then calculated and plotted. Discriminant OTUs between control and disturbed microcosms were detected using the pamr package (Wood et al., 2007).
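As an illustration of the α-diversity metrics listed above, the short Python sketch below (not the QIIME/R workflow actually used) rarefies a single sample's OTU counts to the 2200-read depth mentioned in the text and computes observed richness and Simpson's reciprocal index; the count vector is randomly generated and purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
counts = rng.integers(0, 200, size=300)   # hypothetical OTU counts for one sample

def rarefy(counts, depth, rng):
    """Subsample 'depth' reads without replacement from an OTU count vector."""
    reads = np.repeat(np.arange(len(counts)), counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=len(counts))

rarefied = rarefy(counts, 2200, rng)       # same depth as used in the study
p = rarefied[rarefied > 0] / rarefied.sum()

observed_species = int((rarefied > 0).sum())   # richness
simpson_reciprocal = 1.0 / np.sum(p ** 2)      # evenness (Simpson's reciprocal index)
print(observed_species, round(float(simpson_reciprocal), 1))
```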
Statistical Analyses
FIGURE 1 | Aggregated impact of repeated disturbances on ecosystem properties and functions. The "Ecosystem Aggregated Impact" (EAI) was calculated as the sum of the absolute values of Hedges' g for the 26 studied variables; the corresponding variance is the sum of the variances of each variable's Hedges' g, and 95% confidence intervals are represented for each treatment. Panel (A) corresponds to the freeze-thaw disturbance (F), panel (B) to the heat disturbance (H), and panel (C) to the anoxia disturbance (A). In each panel, the corresponding treatment effects on ammonium and nitrate pools as well as on selected ecosystem properties and functions are given. Note that the control treatment cannot be plotted on this figure because the EAI is calculated as an effect size of a given treatment relative to the control. Different letters above the bars indicate significant differences.

All statistical analyses were performed in R Studio (version 3.0.2) using the following R packages: vegan (Oksanen et al., 2016), RColorBrewer (Neuwirth, 2014), gplots (Warnes et al., 2016), and car (Fox et al., 2016). Differences in gene copy abundance (16S rRNA, bacterial and archaeal amoA, nirK and nirS, nosZI, and nosZII), total nitrogen, ammonium and nitrate concentrations, and α-diversity indices were tested using ANOVAs at each time point with the following model: Y_ij = µ + treatment_i + residual_ij, followed by Tukey HSD tests. Normality and homogeneity of the residual distributions were inspected, and log-transformations were performed when necessary. Variance partitioning techniques were used to explain variations of the different nitrogen pools (i.e., ammonium or nitrate) by variations of N-cycle microbial activities (nitrification, potential N2O emissions, potential denitrification and N2O emission ratio), variations of the abundance of different N-cycle microbial guilds (16S rRNA, AOA and AOB, nirK and nirS, nosZI, and nosZII), and variations of the total microbial diversity (observed species, Faith's PD and Simpson's reciprocal indices).
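A minimal sketch of this per-time-point test in R is shown below; the data frame and column names (df, gene_copies, treatment) are illustrative assumptions rather than the authors' actual objects, and the model simply instantiates the ANOVA described above.

```r
# Minimal sketch (assumed data frame and column names): one-way ANOVA followed by
# Tukey HSD at a single time point; 'df' holds one row per microcosm.
df$log_abundance <- log10(df$gene_copies)               # log-transform when residuals require it

fit <- aov(log_abundance ~ treatment, data = df)        # Y_ij = mu + treatment_i + residual_ij
shapiro.test(residuals(fit))                            # check normality of residuals
car::leveneTest(log_abundance ~ treatment, data = df)   # check homogeneity of variances
TukeyHSD(fit)                                           # pairwise treatment comparisons
```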
Ecosystem Aggregated Impact
We calculated effect sizes, and their respective 95% confidence intervals, using Hedges' g, an estimate of the standardized mean difference that is not biased by small sample sizes (Gurevitch et al., 2001) and is classically used in ecology to quantify the effects of disturbances on ecosystem properties, for each variable comparing control and treated samples. We calculated an "Ecosystem Aggregated Impact" (EAI) as the sum of the absolute values of Hedges' g for the 26 studied variables (List A, Supplementary Material). Note that we chose to take the sum of absolute values because we did not want to make subjective a priori assumptions about what would constitute a better or worse performance for a given variable. The objective of this study was not to determine whether the ecosystem performances of disturbed microcosms were better or worse than those of the controls, but to quantify how much they changed. The corresponding variance was calculated as the sum of the variances of each variable's Hedges' g, which is valid for approximately normally distributed variables. Treatments with non-overlapping 95% confidence intervals were then considered significantly different.
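To make the aggregation explicit, a small R sketch follows; the function is a standard Hedges' g implementation with the usual small-sample correction and variance approximation, and the object names (treated_mat, control_mat) are assumptions for illustration, so details may differ from the authors' exact calculation.

```r
# Minimal sketch (assumed implementation): Hedges' g per variable and the
# Ecosystem Aggregated Impact (EAI) with its approximate 95% confidence interval.
hedges_g <- function(treated, control) {
  n1 <- length(treated); n2 <- length(control)
  sd_pooled <- sqrt(((n1 - 1) * var(treated) + (n2 - 1) * var(control)) / (n1 + n2 - 2))
  j <- 1 - 3 / (4 * (n1 + n2) - 9)                      # small-sample bias correction
  g <- j * (mean(treated) - mean(control)) / sd_pooled
  var_g <- (n1 + n2) / (n1 * n2) + g^2 / (2 * (n1 + n2))
  c(g = g, var_g = var_g)
}

# 'treated_mat' and 'control_mat' are assumed matrices with one column per studied variable.
per_var <- sapply(seq_len(ncol(treated_mat)),
                  function(k) hedges_g(treated_mat[, k], control_mat[, k]))
eai     <- sum(abs(per_var["g", ]))                     # EAI: sum of |g| over the 26 variables
eai_var <- sum(per_var["var_g", ])                      # variance: sum of per-variable variances
eai_ci  <- eai + c(-1, 1) * 1.96 * sqrt(eai_var)        # approximate 95% confidence interval
```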
Resilience or Global Drift of Microbial Communities and N-Related Functions After Repeated Disturbances of the Same Nature
We calculated an index describing the aggregated measures of the impact of disturbances on ecosystem functions and properties (EAI). We observed three different patterns of EAI depending on the nature of the disturbances. When facing a series of freeze-thaw disturbances, the measured functions and properties showed an overall tendency to resilience, defined here as a relative proximity to control microcosms, with significantly decreasing EAI values from T2 [EAI = 36.5; 95% CI = (27.2-45.7)] to T4 [EAI = 18.2; 95% CI = (10.1-26.3)] (Figure 1A). We detected a significant and transient NO3− accumulation in disturbed microcosms compared to control ones (Figure 1A). These shifts in NO3− pools were significantly positively related to shifts in the abundance of ammonia oxidizing archaea and to the relative proportion of ammonia oxidizing bacteria, but negatively related to shifts in PDA (Supplementary Table S1). This indicates that denitrification was temporarily slowed down by repeated freeze-thaw disturbances but recovered after 10 weeks, while the disturbances did not impact nitrification over the course of the experiment, as shown by the absence of difference at all time points between control and disturbed microcosms (Figure 1A). This supports previous findings reporting that freeze-thaw cycles had no inhibitory effect on the nitrification potential in several different soils (Yanai et al., 2004).
FIGURE 2 | Aggregated impact of compounded disturbances with alternative chronologies on ecosystem properties and functions. The "Ecosystem Aggregated Impact" was calculated as the sum of the absolute value of Hedges' g for all studied variables. The corresponding variance is the sum of the variance of each variable's Hedges' g. 95% confidence intervals are represented for each treatment. Panel (A) corresponds to T3 and panel (B) to T4. Disturbance sequences are encoded with F: freeze-thaw, H: heat and A: anoxia. Note that the control treatment cannot be plotted on this figure because each EAI is calculated as an effect size of a given treatment relative to the control. Different letters above the bars indicate significant differences.
Heat disturbances caused relatively strong modifications of soil functions and properties (EAI between 33.5 [95% CI = (24.3-42.7)] and 51.6 [95% CI = (41.3-61.9)]) (Figure 1B) that remained over time (no significant differences between EAI values at T1, T2, T3, and T4). We found a significant accumulation of NO3− in heat-disturbed microcosms compared to control ones during the course of the experiment. PDA decreased significantly in disturbed compared to control microcosms and was more heavily impacted than potential N2O emissions, especially at T3 and T4. This decrease in potential N2O emissions was negatively correlated to NO3− accumulation. These results parallel those from previous studies highlighting the alteration of microbial processes such as N-cycling in response to a heat disturbance (Mooshammer et al., 2017). We also found that repeated heat disturbances were the only series of same-nature disturbances having an impact on bacterial diversity, with a significant decrease of phylogenetic diversity and richness in disturbed microcosms at T2, T3, and T4, associated with NO3− accumulation (Supplementary Table S1 and Figure 1B). Former studies showing weak relationships between microbial diversity and ecosystem functions suggested that functional redundancy, i.e., the ability of different species to perform similar roles under the same environmental conditions, was important in microbial communities (Wertz et al., 2007; Schimel and Schaeffer, 2012; Philippot et al., 2013). In contrast, we found that the modifications in N-cycling were related to a significant decrease of bacterial richness in disturbed microcosms, which challenges the extent of functional redundancy. However, the degree of redundancy within different microbial functional groups is still hotly debated since divergent and controversial results have been reported (Bell et al., 2005; Allison and Martiny, 2008; Strickland et al., 2009; Calderón et al., 2017).
We observed an apparent short-term resistance of ecosystem properties and functions to repeated anoxia disturbances [relatively constant EAI over T1: EAI = 19.3; 95% CI = (10.9-27.6), T2: EAI = 26.9; 95% CI = (18.3-35.5) and T3: EAI = 18.7; 95% CI = (10.4-26.9)] (Figure 1C). However, this period of relative stability was followed by strong and significant alterations of soil properties and functions after 10 weeks [EAI = 52.4 at T4] (Figure 1C). Thus, larger modifications in N-cycling were observed at T4, with increased PDA and potential N2O emissions (Figure 1C). This stimulation of denitrification (a facultative respiratory process during which nitrate is reduced into gaseous nitrogen when oxygen is limited) in microcosms exposed to anoxia, and the concomitant depletion of both NO3− and total N (Supplementary Table S1), was largely expected. Interestingly, we also observed a transient NH4+ accumulation after repeated anoxia cycles, especially at T3, that was not related to a change in potential nitrification, nor to a change in the abundance of AOA or AOB, although the abundance of AOA decreased at T4.
Overall, these results suggest that, depending on the nature of the disturbance, repeated environmental disturbances can lead either to a resilience of soil properties and functions once the disturbance ceases or to a shift in soil properties and functions, indicating that a number of repeated pulse disturbances can gradually impair the ecosystem's capacity to sustain its domain of stability (Villnas et al., 2013). However, not only disturbance frequency but also disturbance intensity can alter ecosystem stability. Because it is not feasible to assess differences in intensity between various types of disturbances, the effects of the nature of a disturbance and of its intensity are intrinsically linked in our study.
The Chronology of Compounded Disturbances Impacting Soil Ecosystems Determines Microbial Community Composition as Well as Ecosystem Properties and Functions
After applying the selected disturbances in every possible order, the aggregated impact of compounded disturbances on ecosystem properties and functions was calculated as described above. Results of these analyses revealed a significant effect of the chronology of disturbances on the EAI, which supports our hypothesis (Figure 2). This was particularly obvious for the 'anoxia/heat/freeze-thaw' sequence, whose impact after 3 weeks (T3) was almost two times stronger than that of the 'heat/anoxia/freeze-thaw' sequence [EAI = 23.2 (14.9-35.7)]. A striking result of our study is that differences between sequences of disturbances were even more pronounced at T4 than at T3, with the 'anoxia/freeze-thaw/heat' sequence having the strongest impact and the 'freeze-thaw/anoxia/heat' one being the least disturbing. While the idea that microbial communities display a high resistance and resilience is pervasive in ecology (Allison and Martiny, 2008), our results suggest instead a poor resilience since legacy effects of compounded disturbances increased with time. Because 10 weeks elapsed after the last disturbance, it is not possible to decipher whether the differences observed at T4 were still increasing or whether they had reached a plateau somewhere in between the two sampling events.
When considering individual variables separately, we identified the abundance of AOB, of nirK-denitrifiers (at T3) and of the nosZII clade (at T3 and T4) as significantly impacted by the disturbance chronology (Figures 3C, 4A). Regarding AOB, estimates were up to five times lower for the 'anoxia/heat/freeze-thaw' sequence (AHF) compared to the 'heat/freeze-thaw/anoxia' one (HFA) at T3 (Figure 3A). Accordingly, Wessen and Hallin (2011) proposed that the abundance of ammonia-oxidizers could be a good bioindicator for soil monitoring, while the composition of this guild was suggested among the possible ecologically relevant biological indicators of soil quality (Ritz et al., 2009). For both the nirK- and nosZII-communities at T3, we found their maximum abundances in the 'freeze-thaw/anoxia/heat' sequence (FAH), while their abundances were significantly lower in the AHF (Figures 3B,C). At T4, the nosZII community was undetectable in two of the chronology treatments (the ones starting with the 'anoxia' disturbance), while its abundance in the FAH remained in the same range as that of the control treatment (Figure 4A). Not only the abundance of microbial guilds involved in N-cycling but also N-pools were affected by disturbance chronology. In particular, the NH4+ concentration was about three times higher in the 'freeze-thaw/heat/anoxia' sequence (FHA) than in the HFA (Figure 4B). However, disturbance chronology had no effect on the measured processes, and the disturbance sequences significantly impacting the abundance of N-cycling communities and the NH4+ pools were not the same. This adds fuel to the debate about the links between functional community abundances and corresponding process rates and/or pools of products (Rocca et al., 2015; Graham et al., 2016).
We also demonstrate that disturbance chronology caused significant shifts in bacterial phylogenetic diversity and richness at T3 but also at T4 (Figures 3D, 4C). Such differences in bacterial community diversity at T4, 10 weeks after the last disturbance had occurred, highlight the importance of legacy effects of the disturbance chronology, which can overwhelm the short-term effect of the last disturbance. This is exemplified by the FAH and the 'anoxia/freeze-thaw/heat' (AFH) sequences leading to significantly different EAI at T4 but not at T3 (Figure 2). The AHF sequence was detected as the one with the strongest impact on bacterial phylogenetic diversity and richness, with losses of up to ∼20% of the PD at T3 and ∼15% of the richness at T4 compared to control treatments (Figures 3D, 4C). In contrast, the 'heat/anoxia/freeze-thaw' sequence (HAF), for example, did not display any significant diversity loss at either time point. As expected, these changes in diversity levels due to the chronology of disturbances were most often concomitant with significant differences in bacterial community structure (measured as differences in weighted UniFrac distances, pairwise PERMANOVAs) at T3 (Figure 5A) and T4 (Figure 5B). Altogether, these results highlight that the chronology of compounded disturbances significantly impacts the resilience of ecosystem properties and functions through modifications of the bacterial community structure, abundance and diversity. Consequently, historical information about the succession of disturbances is another element that would improve our understanding of patterns in microbial communities, which is particularly important in the context of global change leading to increasing extreme climatic events.
Predicting the Strength of the Ecosystem Aggregated Impact of Series of Disturbances on Soil Properties and Functions
A significant part of the observed variance in EAI values caused by series of disturbances was explained by changes in total bacterial community structure and in N2O-reducer abundances (Table 1). We found that ∼29% of the EAI variance could be attributed to changes in weighted UniFrac distances. This means that a significant part of the changes observed in ecosystem properties and functions can be linked to changes in bacterial community composition. This indicates that the prediction of the stability of aggregated ecosystem properties and functions after series of disturbances is possible, to some extent, based on measured changes in microbial community composition. The concept of functional redundancy is often used as a justification for considering community composition as less relevant for ecosystem processes, even when facing environmental disturbances (reviewed in Griffiths and Graham et al., 2016). For example, Wertz et al. (2006) found that a decline in biodiversity did not impair either the resistance or the resilience of two N-cycling guilds following a heat disturbance. On the contrary, our results indicate that the impact of compounded series of disturbances on soil properties and functions is reflected by the degree of phylogenetic relatedness between microbial communities. This yields additional lines of evidence supporting the importance of microbial community composition and diversity for maintaining ecosystem functioning under fluctuating conditions (Yachi and Loreau, 1999; Fetzer et al., 2015). Moreover, our findings suggest that shifts in microbial community composition in disturbed environments can be a quantitative indicator of the degradation of ecosystem functioning. We were also able to detect OTUs that were significantly enriched/depleted in control versus disturbed microcosms. While the sensitive-to-disturbances OTUs belong to diverse phyla (Supplementary Figure S2A), 4 out of 6 of the resistant-to-disturbances OTUs have been classified as members of the Actinobacteria phylum (Supplementary Figure S2B, with two members of the Actinomycetales order and two members of the Thermoleophilia class). Besides the composition of the total bacterial community, we also detected shifts in N2O-reducer abundances (using the abundance of nosZI and nosZII genes as proxies) as indicative of changes in EAI values, with ∼35% and ∼13% of explained variance for nosZI- and nosZII-clade N2O reducers, respectively. This higher susceptibility of N2O-reducers to environmental changes makes the abundance of this guild an effective candidate marker of disturbances when considering N-cycle-related ecosystem functions. Altogether, using model, controlled pulse disturbance sequences that are not necessarily environmentally relevant, our results demonstrate the non-commutative property of sequential environmental disturbances of a different nature. The chronological order in which disturbances occur can make a soil ecosystem increasingly vulnerable to subsequent disturbances due to legacy effects durably affecting soil microbial community composition. The history of disturbances can therefore help us to elucidate the mechanisms underlying observed patterns in microbial communities. Ecosystems worldwide are experiencing higher pressures due to the combined and intricate effects of anthropogenic activities and climate change. Building a predictive framework of the impact of compounded disturbances on soil functioning strongly depends on the identification of ecological markers of disturbances for assessing ecosystem health in a context of sustainable land use. In this perspective, we show that the aggregated impact of series of disturbances on soil properties and functions was reflected by shifts in community composition, which suggests that assessing the stability of microbial communities can be an effective proxy for monitoring ecosystem functional resilience to compounded disturbances. This also further emphasizes the benefit of incorporating microbes into ecosystem process models.

TABLE 1 | Multiple linear regressions were used to assess the % of ecosystem functions and properties (EAI) variance explained by each explanatory variable. Note that regarding the weighted UniFrac distance, we calculated the distance between each treatment centroid and its corresponding control centroid; we therefore ended up with a vector of control-to-treatment distances that was used as an explanatory variable in the multiple regression. The % of EAI variance explained by each explanatory variable is given in parentheses. Significance levels: ***< 0.001, **< 0.01, *< 0.05.
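As a rough illustration of how the per-predictor regressions summarized in Table 1 could be set up, the R sketch below fits one linear model per candidate explanatory variable; the data frame and column names (eai_df, unifrac_to_control, nosZI_abundance, nosZII_abundance) are illustrative assumptions rather than the authors' actual objects.

```r
# Minimal sketch (assumed object and column names): per-predictor linear regressions of EAI,
# in the spirit of Table 1. 'eai_df' holds one row per disturbed microcosm, with its EAI value,
# the weighted-UniFrac distance from its treatment centroid to the control centroid,
# and nosZI / nosZII gene abundances.
predictors <- c("unifrac_to_control", "nosZI_abundance", "nosZII_abundance")

for (p in predictors) {
  fit <- lm(reformulate(p, response = "EAI"), data = eai_df)
  r2  <- summary(fit)$r.squared                          # share of EAI variance explained
  cat(p, ": ", round(100 * r2, 1), "% of EAI variance explained\n", sep = "")
}
```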
Mutational spectrum of DNA damage and mismatch repair genes in prostate cancer
Over the past few years, a number of studies have revealed that a significant number of men with prostate cancer have genetic defects in the DNA damage repair response and mismatch repair genes. Certain of these modifications, notably alterations in the homologous recombination repair (HRR) genes PALB2, CHEK2, BRCA1, BRCA2 and ATM, and in the DNA mismatch repair (MMR) genes MLH1, MSH2, MSH6 and PMS2, are connected to a higher risk of prostate cancer and more severe forms of the disease. DNA damage repair (DDR) is essential for constructing and diversifying the antigen receptor genes required for T and B cell development. DDR imbalance, however, results in stress on DNA replication and transcription, accumulation of mutations, and even cell death, which compromises tissue homeostasis. Through these impacts of DDR anomalies, tumor immunity may be affected, which may encourage the growth of tumors, the release of inflammatory cytokines, and aberrant immune reactions. In a similar vein, people who have altered MMR genes may benefit greatly from immunotherapy. Therefore, for these treatments, mutational genetic testing is indicated. MMR gene defects are also more prevalent than previously thought, especially in patients with metastatic disease, high Gleason scores, and diverse histologies. This review summarizes the current information on the mutation spectrum and clinical significance of DDR mechanisms, such as HRR and MMR abnormalities, in prostate cancer, and explains how patient management is evolving as a result of this understanding.
Introduction
Prostate cancer (PCa) is the most common cancer among men worldwide and a significant public health burden. The disease is diagnosed more frequently in men than any other type of cancer (Sanjose et al., 2011; Lozano et al., 2012; Sung et al., 2021). Previous studies (Siegel et al., 2011; El Noor and El Noor, 2015; Bugoye et al., 2019) have shown that PCa is the most frequent solid tumor in males and one of the main causes of cancer-related deaths in men worldwide. The incidence and mortality of PCa are alarming: 2,293,818 new cases are anticipated to be diagnosed between now and 2040, and a 1.05% rise in mortality is expected (Rawla, 2019). Both incidence and death from PCa are associated with age at the time of diagnosis. African-American men have higher incidence rates than white men; previous researchers reported 158.3 new cases per 100,000 African-American men and twice the mortality rate compared to data reported in white men, a disparity caused by a variety of social, environmental, and genetic factors (Panigrahi et al., 2019). PCa progression is typically gradual and symptom-free, and treatment may not even be necessary, but the most frequent complaints are difficulty urinating, increased urination frequency, and nocturia, though prostatic enlargement may induce similar symptoms. The advanced stages of the disease may present with back discomfort and urinary incontinence because the axial skeleton is the most usual site of bone metastatic disease.
In 2018, 1,276,106 new cases of PCa were reported worldwide, making up about 7.1% of all male malignancies (Bray et al., 2018; Rawla, 2019). Prostate cancer incidence rates vary by geographic area, and these variations have been connected to a range of modifiable and non-modifiable factors (Rawla, 2019; Sung et al., 2021). Smoking, diet, obesity, and exposure to chemicals are aspects of lifestyle that can be changed or managed to lower the chance of developing prostate cancer. Age, race/ethnicity, family history, and certain genetic alterations, such as mutations in the homologous recombination-related DNA damage repair genes PALB2, CHEK2, BRCA1, BRCA2 and ATM, and in the mismatch repair (MMR) genes MLH1, MSH2, MSH6 and PMS2, are risk factors that cannot be modified to reduce the risk of developing the disease (Lang et al., 2019; Sedhom and Antonarakis, 2019; van Wilpe et al., 2021). In this review, the mutation spectrum of HRR genes and DNA MMR genes involved in DNA damage repair mechanisms, and their therapeutic applications, are discussed.
Materials and methods
The peer-reviewed articles in this review spanned the period between January 2000 and June 2023 and were retrieved from PubMed and Google Scholar. The search terms "prostate cancer OR genetic alteration OR mutation variant OR gene OR DNA damage repair OR individual OR mismatch repair OR homologous recombination OR immunotherapy OR prevalence OR frequency OR genetic OR spectrum OR landscape" were used. In total, 1,222 publications from Google Scholar and 12,773,760 articles from PubMed made up the 12,774,982 results returned for our search phrase. Since broad key terms were employed, the search string's filtering mechanism also returned irrelevant results. The following inclusion criteria were used to manually screen the retrieved articles: 1) articles describing a phenotypic or genetic condition associated with prostate cancer; 2) articles that detail genetic variations; 3) articles describing gene changes related to DNA damage repair; 4) articles outlining genes involved in homologous recombination repair (HRR); and 5) articles discussing DNA damage repair modifications and the clinical or therapeutic use of the mismatch repair (MMR) genes. To retrieve and validate the reported mutations, the ClinVar, European Nucleotide Archive (ENA), DNA Databank of Japan (DDBJ), or GenBank genetic variant databases were used. With the exception of two articles published in 1998 and 1999, all other selected articles were published in 2000 or later. Articles with information irrelevant to this review, as well as those with redundant or duplicate information, were not selected. Thus, 174 articles were used in this review.
Race and ethnicity in prostate cancer
Hereditary factors are known to play a role in the higher mortality and incidence rates of PCa in African-American males compared to European-American men (DeSantis et al., 2016). A greater genomic mutation burden, which results in a more aggressive phenotype, is one of the characteristics of the ethnic disparities in PCa biology. Knowing how to target genetic anomalies such as DDR gene mutations offers a route to efficient treatments that can improve clinical outcomes (Kohaar et al., 2022; White et al., 2022). Compared to white males, men of African heritage and men from South America continue to have higher incidence and mortality rates of PCa (Belkahla et al., 2022). Men from the Middle East, North Africa, and Asia often have the lowest incidence of PCa in comparison to other ethnic groups; whereas non-Hispanic white males have a 1 in 8 lifetime chance of a PCa diagnosis, data from the National Cancer Institute suggest that men of African descent often have the greatest rate (1 in 6 men) (Akaza et al., 2011; Powell and Bollig-Fischer, 2013). A number of factors, including the downregulation of DNA repair genes, contribute to the increased incidence and mortality rate of prostate cancer in African men. When compared with non-Hispanic white men, African-American men were found to have upregulation of genes governing inflammatory pathways, including CCL4, IFNG, CD3, IL33, and ICOSLG, despite the downregulation of DNA damage response mechanisms (DeSantis et al., 2016; Nair et al., 2022). However, it should be highlighted that race is a social rather than a biological construct; in PCa, there have been conflicting results and challenges in demonstrating the prevalence of, and differences in, DNA repair gene mutations between races, and results ought to be handled with caution (Reizine et al., 2023). Prostate cancer is, moreover, a markedly heterogeneous disease (Tolkach and Kristiansen, 2018; Haffner et al., 2021). Because of these heterogeneities, there is no one-size-fits-all approach to treating PCa, which has led to controversies over how to treat patients (Shore et al., 2022). For instance, the issue of whether routine PCa screening is more advantageous than the risk of overdetection and overtreatment is still up for debate and necessitates individualized treatment approaches (Force, 2018; Shore et al., 2022). Although the promise of precision therapy in the treatment of PCa has increased due to advances in genetic testing, the use of genetic data by practitioners in developing countries is hampered by insufficient capacity, the number of patients requiring genetic testing, and accessibility (Szymaniak et al., 2020). Similarly, worries about overtreatment have changed recommendations for active monitoring, and physicians' approaches to managing PCa vary widely (Shore et al., 2022). According to research, more than 50% of respondents disagree on management concerns relating to advancing PCa (Saad et al., 2019). These findings, along with others of a similar nature, have highlighted the necessity of improved knowledge of the genetic mutation spectrum in prostate cancer (Finch et al., 2022) and of other concerted efforts to improve disease management (Gillessen et al., 2015; Gillessen et al., 2018; Saad et al., 2019; Gillessen et al., 2020).

Genetic mutation in prostate cancer
Genetic and epigenetic changes normally occur at different levels, and these genetic modifications have provided potential applications as biomarkers in cancers. The genetic origins of prostate cancer have been the subject of research for decades, because alterations in genes involved in the DNA damage repair pathways can increase the risk of developing PCa (Robinson et al., 2015a). Cells possess sophisticated networks of non-redundant mechanisms that detect and repair DNA damage such as base alterations, strand breaks, and interstrand crosslinks. Direct repair, base excision repair, nucleotide excision repair, mismatch repair (MMR), homologous recombination repair (HRR), non-homologous end joining (NHEJ), and the Fanconi anemia repair pathways are important DNA damage repair mechanisms. According to research, PCa is substantially more likely to develop in carriers of HRR and MMR gene mutations (Robinson et al., 2015a).
Mismatch repair gene mutations
DNA mismatch repair (MMR) is a post-replication mechanism used to correct base mismatches and replication-related insertion or deletion mistakes. Genetic instability is brought on by a loss of MMR gene function, resulting in a build-up of errors that typically occur during DNA replication (Chen et al., 2001). The risk of PCa in men with MMR gene mutations has been demonstrated to be significantly enhanced (Rantapero et al., 2020). Individuals who have MMR mutations are typically predicted to account for 2%-5% of PCa cases (Ritch et al., 2020; Ye et al., 2020), and they are generally identified by Gleason scores of 8 or higher and de novo metastases (Dominguez-Valentin et al., 2016; Ye et al., 2020; Jiang et al., 2021; Graham and Schweizer, 2022). The genetic mutations in MMR genes in the prostate may aid in understanding PCa carcinogenesis, which may have additional repercussions for ICBs and other types of treatment. This review focuses on the MSH2, MSH6, MLH1, and PMS2 genes (Table 1), which are more frequently found mutated in PCa and have recently been described by Pecina et al., though previous authors have reported on other MMR genes that are also connected to high microsatellite instability and the high tumor mutational load involved in PCa progression (Chen et al., 2011; Sedhom and Antonarakis, 2019; Pećina-Šlaus et al., 2020).
MutL homolog 1
The MutL homolog 1 (MLH1) protein is a member of the MutL family of DNA mismatch repair proteins. The MLH1 gene spans 56 kilobases, is located on chromosome 3, and consists of 11 exons coding for the MLH1 protein, which recognizes incorrect nucleotides, causing them to be excised and replaced in a strand-directed manner (Fukuhara et al., 2014; Zhen et al., 2018). Several PCa cohorts have been examined for the breadth of MLH1 gene mutations using sequencing data (Pande et al., 2012; Haraldsdottir et al., 2014; Dominguez-Valentin et al., 2016). MLH1 mutations were found in 0.7% of 150 mCRPC patients (Robinson et al., 2015b). Additionally, PCa with MLH1 gene mutations has been connected to a more acute form of the illness, a higher Gleason score, a lack of differentiation, and a higher rate of distant metastasis (Shenderov et al., 2019; Antonarakis et al., 2020). Frameshift, deletion, and missense mutations are the most prevalent MLH1 mutation types in PCa (McVety et al., 2005). The MLH1 mutations previously identified in PCa patients comprise the following variants associated with a higher risk of PCa: c.350C>T, c.588+5G>A, c.1537_1547delinsC, c.1667+2delTAAATCAinsATTT, and c.1732-2A>T (Raymond et al., 2013; Dominguez-Valentin et al., 2016).
MutS homolog 2 (MSH2) gene
The MSH2 gene is responsible for the production of proteins involved in the MMR mechanism. The MSH2 gene is located on the short arm of chromosome 2 (2p21) and encodes proteins that aid in DNA mismatch repair (Salo-Mullen et al., 2018; Zhen et al., 2018). In collaboration with the MutL homolog 1 (MLH1) protein, MSH2 detects and corrects mismatched DNA base pairs that occur during DNA replication (Chakraborty and Alani, 2016; Furman et al., 2021). The MSH2 gene is commonly altered in PCa by deletion, missense, and frameshift mutations, which also cause the build-up of neoantigens, raise the tumor mutational burden, and increase the density of tumor-infiltrating lymphocytes (Haraldsdottir et al., 2014; Dominguez-Valentin et al., 2016).
MutS homolog 6 (MSH6) gene
The MSH6 gene is found on the short arm of chromosome 2 (2p16.3) (Salo-Mullen et al., 2018; Zhen et al., 2018). The MSH6 gene participates in MMR pathways, and mutation of this gene has been associated with an increase in the likelihood of developing PCa (Pritchard et al., 2014). According to recent research, structural rearrangements such as insertions and deletions are what make MSH6 more susceptible to mutations (Dominguez-Valentin et al., 2016).
PMS1 homolog 2 (PMS2) gene
The PMS2 gene, with approximately 38,000 base pairs, is located on chromosome 7. The gene consists of 15 exons that code for the 862 amino acids that make up the PMS2 protein (Fukuhara et al., 2015). The produced protein is essential to the mismatch repair mechanism, which corrects small insertions and deletions as well as DNA mismatches that may occur during homologous recombination and DNA replication. As previously reported, genomic integrity is protected by DNA mismatch repair, which corrects mismatches brought on by DNA replication and recombination (Wu et al., 2003). A multitude of essential phases in mismatch repair are orchestrated by the human MutL-alpha heterodimer. As has been previously noted, the nuclear import of MutL-alpha may be the initial regulatory step in initiating the mismatch repair mechanism (Leong et al., 2009). When the PMS2 protein forms a heterodimer with the MutL homolog 1 (MLH1) gene product, the resulting structure is known as the MutL-alpha heterodimer. The MutL-alpha heterodimer's endonucleolytic activity is essential for the removal of the mismatched DNA, while the MutS-alpha and MutS-beta heterodimers identify mismatches and insertion/deletion loops (Reyes et al., 2015). Lynch syndrome (LS), a multi-organ cancer syndrome, is caused by genetic abnormalities in the four MMR genes (MLH1, MSH2, MSH6, and PMS2); PMS2 is the most recent MMR gene to be connected to Lynch syndrome (Hendriks et al., 2006). It has been hypothesized that PMS2 mutations, in contrast to MLH1 and MSH2 mutations, may be linked to a later stage of cancer development (Hendriks et al., 2006). Like several other MMR genes and many other genes associated with PCa, the PMS2 gene has also been identified as harboring polymorphisms and mutations (Fukuhara et al., 2015). Earlier studies have shown that males with PMS2 mutations are more likely to develop PCa (Haraldsdottir et al., 2014).
Mutation in homologous recombination repair genes
Genomic instability, and eventually cancer, is brought on by the accumulation of mutations caused by faults in the DNA damage response pathways (Bartek et al., 2001; Bartek and Lukas, 2003; Antoni et al., 2007), and the pathway that is frequently blocked is the homologous recombination pathway (van Wilpe et al., 2021). Since some of the proteins in this system are frequently altered in human cancers and in a number of heritable conditions predisposing to cancer, disruption of this route has been demonstrated to play a significant part in the genesis of cancer (Khanna and Jackson, 2001). In the face of various DNA-damaging events, the homologous recombination mechanism is essential for maintaining genomic integrity, and men who carry inherited mutations in essential homologous recombination pathways have a significantly elevated lifetime risk of developing PCa compared to non-carriers (van Wilpe et al., 2021). The ability to test for carriers of those mutations using DNA next-generation sequencing technology has enabled the creation of prospective risk-reduction strategies and the justification for therapeutic strategies in these individuals. There are currently promising new prospects for treating PCa in people who have mutations in homologous recombination repair genes. The use of immune-based therapies, chemotherapy with platinum, and PARP inhibitors are all viable treatment alternatives (Antonarakis et al., 2019). Castrate-resistant PCa (CRPC) patients have not responded well to immune checkpoint blockade (ICB) or inhibitor therapy; only 3%-5% of CRPC patients benefit from anti-PD-1 therapy (Antonarakis et al., 2019). Additionally, research has shown that using ICBs in conjunction with other treatments can improve response rates. For instance, the response rate of a combination of ipilimumab and nivolumab is predicted to be between 10% and 26% (Boudadi et al., 2018). When the homologous recombination repair genes BRCA1/2, ATM, PALB2, and CHEK2 are inactive, a variety of error-prone and non-conservative DNA repair pathways are used, increasing the incidence of cancer and worsening its prognosis while also causing genomic instability (Castro and Eeles, 2012; Amsi et al., 2020). The details of homologous recombination genes involved in DNA damage repair are summarized in Table 2 below.
BRCA1 and BRCA2
Epidemiological research has connected breast cancer gene 1 (BRCA1) and breast cancer gene 2 (BRCA2) mutations to the risk of PCa; nevertheless, 5%-15% of cases of prostate cancer are caused by high-risk hereditary variables (Ferrís-i-Tortajada et al., 2011). Pathogenic mutations, which account for less than 1% (BRCA1) and about 2% (BRCA2) of incident PCa cases, are likely to be the cause of the disease in mutation carriers (Kote-Jarai et al., 2011; Leongamornlert et al., 2012). Male BRCA2 carriers have a higher lifetime chance of acquiring PCa than BRCA1 carriers (Roy et al., 2012). The DNA damage repair (DDR) process uses non-conservative and potentially mutagenic methods when the BRCA1 and/or BRCA2 genes are absent, and there is evidence that this genomic instability contributes to the cancer risk associated with harmful BRCA gene mutations (Salmi et al., 2021). By the time they are 65 years old, BRCA1 and BRCA2 pathogenic mutation carriers have a relative risk of PCa that is enhanced by 1.8-3.8 and 2.5-8.6 times, respectively (Thompson et al., 2003; Agalliu et al., 2007; Leongamornlert et al., 2012). BRCA1 and BRCA2 gene mutation testing, as well as their correlation with higher Gleason scores (>8) and therapeutic use in PCa management, have drawn the attention of experts to familial PCa in particular (Amsi et al., 2020). Numerous authors have discussed the mutation spectrum in relation to PCa risk: in the Ashkenazi Jewish and Icelandic populations, the BRCA1 frameshift deletion 185delAG and frameshift insertion 5382insC, together with the BRCA2 999del5 founder frameshift deletion, have been linked to poor survival in young patients (Wilkens et al., 1999; Tryggvadóttir et al., 2007). In addition, PCa patients in the United Kingdom with significantly younger ages (<56 years) were found to have a two-base-pair deletion (5531delTT) and a four-base-pair deletion (6710delACAA) frameshift mutation in the BRCA2 gene (Gayther et al., 2000). Additionally, different studies have reported various mutation types and frequencies (Wilkens et al., 1999; Vazina et al., 2000; Ikonen et al., 2003; Gallagher et al., 2010; Castro et al., 2013; Maia et al., 2016; Shenoy et al., 2016); most of the reported mutations are frameshift mutations. The differences in the prevalence of BRCA1/2 mutations amongst populations may be due to differences in study sample size, inclusion criteria, patient ethnic origins, and other factors. In this review, some of the previously reported BRCA1 and BRCA2 pathogenic mutations associated with PCa in different populations are summarized in Table 3.
Ataxia telangiectasia mutated (ATM) gene
The ataxia telangiectasia mutated (ATM) gene is in charge of producing a serine/threonine kinase that is related to PI3K and is thought to be involved in maintaining genomic integrity. ATM is located on chromosome 11q22-23 and has 66 exons and a coding sequence of 9,168 base pairs (Choi et al., 2016). In the double-strand break (DSB) repair process, ATM functions as a signal transducer and is essential for the detection of DNA damage and the cellular response to it (Virtanen et al., 2019). The rearrangement of antibody genes during B-cell maturation or meiotic recombination, as well as ionizing radiation, chemotherapeutic drugs, or oxidative stress, can all result in DSBs, according to other findings (Lieber, 2011; Bednarski and Sleckman, 2012; Choi et al., 2016). RAG-induced DSBs trigger traditional DNA damage responses by activating ATM and DNA-PKcs, both of which are serine-threonine kinases belonging to the phosphatidylinositol-3-kinase (PI3K)-like family (Callén et al., 2009), similar to other DSBs created in the G1 phase of the cell cycle (Bednarski and Sleckman, 2012). Both ATM and DNA-PKcs function as transducers in the DNA damage response, phosphorylating a range of downstream effectors (Callén et al., 2009).
Due to the distinct and redundant functions that ATM and DNA-PKcs play in the response to RAG-mediated DSBs, immune system deficiencies and errors in DNA end repair occur in mice and individuals who lack either of these kinases (Bednarski and Sleckman, 2012). According to the information that is currently available, tumoral germline or somatic ATM mutations are found in 5%-8% of castration-resistant PCa tumors, an enrichment of approximately two times the frequency seen in localized PCa; in addition, men with pathogenic ATM mutations have an increased risk of developing PCa, which may also cause the condition to appear earlier (Giri and Beebe-Dimmer, 2016; Thalgott et al., 2018; Wokołorczyk et al., 2020). There is limited information about populations in Africa, although some populations are known to carry these mutations and are discussed in this review. In research conducted on 390 Polish men, eight genes were found mutated, and 76 males (19.5%) had a mutation in one of the BRCA1, BRCA2, ATM, CHEK2, MSH2, or MSH6 genes. A total of 11 mutations (2.8%) in ATM were among the reported gene variations. A few stop-gain variants in the ATM gene have been documented, including c.8545C>T, c.742C>T, c.5932G>T, and c.7096G>T. A frameshift mutation, c.7010_7011del, two splice acceptor variations, c.7630-2A>C, and two missense mutations, c.6095G>A and c.8147T>C, are also found in the ATM gene, but none of these alterations were reported in the control groups (Wokołorczyk et al., 2020).
Partner and localizer of BRCA2 (PALB2)
The partner and localizer of BRCA2 (PALB2) gene is found on chromosome 16 and encodes a protein that forms the BRCA1-PALB2-BRCA2 complex. PALB2 acts as a molecular scaffold that interacts with BRCA2 and is essential for homologous recombination and DNA double-strand break (DSB) repair (Sy et al., 2009). Fanconi anemia is also brought on by germline homozygous loss-of-function (LoF) mutations of PALB2, as with BRCA2 (Reid et al., 2007; Xia et al., 2007), although heterozygous LoF mutations have been associated with hereditary breast cancer and pancreatic cancer (Jones et al., 2009; Antoniou et al., 2014). Despite new correlations between pathogenic PALB2 mutations and an elevated risk of different cancers, research on the PALB2 gene's role in PCa has produced conflicting results. The authors of this review are aware of new results linking PALB2 to a statistically significant increase in the chance of developing PCa (Yang et al., 2020). While it has long been known that these variants increase the risk of breast cancer, data on PALB2's role in PCa have also been reported (Horak et al., 2019; Wokołorczyk et al., 2020; Bouras et al., 2022).
The authors also acknowledge that there has not been much research on PALB2 pathogenic mutations in PCa patients, but research conducted in Poland in 2021 by Wokolorczyk and others found that aggressive cancers with high Gleason scores of 8-10 were more frequently diagnosed in carriers of the two PALB2 founder mutations c.509_510delGA and c.172_175delTTGT, which together account for 80% of all PALB2 mutations. Furthermore, these founder mutations are located at the 5′ end of the PALB2 gene, as described by Wokolorczyk (Wokołorczyk et al., 2021). Additionally, the c.509_510delGA and c.172_175delTTGT variants result in a translational frameshift with a predicted alternate stop codon and are predicted to cause loss of PALB2 function through premature protein truncation or nonsense-mediated messenger RNA decay. These research findings strengthen the case that the two Polish founder mutations of PALB2 are also pathogenic for breast cancer (Wokołorczyk et al., 2021).
Checkpoint kinase 2 (CHEK2)
Cell cycle checkpoint kinase 2 (CHEK2) is a protein that participates in the DNA damage response in many distinct cell types. The ataxia telangiectasia mutated (ATM) protein activates CHEK2 to prevent the build-up of mutations and reduce the risk of cancer when DNA is damaged (Bartek et al., 2001; Bartek and Lukas, 2003; Antoni et al., 2007). By activating CHEK2 in response to DNA damage, the downstream targets of the checkpoint control pathways, such as p53, Cdc25A, Cdc25C, BRCA1, E2F1, Pml1, Plk3, and other substrates, are signaled. These substrates alter biological activities, resulting in cell cycle arrest, enhanced DNA repair, or apoptosis (Bartek et al., 2001; Bartek and Lukas, 2003; Antoni et al., 2007). The cell cycle regulator CHEK2 controls the homologous recombination DNA repair process and suppresses tumors, and genetic changes in it render cancers more susceptible to more advanced targeted therapies. CHEK2 is phosphorylated and activated in an ATM-dependent way in response to certain DNA-damaging substances (Matsuoka et al., 1998).
In cell-cycle checkpoint control, active CHEK2 and other DNA damage-triggered protein kinases stabilize TP53 or speed up Cdc25A degradation, thereby coordinating DNA repair, cell-cycle progression, and death (Matsuoka et al., 1998; Hirao et al., 2000; Falck et al., 2001; Zannini et al., 2014). The frameshift 1100delC mutation in the CHEK2 gene, which causes the translation of a truncated CHEK2 protein product lacking kinase function, is the gene variation that has received the most attention. A G>A substitution in the exon 2 splice site is also known as the IVS2+1G>A (or 444+1G>A) mutation. Because of this substitution, an aberrant splicing event occurs that causes the insertion of a 4-base-pair fragment and, finally, a frameshift that terminates at codon 154 in exon 3 (Dong et al., 2003). Men with the CHEK2 1100delC, IVS2+1G>A, and I157T missense mutations have a higher chance of developing PCa, although there is no evidence that these variants cause the disease in all cases of familial PCa. A previous study of 178 patients identified 13 CHEK2 mutations, of which 9 (10.7%) were found in the prostate cancers of 84 unselected patients and 4 (4.3%) in the early-onset cancers of 94 patients (Dong et al., 2003). Men from both Europe and Africa have shown similar evidence of a CHEK2 risk mutation for PCa: African men had a CHEK2 c.1343T>G OR of 3.03 (95% CI 1.53 to 6.03, p = 0.0006), whereas European men had a CHEK2 c.1312G>T OR of 2.21 (95% CI 1.06 to 4.63, p = 0.030) (Southey et al., 2016).
Genetic mutation and tumor antigenicity in prostate cancer
The tumor is more likely to be recognized by the immune system when it contains more neoantigens. The accumulation of these neoantigens is triggered by mutations in the DNA repair process: the prevalence of mutations, which might increase the number of neoantigens, increases if specific DNA repair mechanisms, including MMR and HRR, are altered (Jiricny, 2013; De Mattos-Arruda et al., 2020; Golan et al., 2021; Ma et al., 2022; Amodio et al., 2023). Neoantigen-reactive T cells have been demonstrated in studies to be one of the critical components of immunotherapy efficacy, particularly in tumors with a high tumor mutational burden (TMB) (Ye et al., 2020). When ICBs targeting CTLA4 and PD1 are used to treat some forms of cancer, tumors with high TMB have superior clinical results (Lu and Robbins, 2016; Ye et al., 2020; Graf et al., 2022; Shi et al., 2022). Cancer-associated antigens, including neoantigens derived from genetic mutations, are presented to CD8+ T cells through the major histocompatibility complex (MHC) on dendritic cells (DCs) and professional antigen-presenting cells (APCs). However, the majority of neoantigens are usually not recognized by the immune system; thus, identification of highly tumor-specific antigens is important for the development of personalized immunotherapy (Kiyotani et al., 2018; De Mattos-Arruda et al., 2020). In patients with advanced and other types of cancers, greater neoantigen load is a predictive factor for a better outcome when utilizing ICBs (Hopfner and Hornung, 2020). Further studies are needed to fill this knowledge gap because there is a lack of sufficient evidence in the literature to relate specific DDR gene mutations to TMB, resistance to DNA-damaging treatments, or radiation therapy.
Clinical and therapeutic implications of DNA damage response mutation in prostate cancer

In order to detect PCa and increase survival rates without overdiagnosing and overtreating patients, screening techniques have been developed (Force, 2018). Oncologists can use risk stratification in therapeutic decision-making, and the stratification groups are routinely used to establish the inclusion or exclusion criteria for patients to take part in studies looking at drugs that are specifically targeted to a given risk group (Rodrigues et al., 2012). There are a number of prostate risk stratification methods in place, including the Gleason score, PSA levels, clinical staging, and histological staging, among others; however, none of these is sufficient to predict outcomes (S.-Y. Wang et al., 2014). As a result, this drove the need for additional research until recently, when 333 primary prostate tumors were examined for the first time by the Cancer Genome Atlas Research Network, which discovered that 19% of them contained DNA repair gene abnormalities (Cancer Genome Atlas Research Network, 2015). New classification schemes based on molecular traits have been described, which will improve PCa precision medicine. Recent studies have demonstrated that DDR gene mutations raise the risk of PCa (Castro et al., 2019; Chung et al., 2020). Pathogenic mutation status in the DDR genes BRCA1/2 and ATM differs between risks for aggressive and indolent PCa and is linked to a younger age at death and a shorter survival period (Na et al., 2017). According to research, while some people have aggressive cancers that metastasize and result in disease-related death, many with indolent tumors can be successfully treated with early therapy or monitored (Sakellakis et al., 2022). Furthermore, studies conducted in various contexts with various populations have demonstrated that homologous recombination repair (HRR) and mismatch repair genes, which are mutated in a high proportion of patients, are involved in aggressive PCa biology (Lukashchuk et al., 2023). When managing familial cancer risk, knowledge of the therapeutic sensitivity to novel targeted medications imparted by these mutations in the DNA damage response could be life-saving (Herberts et al., 2023; Lukashchuk et al., 2023; Sorrentino and Di Carlo, 2023).
Genetic mutation as prognostic biomarkers
Patients with homologous recombination repair (HRR) and mismatch repair (MMR) gene mutations are more likely to have poor clinical outcomes, intraductal and cribriform architecture, and higher Gleason scores than patients without these mutations (Risbridger et al., 2015; Schweizer et al., 2019). According to studies, localized PCa with mutations in the HRR and MMR genes displays a very aggressive biological profile that resembles treatment-resistant metastatic disease. The dysregulation of MED12L/MED12 in localized PCa with HRR and MMR gene alterations is an illustration of this; it can be accounted for by a combination of genomic dysregulation of the Wnt-pathway mediator complexes and enhanced genomic instability, and this imbalance is a typical molecular feature of castration-resistant PCa (Taylor et al., 2017). Genomic instability, a well-known characteristic of cancer cells that is not present in normal cells, arises from DDR defects. Men with genomic instability seem to have a worse prognosis and are more likely to be diagnosed with PCa, especially in somatic cells with shorter telomere lengths (Heaphy et al., 2013). According to the most recent study that thoroughly investigated the function of DDR mutations in PCa survival (Zhang et al., 2022), the majority of DDR pathway mutations have been reported to be linked to a poor prognosis (Heaphy et al., 2013). Castro and others highlighted the function of DDR gene mutations as a predictive biomarker after finding a substantial relationship between germline BRCA1/2 gene mutations and PCa aggressiveness: in hereditary BRCA1/2 mutation carriers, the risk of stage T3/T4 cancer, nodal involvement, a Gleason score of 8 or higher, and metastases at the time of initial diagnosis was increased, and the survival time was also decreased (Castro et al., 2013). In another study, it was discovered that patients with lethal PCa had a significantly higher combined rate of germline BRCA/ATM alterations than those with localized PCa, and carriers with local disease or a diagnosis of metastases had a shorter prostate-cancer-specific survival time than non-carriers (Na et al., 2017). These findings were recently supported by research conducted by Lukashchuk, Zhang, Silvestri, and others (Silvestri et al., 2020; Zhang et al., 2022; Lukashchuk et al., 2023); thus, in addition to other findings reported elsewhere, the current data suggest that DDR gene mutations would be a helpful marker for disease screening. Additionally, among the reported DDR genes, the CHEK2 mutation provides reliable evidence of PCa risk in African men (Southey et al., 2016).
Genetic mutation and prostate tumor recognition by the adaptive immune response
Antigen recognition and immune cell activation in the adaptive immune system depend on the interaction between major histocompatibility complex (MHC) molecules on the surface of antigen-presenting cells (APCs) and costimulatory molecules found on the surface of naive T cells (Jiang et al., 2021). According to previous research, antigen recognition through MHC molecules by intratumoral T lymphocytes depends on the activation and the amount of T cells (Alexandrov et al., 2013; Jiang et al., 2021). As reported by Alexandrov and others (Alexandrov et al., 2013), the number of mutations that cause alterations in the amino acid sequence of a protein in a tumor is positively correlated with the quantity of tumor neoantigens. Prostate and various other forms of cancer differ in terms of intra-tumoral T-cell activation and the number of cells involved (Alexandrov et al., 2013). Additionally, when a non-synonymous mutation occurs in DDR genes, the newly produced peptide is identified as foreign by immune cells and elicits an adaptive immune response (Riaz et al., 2016). These non-synonymous mutation rates could rise as a result of DDR deficits, and they have also been linked to the synthesis of peptides that affect proinflammatory responses, particularly in MMR genes (Jiricny, 2013).
Genetic mutation in predicting immunotherapy
Immune checkpoint blockades (ICBs) are therapeutic drugs that specifically target negative inhibitory receptors on host T lymphocytes, such as cytotoxic T lymphocyte antigen 4 (CTLA-4) and programmed death 1 (PD1), which are frequently co-opted by tumors to hinder an efficient antitumor immune response. ICB acts on the immune system to strengthen antitumoral immunity, in contrast to conventional cancer therapies, which directly target cancer cells (Lei et al., 2021). The therapeutic efficacy of ICB using antibodies against inhibitory molecules produced on tumor and immune cells has been shown for a wide range of tumor types. Recent studies have demonstrated the efficacy, and led to the approval, of anti-CTLA-4, anti-PD1 and anti-PDL1 antibodies and their combinations with other medications. Compared to chemotherapy alone, the administration of these ICBs has considerably reduced the incidence of cancer (Wei et al., 2018; Akinleye and Rasool, 2019). The findings indicate that not all unselected PCa patients benefit from ICB because of the diversity of the disease and of therapy responses. Unfortunately, the exact reason why different people respond to ICB therapy in different ways is still unknown. Additionally, while employing ICB, increased cancer treatment costs and unnecessary immune-related side effects should be considered (Lei et al., 2021). However, significant research is needed to address inherent and acquired problems that limit the number of individuals who benefit from ICB. Similarly, for efficient utilization of ICBs, it will be necessary to identify potential candidates for these therapies; this helps to determine whether a patient will be a long-term responder, a short-term responder, or a non-responder (Lei et al., 2021). Through the use of such biomarkers, we would be able to treat responders to the fullest extent possible without unnecessarily harming non-responders (Lei et al., 2021).
Checkpoint inhibitors have also been demonstrated to benefit patients with advanced PCa (Teply and Antonarakis, 2017; Boudadi et al., 2018; Hansen et al., 2018; Antonarakis et al., 2020). In men with BRCA1/2 mutations, subset analyses of larger trials revealed a greater effect for single-agent PD-1 inhibitors. For instance, in a clinical trial of pembrolizumab monotherapy for PCa, Antonarakis and others found that men with BRCA1/2 or ATM mutations had an objective response rate of 12%, much greater than the 4% objective response rate seen in men without those mutations (Antonarakis et al., 2020). In a previous clinical trial of 78 men with mCRPC treated with the combination of ipilimumab and nivolumab, three of the five men (60%) whose tumors harbored altered HRR genes (BRCA1/2) demonstrated objective responses (Sharma et al., 2020). Similar results were also reported in a study carried out by Boudadi at Johns Hopkins (Boudadi et al., 2018).
Mutations in DNA damage repair as biomarkers for response to poly (ADP-ribose) polymerase inhibitors

The amount of DNA damage brought on by internal and external sources would be fatal in the absence of DNA damage repair (DDR). To prevent damage from being duplicated during the S-phase or transferred to daughter cells during mitosis, the DDR evolved as a structured network of pathways that repair DNA and halt cell division. Hanahan and Weinberg assert that DDR gene alterations can cause genomic instability and mutations that may result in cancer (Hanahan and Weinberg, 2011). Additionally, active oncogenes enhance replication stress by driving cells into the S-phase before they are ready, leading to DNA sequence alterations and aberrant DNA structures that must be resolved by the DDR (Saxena and Zou, 2022). According to Curtin, a mutation in one gene of the DDR pathway may result in enhanced activity of compensatory pathways, leading to resistance to DNA-damaging radiation therapy and chemotherapy (Curtin, 2023). The development of drugs that target the DDR for the treatment of PCa was initially justified by the need to circumvent these mechanisms of resistance. However, blocking the DDR alongside chemo- or radiotherapy also causes more harm to healthy cells while maintaining increased antitumor effects (Curtin, 2012). Previous researchers proposed that the anticancer mechanism of medicines targeting the DDR involves inhibition of PARP, which interferes with DNA damage repair and results in tumor cell death. While PARP inhibition is tolerated in normal cells, it has a significant impact on tumor cells with HRR mutations (Keung et al., 2019; Lang et al., 2019; Rose et al., 2020).
Single-strand breaks (SSBs) that persist as a result of PARP inhibition facilitate the formation of double-strand breaks (DSBs), which are mainly repaired by HRR (Yi et al., 2019). In cancer cells with altered HRR proteins, the coincident loss of PARP function and inadequate HRR repair leads to the accumulation of DSBs and, consequently, cell death (Yi et al., 2019). Authors have recently reported a breakthrough in PCa treatment that exploits DDR gene mutations to target the tumor selectively while sparing healthy cells with intact DDR pathways. This discovery has led to numerous clinical trials that take advantage of these defects, and numerous PARP inhibitors, such as olaparib, rucaparib, niraparib, talazoparib, and veliparib, have been successfully tested in patients with a variety of DDR gene mutations, including BRCA1, BRCA2, ATR, ATM, CHEK1/2, and PALB2 (Catalano et al., 2023).
In recent years, poly (ADP-ribose) polymerase inhibitors (PARPi) have been approved by the FDA for PCa treatment (Catalano et al., 2023). In one of the earliest trials of PARPi in patients with advanced PCa, patients with mutations in the DDR genes BRCA2, ATM, and BRCA1 showed significantly greater response rates to olaparib than non-carriers (Gurley and Kemp, 2001). Similarly, the results of the phase II TRITON2 study of rucaparib by Abida and others hastened its approval after demonstrating a 51% radiographic response rate in docetaxel-resistant mCRPC patients with BRCA1/2 mutations (Abida et al., 2020). Researchers have recently demonstrated that next-generation hormonal drugs combined with PARPi therapy for mCRPC provide the strongest PARPi benefit (Agarwal et al., 2022; Chi et al., 2022; Clarke et al., 2022; Herberts et al., 2023).
Conclusion
Despite the difficulties and variations in PCa diagnosis and treatment discussed above, current studies are revealing gaps in the recognized therapy guidelines. To better diagnose and treat PCa patients, a multidisciplinary approach involving molecular biologists, oncologists, and immunologists is necessary. Research on effective, targeted treatments for prostate cancer is therefore critically needed, as are investigations into the variety of genetic abnormalities prevalent in the disease. Future studies should focus on generating genetic data that could be used to improve the health of PCa patients through immunologically based targeted therapies or combination therapeutic options, particularly in African settings, where knowledge of the genetic spectrum of genes associated with the disease is limited.
TABLE 1. Summary of mismatch repair genes related to PCa.
TABLE 2. Summary details of homologous recombination genes related to PCa.
TABLE 3. Mutation spectrum of homologous recombination genes related to PCa.
Assessment of mitral regurgitation and mitral complex geometry in patients after transcatheter aortic valve implantation
Introduction: Mitral regurgitation (MR) of varying degrees and mechanisms is a common finding in patients with aortic stenosis, with different improvement after transcatheter aortic valve implantation (TAVI).
Aim: To evaluate the impact of TAVI on mitral complex geometry and the degree of MR.
Material and methods: A total of 31 patients (29.0% males) with severe aortic stenosis and moderate or severe MR at baseline who underwent TAVI were included in this study. Clinical and echocardiographic characteristics were determined at baseline and at 6 and 12 months.
Results: After TAVI, a decrease of MR vena contracta width (p = 0.00002, p = 0.00004) and of the aorto-mural mitral annulus diameter (p = 0.00008, p = 0.02), an increase of mitral annular plane systolic excursion (p = 0.0004, p = 0.0003), left ventricular stroke volume (p = 0.0003, p = 0.0004) and ejection fraction (p = 0.0004, p = 0.01), and a decrease of the major dimension of the left ventricle in the three-chamber view (p = 0.05, p = 0.002) were observed at both time points. Additionally, we observed a decrease of the distance between the heads of the papillary muscles (p = 0.003) at 6 months and a decrease of left atrium volume index (p = 0.01) and systolic pulmonary artery pressure (p = 0.01) at 12 months.
Conclusions: Patients with moderate or severe MR undergoing TAVI achieved significant improvement of mitral valve complex function resulting in the reduction of MR degree.
Introduction
The mitral valve apparatus is a complex consisting of the annulus, leaflets, commissures, tendinous cords, papillary muscles and the left atrial and ventricular walls [1,2]. Assessment of all components of the mitral apparatus is necessary to determine the mechanism of its incompetency. Mitral regurgitation (MR) can be classified as primary (due to structural abnormalities of the valve), secondary (functional), associated with left ventricle dysfunction (ischemic and non-ischemic MR) and as a consequence of atrial fibrillation (AF).
MR is frequently present in patients with severe aortic stenosis (AS) [3]. Mitral valve deformation and tethering, as well as an increase in transmitral pressure gradient caused by AS and increased left ventricle (LV) pressure, all contribute to MR [4]. Additionally, with increasing age, the incidence of AS and coronary artery disease may lead to development of functional ischemic MR [5,6]. In the PARTNER trial, the incidence of concomitant moderate-to-severe MR in patients with severe AS was approximately 20% [7,8]. However, the necessity of intervention for both valves may increase the risk of operation to an unacceptable level, especially in high-risk patients. Transcatheter aortic valve implantation (TAVI) is a feasible option for the treatment of severe AS in patients at high risk of conventional surgical aortic valve replacement [9][10][11].
Aim
In this study, we sought to assess the impact of TAVI on geometry and function of the mitral apparatus in patients with at least a moderate degree of MR in the pre-procedural transthoracic echocardiography (TTE). Detailed aims of this study were: 1) to characterize the degree of MR by TTE at 6-and 12-month follow-up after TAVI in patients with moderate or severe MR before TAVI; 2) to investigate the impact of TAVI on TTE parameters assessing geometry and function of the mitral valve complex: mitral annulus, left atrium and left ventricle.
Study population
This study was approved by the Bioethical Committee of the Jagiellonian University, Cracow, Poland. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki. All patients provided informed consent for participation in the study. We analyzed all patients with severe AS admitted to the Department of Cardiology and Cardiovascular Interventions, University Hospital in Krakow between January 2015 and December 2016. We included 31 patients (29.0% males, 82.1 ±5.2 years old) with severe AS and concomitant moderate or severe MR, recognized during TTE according to the recommendations for the echocardiographic assessment of native valvular regurgitation [12]. Patients without a sufficient acoustic window were excluded. All patients had high risk or serious contraindications for surgical aortic valve replacement and were qualified for TAVI by a multidisciplinary "Heart Team". Clinical and TTE follow-up was conducted through control visits of the patient at 6 and 12 months after TAVI (Edwards Sapien XT valve; Edwards Lifesciences, Irvine, CA, USA, Evolut R Medtronic Scientific, Minneapolis, MN, USA).
Echocardiographic assessment
All patients underwent TTE, including M-mode, two-dimensional, three-dimensional, and Doppler imaging, at baseline and during follow-up. Close attention was paid to all acquisition settings in order to maximize image quality. For better visualization of mitral and aortic valve anatomy and function, two-dimensional and three-dimensional transesophageal echocardiography (TEE) was performed at baseline. All TTE and TEE examinations were performed using Vivid E9 (GE Healthcare, Waukesha, WI, USA). The post-processing evaluation was performed using a dedicated workstation (EchoPAC, GE Healthcare, Waukesha, WI, USA). The linear measurements were taken using virtual calipers. The measurements were performed according to the current recommendations [12,13].
The following qualitative and quantitative echocardiographic parameters were used to assess the severity of MR: vena contracta (VC) width (in the case of multiple jets, the vena contracta of the dominant jet) and peak E-wave. Mitral valve geometry was assessed with the mitral annulus diameters at end-diastole, namely the aorto-mural diameter in the parasternal long-axis TTE view and the intercommissural diameter in the modified apical two-chamber view, together with the aorto-mural annulus/anterior leaflet ratio and the maximal and mean transvalvular mitral gradient. Aortic valve parameters included the maximal and mean transvalvular aortic gradient, peak aortic valve velocity (Vmax), and aortic valve area (AVA). An aorto-mural diameter of the mitral annulus > 35 mm or an aorto-mural annulus/anterior leaflet ratio > 1.3 was considered to indicate mitral annular dilatation [12,14], as illustrated in the sketch below.
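As a minimal illustration of the dilatation criterion just described, the following Python sketch flags mitral annular dilatation from the two measurements; the function name, input units (mm), and example values are assumptions made for illustration and are not part of the study protocol.

```python
def annular_dilatation(aorto_mural_diameter_mm: float,
                       anterior_leaflet_length_mm: float) -> bool:
    """Flag mitral annular dilatation using the criteria quoted in the text:
    aorto-mural annulus diameter > 35 mm, OR
    aorto-mural annulus / anterior leaflet ratio > 1.3.
    Names and units (mm) are illustrative assumptions."""
    ratio = aorto_mural_diameter_mm / anterior_leaflet_length_mm
    return aorto_mural_diameter_mm > 35.0 or ratio > 1.3

# Example: a 37 mm annulus with a 30 mm anterior leaflet is flagged as dilated.
print(annular_dilatation(37.0, 30.0))  # True
```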
Additionally, particular left ventricle geometry and function parameters were evaluated: end-diastolic and end-systolic left ventricle diameters in parasternal long-axis; left ventricle sphericity index in apical four-chamber view; major left ventricle dimension in three-chamber view; distance between heads of left ventricular papillary muscles in end-diastolic phase; left ventricle ejection fraction; stroke volume; and mitral annular plane systolic excursion. Other parameters include left atrium diameter in parasternal long-axis view; left atrium indexed volume; right atrium area and indexed volume; right ventricle linear dimension (maximal transversal dimension in the basal one third of right ventricle inflow at end-diastole); systolic pulmonary artery pressure; and grade of tricuspid regurgitation.
Statistical analysis
Standard descriptive statistical methods were used. Data are presented as mean values with their corresponding standard deviations. The normality of the data was assessed with the Shapiro-Wilk test. Quantitative variables were compared with the unpaired two-sample t-test (for normally distributed data) or the Mann-Whitney U test (for non-normally distributed data). The analysis of variance (ANOVA) or the non-parametric Kruskal-Wallis test was used to compare values between multiple groups, with detailed comparisons performed using Tukey's post hoc analyses. Qualitative variables were compared using the χ2 (chi-squared) test of proportions with Bonferroni correction to account for multiple comparisons. Statistical analyses were performed with Statistica v13 (StatSoft Inc., Tulsa, OK, USA). Statistical significance was set at a p-value lower than 0.05.
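A rough sketch of the two-group comparison logic described above (normality check, then a parametric or non-parametric test) is shown below. The analysis itself was run in Statistica, so this Python version with scipy, hypothetical group arrays, and a simple alpha threshold is illustrative only.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a: np.ndarray, b: np.ndarray, alpha: float = 0.05):
    """Shapiro-Wilk normality check on each group, then an unpaired
    two-sample t-test (if both look normal) or a Mann-Whitney U test."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        name, res = "t-test", stats.ttest_ind(a, b, equal_var=True)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, res.pvalue

# Illustrative use with made-up vena contracta widths (mm), baseline vs. 12 months.
baseline = np.array([5.9, 6.1, 5.5, 6.4, 5.8])
followup = np.array([3.2, 3.6, 2.9, 3.4, 3.1])
print(compare_two_groups(baseline, followup))
```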
Results
The baseline clinical characteristics of patients are shown in Table I. At baseline, patients presented heart failure symptoms that were mainly (58.1%) in class III of the New York Heart Association (NYHA) Functional Classification (Table II). Moderate MR was diagnosed in 93.5% of patients and severe in 6.5%. Calcifications in the mitral valve annulus were found in 61.3% of patients and in mitral valve leaflets in 22.6% of patients. Annular mitral dilatation was identified in 64.5% of patients. MR was accompanied by moderate and severe TR in 41.9% and 9.7% of patients, respectively.
One patient died 2 months after TAVI and therefore the follow-up group at 6 and 12 months consisted of 30 patients. Table II presents clinical and echocardiographic data at baseline and during follow-up. A significant decrease of heart failure symptoms assessed with NYHA class was observed both 6 and 12 months after TAVI as compared to baseline (Table II; p < 0.05). Several critical MR parameters showed significant improvement after the procedure, which included a decrease in VC (baseline: 5.9 ±1.5 mm vs. 6 months: 3.8 ±1.6 mm vs. 12 months: 3.2 ±1.4 mm, Table II, p < 0.001).
All patients in the studied group achieved a significant increase of AVA (p = 0.000006, p = 0.00002) and a decrease of maximal (p = 0.000003, p = 0.000008) and mean (p = 0.000003, p = 0.000008) transvalvular aortic gradient. After TAVI, decrease of MR, especially VC width (p = 0.00002, p = 0.00004) and aorto-mural mitral annulus diameter (p = 0.00008, p = 0.02), was achieved. Additionally, an increase of mitral annular plane systolic excursion (p = 0.0004, p = 0.0003), left ventricular stroke volume (LVSV) (p = 0.0003, p = 0.0004) and LVEF (p = 0.0004, p = 0.01) and a decrease of major dimension of the left ventricle in three-chamber view (p = 0.05, p = 0.002) and left ventricular end-systolic diameter (LVESD) (p = 0.004, p = 0.02) were observed in patients at both time points. We observed a decrease of distance between the head of the papillary muscles (p = 0.003) at 6 months and a decrease of left atrium volume index (LAVi) (p = 0.01) and systolic pulmonary artery pressure (sPAP) (p = 0.01) at 12 months. Detailed TTE data are listed in Table II.
Discussion
Mitral regurgitation in patients with severe AS referred for TAVI may have a functional, organic, or complex etiology; the latter is most frequent in elderly patients. Degeneration and calcification of the valve are frequent and usually affect the leaflets, tendinous cords and annulus [15]. In the studied group, calcifications of the mitral annulus dominated (61.3%). On the other hand, coronary heart disease is common in patients after TAVI, including previous myocardial infarction, which may cause left ventricular remodeling and ischemic MR. Additionally, severe AS with elevated end-systolic LV pressure may aggravate subendocardial ischemia with deterioration of left ventricular function and cause changes in geometry, which may also be a component of non-ischemic functional MR.
Both mentioned components of functional MR led to mitral apparatus geometry deformation, including leaflet restriction, enlargement of the mitral ring, and increase in distance between papillary muscles. Considering the complexity of the MR pathomechanism in patients with AS and improvement of left ventricular function after TAVI, a reduction in the significance of MR after the procedure should be expected [16].
There has been much discussion regarding the reduction of MR after isolated aortic valve replacement. Following surgical aortic valve replacement, improvement in MR was reported in 27-82% of patients [17]. Webb et al. reported an improvement in MR in 24 of the 50 patients (48%) with moderate to severe MR following implantation of the Edwards Sapien valve [18], and an improvement was also described in another study [19]. Functional MR, coronary artery disease and absence of atrial fibrillation have been identified as predictors of reduction of MR after TAVI [20]. Our study shows that TAVI improves the grade of MR (Figure 1) at 6 and 12 months, which could be a result of improvement in the geometry of the mitral complex and left ventricular function. Only 9 (30%) patients did not achieve a reduction of MR grade at any of the analyzed time points. A significant reduction of the mitral annulus diameter and of the distance between the heads of the papillary muscles was observed. The detailed study of Tayyareci et al. also showed the importance of mitral annulus diameter and area reduction (2D and 3D analysis) in the group with MR improvement. Moreover, they observed no effect of TAVI on MR in the group with restriction of the mitral leaflets, which is a consequence of mitral complex geometry changes such as a greater distance between the papillary muscles [16]. In our study, a significant decrease of sPAP was observed, although it may be the result of aortic valve implantation as well as of a decrease in the significance of MR. This was reflected in a significant improvement in exercise capacity (NYHA class). As coronary artery disease was present in two-thirds of patients and one-third of patients had a history of myocardial infarction, the functional component of the MR etiology seems to be the most frequent in the studied group. A significant reduction in left ventricular diameter and an improvement in left ventricular systolic function were observed, reflected in an increase in LVEF and mitral annular plane systolic excursion (MAPSE). Interestingly, the reduction in the end-diastolic dimension reached statistical significance only 1 year after the procedure, in contrast to the end-systolic dimension and ejection fraction. The relatively immediate increase in ejection fraction, observed as early as 6 months after TAVI, may be the result of a significant decrease in afterload and transvalvular gradients after TAVI valve implantation. In contrast, the process of reversing negative remodeling is long-lasting, and hence the reduction in end-diastolic volume was confirmed only at the annual follow-up. The observed increase in LVSV may be caused by both an increase in LVEF and a reduction in the significance of MR.
The present study has two major limitations: its single-center character and the relatively small sample size. To overcome these disadvantages, further large, multicenter clinical studies with extended follow-up are required to confirm the findings of our study.
Conclusions
Patients with severe AS and moderate or severe MR undergoing TAVI achieved a significant reduction of symptoms and improvement of mitral valve complex function resulting in the reduction of MR degree. Furthermore, a significant improvement in the left ventricle geometry and function was achieved.
Embryonic Caffeine Exposure Acts via A1 Adenosine Receptors to Alter Adult Cardiac Function and DNA Methylation in Mice
Evidence indicates that disruption of normal prenatal development influences an individual's risk of developing obesity and cardiovascular disease as an adult. Thus, understanding how in utero exposure to chemical agents leads to increased susceptibility to adult diseases is a critical health related issue. Our aim was to determine whether adenosine A1 receptors (A1ARs) mediate the long-term effects of in utero caffeine exposure on cardiac function and whether these long-term effects are the result of changes in DNA methylation patterns in adult hearts. Pregnant A1AR knockout mice were treated with caffeine (20 mg/kg) or vehicle (0.09% NaCl) i.p. at embryonic day 8.5. This caffeine treatment results in serum levels equivalent to the consumption of 2–4 cups of coffee in humans. After dams gave birth, offspring were examined at 8–10 weeks of age. A1AR+/+ offspring treated in utero with caffeine were 10% heavier than vehicle controls. Using echocardiography, we observed altered cardiac function and morphology in adult mice exposed to caffeine in utero. Caffeine treatment decreased cardiac output by 11% and increased left ventricular wall thickness by 29% during diastole. Using DNA methylation arrays, we identified altered DNA methylation patterns in A1AR+/+ caffeine treated hearts, including 7719 differentially methylated regions (DMRs) within the genome and an overall decrease in DNA methylation of 26%. Analysis of genes associated with DMRs revealed that many are associated with cardiac hypertrophy. These data demonstrate that A1ARs mediate in utero caffeine effects on cardiac function and growth and that caffeine exposure leads to changes in DNA methylation.
Introduction
Increasing evidence indicates that alteration of normal prenatal development influences an individual's lifetime risk of developing obesity and cardiovascular disease [1][2][3][4][5][6]. Thus, understanding how in utero exposure to chemical agents leads to increased susceptibility to adult diseases is an important issue.
One substance that fetuses are frequently exposed to is caffeine, a non-selective adenosine receptor antagonist. Caffeine consumption during the first month of pregnancy is reported by 60% of women, and 16% of pregnant mothers report consuming 150 mg or more per day [7]. Caffeine exerts many cellular effects, including influences on intracellular calcium levels and inhibition of phosphodiesterase; however, at serum concentrations observed with typical human consumption, the major effects of caffeine are due to a blockade of adenosine action at the level of adenosine receptors through competitive inhibition [8].
Adenosine levels increase dramatically under physiologically stressful conditions that include hypoxia, tissue ischemia, and inflammation [9][10][11]. Adenosine acts via cell surface G-protein coupled receptors, including A1, A2a, A2b, and A3 adenosine receptors [12]. Of these adenosine receptors, A1 adenosine receptors (A1ARs) have the highest affinity for adenosine and are the earliest expressed adenosine receptor subtype in the developing embryo [12,13]. Showing how adenosine plays an important role in development, recent data indicate that a single dose of caffeine given to pregnant mice leads to reduced embryonic heart size and impaired cardiac function in adulthood [14]; however, the mechanisms by which these effects occur are not known.
At present, our understanding of the long-term effects of in utero caffeine exposure remains modest. In animal models, embryonic caffeine exposure leads to teratogenic effects, including ventricular septal defects and intrauterine growth retardation (IUGR) [14][15][16][17][18][19]. Other studies show that caffeine exposure as early as embryonic day 10.5 (E10.5) in mice can cause reduced embryonic cardiac tissue [14]. In addition, caffeine can induce defects in angiogenesis in zebrafish (Danio rerio) embryos [14,20]. In mice and zebrafish, caffeine affects embryonic cardiac function by increasing heart rates [21,22]. Caffeine exposure increases expression of the cardiac structural gene myosin heavy chain alpha (Myh6) in fetal rat hearts [23]. In humans, there is little evidence that fetal caffeine exposure leads to morphological defects, but prenatal caffeine exposure is associated with an increased risk of spontaneous abortions and reduced birth weight [24][25][26][27][28][29][30][31]. Studies examining the long-term consequences of in utero caffeine exposure in humans have not been performed.
One recognized mechanism for transmitting in utero stress into an increased risk of adult disease involves epigenetic changes that include altered DNA methylation, post-translational modifications of histone tails, and miRNA regulation [32,33]. Changes in DNA methylation patterns occurring normally during early embryogenesis can be influenced by nutritional and environmental factors resulting in long lasting effects in adulthood [34][35][36].
To provide further insights into adenosine and caffeine action in the embryo, we assessed the role of A1ARs in transducing the embryonic effects of caffeine in the mouse model. We also assessed the effects of caffeine on epigenetic modifications, specifically DNA methylation patterns, and finally we examined caffeine's long-term effects on the heart.
Ethics Statement
All animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) at Yale University. All animal research was conducted at Yale University and concluded before the corresponding author moved to the University of Florida College of Medicine.
Animals
Adenosine A1 receptor (A1AR) deficient mice were provided by Dr. Bertil Fredholm at the Karolinska Institutet in Stockholm, Sweden and were characterized [37]. These mice are on a mixed background (129/OlaHsd/C57BL) and breed normally with expected Mendelian frequency.
Timed matings were performed with A1AR+/- males and A1AR+/- females, and the day a vaginal plug was observed was designated as embryonic day 0.5 (E0.5). Pregnant dams were randomized into two groups and injected intraperitoneally (i.p.) at E8.5. This stage is a critical time during cardiac development when the heart has begun to function, the heart valves are forming, and the heart is beginning to loop in order to bring the different chambers of the heart into proper alignment [38,39]. In addition, treatment of pregnant dams at E8.5 was chosen because it falls within the embryonic development window (E6.5-10.5) when genomic DNA is being re-methylated [40]; thus E8.5 is a stage sensitive to DNA methylation disruption. Group 1 was injected with vehicle (0.9% NaCl), and group 2 was injected with 20 mg/kg of caffeine (Sigma-Aldrich, St. Louis, MO, USA) dissolved in vehicle. This caffeine treatment results in circulating blood levels equivalent to the consumption of 2-4 cups of coffee in humans and 65% A1AR occupancy [8,14]. Analysis was performed on male offspring divided into six groups based on treatment and genotype: 1) vehicle/A1AR+/+ (veh+/+), 2) vehicle/A1AR+/- (veh+/-), 3) vehicle/A1AR-/- (veh-/-), 4) caffeine/A1AR+/+ (caff+/+), 5) caffeine/A1AR+/- (caff+/-), and 6) caffeine/A1AR-/- (caff-/-). Adult offspring for each group were obtained from at least 4 different dams. Adult offspring were used for nuclear magnetic resonance (NMR), weights, and echocardiography. The male offspring used in these experiments came from 11 dams treated with vehicle, including 8 A1AR+/+, 24 A1AR+/-, and 8 A1AR-/- mice, and 9 dams treated with caffeine, including 8 A1AR+/+, 17 A1AR+/-, and 11 A1AR-/- mice. Of these mice, some died before NMR and echocardiography; the Ns for each experiment are provided in the figure legends. In addition, 3-4 hearts from each group were used for histology and 3-4 hearts per group were used for RNA and DNA isolation. After birth, mice were weighed weekly from 2-8 weeks of age. Mice were euthanized by CO2 inhalation followed by cervical dislocation.
Nuclear magnetic resonance
Between 8-10 weeks of age, the vehicle- and caffeine-treated male mice were evaluated by NMR at the Yale Metabolic Phenotyping Center, as described [14,41]. Animals were placed in a restraint cylinder for body composition analysis using the Minispec Benchtop NMR (Bruker Optics, Billerica, MA, USA). NMR analysis was used to assess absolute fat, lean mass, and free body fluid content, based on total body weight. Data were used to determine percent body fat (fat mass/total body mass × 100).
Echocardiography
Cardiac function of male offspring was assessed using echocardiography between 8-10 weeks of life, as described [14,42]. Offspring were anesthetized with a continuous flow of isoflurane administered via nosecone and anesthesia levels were regulated to maintain heart rates between 400 and 500 beats per minute. Transthoracic 2D M-mode echocardiography was performed using a 30-MHz probe (Vevo 770; Visualsonics, Toronto, ON, Canada) [14]. Echocardiography and analysis of results were performed blinded.
The hemodynamic effects of caffeine treatment were assessed in dams as described [43]. Briefly, baseline data on heart rate and cardiac function were obtained as described above. Animals were allowed to recover for 30 minutes followed by treatment with either 0.9% NaCl (vehicle) or 20 mg/kg of caffeine. Echocardiography was then performed 30 minutes after treatment.
Adult cardiac histology
Adult hearts of male offspring were fixed by perfusion of hearts with 4% paraformaldehyde solution (PFA; Electron Microscopy Sciences, Hatfield, PA, USA) containing 150 mM KCl and 5 mM EDTA. Hearts were embedded in paraffin, sectioned, mounted on slides, and analyzed as described [14].
DNA methylation array analysis
Methylated DNA immunoprecipitation (MeDIP) and NimbleGen DNA methylation microarrays were used to assess changes in DNA methylation patterns between caffeine- and vehicle-treated adult male mice. The groups studied included caff+/+, caff-/-, veh+/+, and veh-/- mice. Two samples from each treatment group were used to generate the DNA methylation array data, which were then used for all subsequent pathway analyses. The use of 1 to 2 samples per group is common for this type of analysis, so our use of two samples is consistent with previously published reports [44,45]. DNA methylation array data were analyzed using methods developed by Palmke et al. 2011 [44] and by Tobias Straub (http://www.protocol-online.org/cgi-bin/prot/view_cache.cgi?ID=3973). Data were normalized within (Lowess-based) and between (quantile-based) arrays, and the probe-level log2 ratio of Cy5/Cy3 (M-value) was used as a measure of MeDIP enrichment [46]. All DNA methylation array data were uploaded to the Gene Expression Omnibus (GEO) and can be accessed via http://www.ncbi.nlm.nih.gov/geo/; accession number GSE43030. The chromosomal distribution of DMR regions was analyzed using Enrichment on Chromosome and Annotation (CEAS) in Galaxy/Cistrome [47]. Venn diagrams were constructed with Galaxy/Cistrome [47].
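To make the probe-level enrichment measure concrete, the sketch below computes M-values as log2(Cy5/Cy3) and applies a simple between-array quantile normalization; the within-array Lowess step used in the actual analysis is omitted for brevity, and the intensity values are invented for illustration.

```python
import numpy as np

def m_values(cy5: np.ndarray, cy3: np.ndarray) -> np.ndarray:
    """Probe-level MeDIP enrichment: M = log2(Cy5 / Cy3)."""
    return np.log2(cy5 / cy3)

def quantile_normalize(arrays: np.ndarray) -> np.ndarray:
    """Simple between-array quantile normalization.
    `arrays` has shape (n_arrays, n_probes); each row is mapped onto
    the mean distribution of the sorted values across arrays."""
    ranks = np.argsort(np.argsort(arrays, axis=1), axis=1)
    mean_dist = np.sort(arrays, axis=1).mean(axis=0)
    return mean_dist[ranks]

# Toy example: two arrays, four probes (values are illustrative only).
cy5 = np.array([[200.0, 150.0, 400.0, 90.0],
                [210.0, 160.0, 380.0, 85.0]])
cy3 = np.array([[100.0, 160.0, 120.0, 95.0],
                [105.0, 150.0, 110.0, 100.0]])
M = quantile_normalize(m_values(cy5, cy3))
print(M)
```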
Pathway analysis of the genes associated with the DMRs was conducted using MetaCore Enrichment Analysis (Version 6.11, build 41105; GeneGo, Carlsbad, CA, USA) and Ingenuity Pathway Analysis (Ingenuity Systems, Redwood City, CA, USA). The lists of differentially methylated genes and miRNAs were uploaded separately into the applications. MetaCore enrichment ontologies used included Pathway Maps, Map Folders, Process Networks, Diseases (by Biomarkers), and Disease Biomarker Networks. Ingenuity ontologies analyzed included IPA Core Analysis and IPA-Tox.
Bisulfite sequencing (BS-seq) was used to analyze the DMRs within the gene promoters and was performed as described [48]. Bisulfite-specific primers were designed with Methyl Primer Express v1.0 (Applied Biosystems, Carlsbad, CA, USA). Sequence data were analyzed with DNAstar (SeqMan, Madison, WI, USA). The CpG methylation percentage was calculated as (total number of methylated CpGs)/(number of CpG sites in each gene × number of colonies sequenced).
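The CpG methylation percentage formula above can be written compactly as a small helper; the function name and example counts below are hypothetical and serve only to illustrate the calculation.

```python
def cpg_methylation_percent(methylated_cpg_calls: int,
                            cpg_sites_in_amplicon: int,
                            colonies_sequenced: int) -> float:
    """Percent methylation = methylated CpG calls /
    (CpG sites in the amplicon x colonies sequenced) x 100."""
    total_assayed = cpg_sites_in_amplicon * colonies_sequenced
    return 100.0 * methylated_cpg_calls / total_assayed

# Illustrative: 45 methylated calls over 12 CpG sites in 10 sequenced colonies.
print(cpg_methylation_percent(45, 12, 10))  # 37.5
```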
Real-time PCR analysis
Total RNA from the left ventricles of adult male offspring was extracted with the RNeasy Plus Mini Kit (Qiagen), according to the manufacturer's protocol. cDNA was synthesized using the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA, USA). Primers were as follows: Mef2c forward: GATGCCATCAGTGAATCAAAGG; Mef2c reverse: GTTGAAATGGCTGATGGATATCC; Tnnt2 forward: CTGAGACAGAGGAGGCCAAC; Tnnt2 reverse: TTCTCGAAGTGAGCCTCCAT. For Myh6 and Myh7, primers were designed and synthesized by SABiosciences (Qiagen). For β-actin, primers were designed and synthesized by RealTimePrimers.com (Elkins Park, PA, USA); β-actin forward: AAGAGCTATGAGCTGCCTGA, β-actin reverse: TACGGATGTCAACGTCACAC. The relative abundance of target gene transcripts to β-actin transcripts in the cDNA libraries was determined with SYBR Green (Applied Biosystems) in a GeneAmp 7300 Real-Time PCR System (Applied Biosystems). Each sample was measured in three separate reactions on the same plate. This assay was repeated three times. Amplification efficiencies of the target gene and β-actin primer pairs were tested to ensure that they were not statistically different. Differences in expression between the treatment groups were calculated with the 2^-ΔΔCT method. Statistical differences between treatments were determined on the linearized 2^-ΔCT values.
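A minimal sketch of the 2^-ΔΔCT calculation is given below, assuming mean Ct values for a target gene and the β-actin reference in treated and control samples; it does not reproduce the replicate structure or efficiency checks of the actual assay, and the Ct values are invented.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return float(2.0 ** (-dd_ct))

# Illustrative Ct values: the treated sample shows ~2-fold higher target expression.
print(fold_change_ddct(24.0, 18.0, 25.0, 18.0))  # 2.0
```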
Caffeine assay
Caffeine levels were measured in the serum of dams by ELISA assay (Neogen, Lexington, KY, USA), as described [14]. A1AR KO female mice were treated i.p. with 20 mg/kg of caffeine and blood serum was collected 2 hours later.
Statistical Analysis
Data are presented as means ± the standard error of the mean (SEM). Analysis was performed with the statistics software package included with Microsoft Excel (Microsoft, Redmond, WA, USA) and GraphPad Prism 6.0 (GraphPad Software Inc., La Jolla, CA, USA). Statistical comparisons between groups were performed with Student's t-test assuming equal variance or with one-way or two-way ANOVA with Bonferroni's post-test comparison. P ≤ 0.05 was considered to be statistically significant.
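As an illustrative sketch of a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons (the study itself used Excel and GraphPad Prism, not Python), one could proceed as follows; the group labels and body-weight values are invented.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def anova_with_bonferroni(groups: dict):
    """One-way ANOVA across all groups, then pairwise unpaired t-tests
    with Bonferroni-adjusted p-values."""
    _, p_anova = stats.f_oneway(*groups.values())
    pairs = list(combinations(groups, 2))
    pairwise = {}
    for g1, g2 in pairs:
        p = stats.ttest_ind(groups[g1], groups[g2]).pvalue
        pairwise[(g1, g2)] = min(p * len(pairs), 1.0)  # Bonferroni correction
    return p_anova, pairwise

# Illustrative body weights (g) for three genotype/treatment groups.
groups = {
    "veh+/+":  np.array([24.1, 23.8, 25.0, 24.4]),
    "caff+/+": np.array([26.8, 27.1, 26.2, 27.5]),
    "caff-/-": np.array([24.3, 23.9, 24.8, 24.6]),
}
print(anova_with_bonferroni(groups))
```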
Results
Caffeine treatment has no effect on maternal cardiac function

Female A1AR+/+ mice treated with a caffeine dose of 20 mg/kg were analyzed for serum caffeine levels and cardiac function. This dose of caffeine results in a circulating serum caffeine level of 37.5 ± 1.5 µM (N = 3), similar to that observed in C57Bl/6 mice [14]. To test whether caffeine treatment alters the hemodynamics of treated dams, we measured heart rates in beats per minute (bpm) and cardiac outputs in milliliters per minute (ml/min) of adult mice before and 30 minutes after treatment with either vehicle or 20 mg/kg of caffeine. There was no significant difference with caffeine treatment in heart rate from baseline (447 ± 6 bpm) to 30 min after caffeine treatment (462 ± 12.5 bpm, N = 4), or in cardiac output from baseline (14.8 ± 1.6 ml/min) to 30 min after caffeine treatment (15.0 ± 0.6 ml/min, N = 4). As a control, dams were treated with vehicle (0.9% NaCl), and no significant differences were observed in heart rate between baseline (442 ± 18 bpm) and 30 min after vehicle (478 ± 8 bpm, N = 4), or in cardiac output from baseline (14.7 ± 0.6 ml/min) to 30 min after vehicle (15.7 ± 1.7 ml/min, N = 4). In addition, no significant differences were observed between either baseline or peak heart rate or cardiac output when comparing vehicle- to caffeine-treated mice.
In utero caffeine treatment leads to higher body weight in adult male mice

Pregnant dams were treated with one 20 mg/kg dose of caffeine or vehicle (0.9% NaCl) at E8.5. Male offspring were weighed weekly until 8 weeks of age. Beginning at 3 weeks of age, the caff+/+ mice were heavier than the veh+/+ controls (Fig. 1). The increase in body weight persisted throughout the study; caff+/+ mice weighed on average 2.47 grams more than veh+/+ controls. Because the absolute difference in body weight between the two groups was constant throughout the study, the percent increase in body weight peaked at 3 weeks, with caff+/+ mice weighing 23.9% more than veh+/+ controls, and by 8 weeks of age the caff+/+ mice were 10% heavier than veh+/+ mice (Fig. 1). Only A1AR+/+ mice treated with caffeine were significantly heavier in adulthood compared to A1AR+/+ controls. Comparisons of body weights between veh+/- (N = 20) and caff+/- (N = 15) or between veh-/- (N = 6) and caff-/- (N = 10) revealed no significant differences.
In addition to assessing body weight, we analyzed body fat in adult offspring by NMR. Even though there was a significant difference in body weight between the caffeine-treated group and the vehicle-treated group at the time the NMRs were performed (two-way ANOVA, P ≤ 0.04), no differences in body fat content were detected among the different groups. The average percent body fat values for the different treatment groups were veh+/+ 6.81 ± 1.1% (N = 8), veh+/- 6.31 ± 0.5% (N = 20), veh-/- 6.7 ± 0.9% (N = 6), caff+/+ 6.96 ± 0.8% (N = 8), caff+/- 6.22 ± 0.8% (N = 15), and caff-/- 7.25 ± 0.8% (N = 10). There was no difference in percent muscle weight among the caffeine- and vehicle-treated groups (data not shown).

In utero caffeine treatment causes a thickening of the left ventricular walls and altered cardiac function in adult hearts

Cardiac function, wall thickness, and chamber size were measured by echocardiography in the adult male offspring of pregnant dams treated with caffeine or vehicle at E8.5. The groups examined included caffeine- or vehicle-treated mice of three genotypes (A1AR+/+, A1AR+/-, and A1AR-/-). Of these groups, only the caff+/+ vs. veh+/+ comparison revealed significant differences in cardiac function and morphology. Caffeine treatment led to changes in adult cardiac morphology in the caff+/+ mice, including a 24% increase in left ventricle (LV) mass compared to veh+/+ (Fig. 2A). In addition, caffeine caused an increase in the thickness of both the left ventricular posterior wall (LVPW) and the interventricular septum (IVS; Fig. 2). The LVPW thickness was increased by 28.6% during diastole and 23.3% during systole, whereas the IVS thickness was increased by 24.5% during diastole and 14.3% during systole (Fig. 2).
The increased left ventricular wall thickness was associated with a decrease in the left ventricular internal diameter (LVID) of 13.2% at diastole and 28.9% at systole (Fig. 3). The reduced LVID was associated with reduced left ventricle volume, which led to a 12.5% decrease in LV stroke volume (Fig. 3). The percent fractional shortening (%FS) was increased in caffeine-treated hearts (Fig. 3), and cardiac output (CO) was reduced by 11.4% in caff+/+ (N = 6) mice compared to veh+/+ (N = 6) treated mice (P ≤ 0.02, Student's t-test). Although we observed differences in cardiac output for the caff+/+ group, overt heart failure was not observed.
Histological examination did not reveal differences in heart muscle structure among any of the groups, but the caff+/+ group displayed thicker left ventricular walls compared to veh+/+ controls (Fig. 4). Trichrome staining indicated that there were no differences in connective tissue deposition or any evidence of scarring in adult hearts from any of the treatment groups (Fig. 4).
In utero caffeine exposure alters the DNA methylation pattern in adult hearts

NimbleGen DNA methylation microarrays were used to investigate DNA methylation patterns in adult left ventricles. The Mouse DNA Methylation 2.1 M Deluxe Promoter Array was chosen because it interrogates DNA methylation in 599 miRNA promoters, including 15 kb upstream of the transcriptional start site (TSS), in 15,969 gene promoter regions ranging from 8,000 bp upstream to 3,000 bp downstream of the TSS, and in 24,507 known CpG islands in the genomic DNA. For the DNA methylation analysis, four groups were studied: veh+/+, veh-/-, caff+/+, and caff-/-. This analysis identified changes in DNA methylation patterns that were caffeine- and A1AR-dependent (veh+/+ vs. caff+/+), caffeine-dependent and A1AR-independent (veh-/- vs. caff-/-), and A1AR-dependent and caffeine-independent (veh+/+ vs. veh-/-).
Analysis of the different groups revealed that the veh+/+ vs. caff+/+ comparison had the greatest number of differentially methylated regions (DMRs) within the genomic DNA, including both hypermethylated regions (4896) and hypomethylated regions (2823; Fig. 5A). Caffeine altered the DNA methylation pattern in the absence of A1AR expression (A1AR-/- mice) to a lesser degree than in A1AR+/+ mice. For example, the comparison of veh-/- vs. caff-/- only had 1024 hypermethylated regions and 1757 hypomethylated regions (Fig. 5A). In addition, the loss of A1AR expression alone altered DNA methylation patterns (Fig. 5A). Analysis of the differentially methylated regions revealed where in the genome the DMRs (either hypermethylated or hypomethylated) were located, including promoter regions, primary transcripts, known CpG islands and miRNA promoter regions (Fig. 5B, C).
A Venn diagram demonstrating the number of overlapping DMRs from the different comparison groups indicated that the majority of DMRs are specific to each of the comparison groups (Fig. 6). The majority of DMRs in the veh+/+ vs. caff+/+ group were located in promoter regions less than 3,000 bp from the transcriptional start site (TSS) and in introns (Fig. 6). Further analysis of veh+/+ vs. caff+/+ identified the percentage of the total number of DMRs located on each chromosome (Fig. 6). The highest percentages of DMRs were located on chromosomes 2, 7 and 11, while the lowest percentage of DMRs was found on chromosome Y (Fig. 6).
Caffeine treatment alters DNA methylation of genes associated with cardiac hypertrophy
Because the veh+/+ vs. caff+/+ comparison revealed phenotypic differences in cardiac function and the highest degree of DNA methylation differences, all 7719 DMRs identified from this comparison, both hyper- and hypomethylated regions, were used for gene pathway analysis. Genes associated with these DMRs were examined with the functional ontology enrichment tool from MetaCore. MetaCore uses different manually created groupings of genes from different databases, including common cellular processes, networks, biological function, and disease, which are referred to as ontologies. These ontologies are used to identify gene pathways that contain genes associated with differentially methylated regions of the genome. The first ontology examined was "Pathway Maps," which groups genes into cellular processes, protein functions, and diseases. The top 20 most significantly enriched pathways within the Pathway Maps ontology included cytoskeleton remodeling, G-protein signaling, and NF-AT signaling in cardiac hypertrophy (Fig. 7, Table 1). Next, the Map Folders ontology, which is a higher-order analysis of the Pathway Maps ontology, was applied; it groups genes from the Pathway Maps database according to main biological processes. The top pathways identified by the Map Folders ontology included cell differentiation, cardiac hypertrophy, and vascular development (Fig. 7, Table 1). The Process Networks ontology uses data from Pathway Maps, GO processes, and network models of main cellular processes to identify significant gene pathways. The top 20 pathways identified by the Process Networks ontology included those involved with cytoskeleton regulation, cardiac development - BMP/TGF-beta signaling, and cardiac development - FGF/ErbB signaling (Fig. 7, Table 2).

Figure 7. Significantly enriched cardiovascular-related pathways. Gene set enrichment analysis was performed with the differentially methylated genes between A1AR+/+ mice treated with or without caffeine, using MetaCore Enrichment Analysis and the Diseases (by Biomarkers), Map Folders, Pathway Maps, and Process Networks ontologies. Bars represent the percentage of genes with altered methylation (in black) within a pathway, with the numbers of altered genes and total genes listed next to the bars; dots indicate the negative log10 of the P-values, and the dotted line at 1.3 (-log(0.05)) marks the significance threshold. N = 2. doi:10.1371/journal.pone.0087547.g007
The fourth ontology examined, Diseases (by Biomarkers), uses biomarkers to group genes into disease pathways. The Diseases (by Biomarkers) ontology identified heart diseases, cardiomyopathy, myocardial ischemia, myocardial infarction, and body weight changes as some of the most significant pathways (Figs. 7-8, Table 2). There were many significantly enriched pathways related to the cardiovascular system, including those related to cardiac disease, cardiac hypertrophy, and cardiac development, and all of these significant pathways are displayed together with their P-values and the percentage of genes affected in each pathway (Fig. 7). Some pathways associated with growth were identified, but only four of these pathways had significant P-values (Fig. 8).
Further pathway analysis was performed by importing the genes associated with DMRs into the Ingenuity Pathway Analysis (IPA) software. This analysis identified cardiac-specific pathways in several larger ontologies, including Diseases and Disorders, Physiological Development, Top Toxicity Lists, and Cardiotoxicity (Table 3). Many cardiac pathways were identified with the Ingenuity database, including Cardiovascular Disease, Organ Development, Cardiac Hypertrophy, and Cardiac Output (Table 3). Many of the pathways found with Ingenuity were similar to those identified with the MetaCore software (Table 1, Fig. 7).
Of the top 20 most significant cardiovascular pathways identified from the different ontologies in MetaCore, five were related to cardiac hypertrophy and four were related to cardiac development (Fig. 7). Further analysis of the genes affected within the hypertrophic cardiomyopathy ontology (Fig. 7) identified many structural genes important for proper cardiac function, including troponin I (Tnni3), troponin T (Tnnt2), troponin C (Tnnc1), and alpha-actin C1 (Actc1) (Table 4). In addition, 9 of the 40 genes associated with DMRs in the hypertrophic cardiomyopathy ontology are myosin heavy chain genes, including 2 that are critical for heart function and development, myosin heavy peptide 6 alpha (Myh6) and myosin heavy peptide 7 beta (Myh7; Table 4). Analysis of a more specific ontology, cardiac hypertrophy - NF-AT signaling, revealed similar genes to the general hypertrophic cardiomyopathy ontology, such as Myh6, Myh7, Tnnt2, Tnni3, and Actc1 (Fig. 7; Table 5). The NF-AT ontology identified transcription factors associated with DMRs, including GATA binding protein 4 (Gata4), myocyte enhancer factor 2c (Mef2c), and the transcriptional co-activator calmodulin binding transcription activator 2 (Camta2; Table 5). In addition, 9 out of the 28 affected genes in this pathway are guanine nucleotide binding proteins (G-proteins; Table 5).

Figure 8. Gene set enrichment analysis was performed with the differentially methylated genes between A1AR+/+ mice treated with or without caffeine, using MetaCore Enrichment Analysis and the Diseases (by Biomarkers), Map Folders, Pathway Maps, and Process Networks ontologies; bars show the percentage of genes with altered methylation within each pathway, dots the negative log10 of the P-values, and the dotted line at 1.3 (-log(0.05)) the significance threshold. N = 2. doi:10.1371/journal.pone.0087547.g008

Table 3. Significantly enriched miRNA pathways from Ingenuity Pathway Analysis following in utero caffeine exposure (N = 2).
To elaborate on the DNA methylation array results, we selected several DMRs at different gene loci for bisulfite sequencing, including loci related to cardiac hypertrophy (Mef2c, Tnnt2, Myh6, Myh7, and Gata4) and to body weight (Ins2). Of the six genes examined by BS-seq, three matched the DNA methylation array results: Mef2c and Ins2, which were both hypermethylated, and Myh6, which was hypomethylated in caff+/+ (Table 6). Two genes, Gata4 and Myh7, showed no difference by BS-seq in caff+/+ even though they were both hypermethylated in the DNA methylation array results (Table 6). One gene, Tnnt2, was hypomethylated in the caff+/+ group in the array but hypermethylated by BS-seq (Table 6).
To determine whether the DNA methylation changes observed between veh+/+ and caff+/+ affect gene expression, we performed quantitative real-time PCR. We examined the expression of Mef2c, Tnnt2, Myh6, and Myh7 in adult left ventricles. Hypermethylation of DNA is generally associated with a decrease in gene expression; this was observed for Mef2c but not for Myh7, indicating that the DNA methylation changes seen for Myh7 do not affect its expression (Fig. 9). Hypomethylation of DNA is generally associated with an increase in gene expression, and this was observed for Myh6 (Fig. 9).
Analysis of DMRs associated with miRNA sites and promoters identified many pathways and genes related to cardiovascular biology. Using the IPA software, several cardiotoxicity pathways were identified, including cardiac inflammation, cardiac dilation, and cardiac infarction (Table 7). Analysis of the miRNA regions identified 103 regions that were significantly differentially methylated (Table 8). Of the miRNA promoter regions with DMRs, two were related to cardiac hypertrophy, namely miR-208b and miR-499 (Table 8) [49,50]. These miRNAs are located within introns of the Myh7 and Myh7b genes: miR-208b is located within intron 31 of the Myh7 gene, which is also differentially methylated, and miR-499 is located in intron 19 of Myh7b. However, unlike Myh7, which was hypermethylated, regions within the promoters of both miR-208b and miR-499 were hypomethylated.
Caffeine induces a decrease in global DNA methylation
The DNA methylation array interrogates the portion of the genome that is associated with promoter regions, CpG islands, and miRNA regions. To assess DNA methylation throughout the whole genome, a methylated DNA quantification assay was performed. The whole-genome DNA methylation level was compared between veh+/+ and caff+/+ or between veh+/- and caff+/- using DNA isolated from adult left ventricles. Caffeine treatment caused a 26% decrease in global DNA methylation in A1AR+/+ hearts, but no change in the level of DNA methylation was detected in the A1AR+/- hearts (Fig. 10). In addition, no significant change in global DNA hydroxymethylation was detected in either A1AR+/+ or A1AR+/- adult hearts following in utero caffeine exposure (Fig. 10).
Discussion
Our previous research demonstrated that A1ARs protect the developing embryo from intra-uterine hypoxia [51,52]. The loss of A1AR expression in embryos leads to increased embryonic death and severe growth retardation under hypoxic conditions [51,52]. Further studies demonstrated that hypoxia and/or caffeine treatment, which inhibits A1AR signaling, during embryogenesis had long-lasting effects into adulthood both on cardiac function and body weight [14]. We now demonstrate the importance of normal adenosine signaling through A1ARs during development, as disruption in A1AR action through caffeine treatment leads to increased body weight, reduced cardiac function, and altered cardiac DNA methylation patterns in adulthood.
Previous studies revealed that in utero caffeine exposure led to increased percent body fat in adult males, but no difference in adult body weight was detected [14]. In the current study, we observed that in utero caffeine treatment increased body weight without a change in the proportion of body fat. A possible reason for these differences from our previous studies may be the strain of mice used. The original examination was performed on C57Bl/6, an inbred strain from Charles River Laboratories; the current study examined the A1AR knockout line, which is on a mixed 129/OlaHsd/C57Bl/6 background. The differences that we observe in metabolic function between the mouse strains may be due to differences in the DNA methylation changes induced by caffeine treatment. Further experiments into the strain differences in weight and body fat will need to be performed, especially in an outbred strain. Importantly, in both strains examined, early embryonic exposure to caffeine induced long-term effects in adult mice.
As with previous studies [14], this study demonstrated altered cardiac function, including decreased cardiac output. Previous reports indicated that in utero caffeine exposure caused a decrease in fractional shortening [14]. In addition, this study identified changes in heart morphology, including increased wall thickness following in utero caffeine treatment. Previous research showed that caffeine treatment affected the size of embryonic hearts [14]; in this study we observed increased ventricular wall thickness in adult hearts following early caffeine exposure. The increased ventricular wall thickness resulted in reduced ventricular volume and reduced cardiac output. The increased wall thickness and left ventricle mass that we observed are consistent with cardiac concentric hypertrophy, which is characterized by an increase in cardiac wall thickness with a reduced chamber volume. These results suggest that in utero caffeine exposure affects cardiac development, which leads to concentric hypertrophy in adulthood to compensate for reduced function. Concentric hypertrophy can eventually be maladaptive when stroke volumes are reduced and diastolic function is compromised.

Altered DNA methylation represents a potential mechanism for translating in utero exposure to caffeine into the phenotypic changes observed in adult mice, including increased body weight and cardiac hypertrophy. The developing embryo and heart are sensitive to factors that can alter DNA methylation at early embryonic stages, including E8.5 [40], the stage at which we treated pregnant dams with caffeine. DNA demethylation and de novo DNA methylation occur actively and passively during early embryonic stages. After fertilization, paternal DNA is rapidly demethylated and maternal DNA is passively demethylated until implantation, when de novo DNA methylation increases between E3.5 and E10.5 in mice [40]. This period is critical for reestablishing DNA methylation patterns; therefore, factors that affect methylation during this time window could have long-lasting effects.
The ability of caffeine to alter DNA methylation and gene expression has been demonstrated for the steroidogenic acute regulatory protein (StAR) gene in adrenocortical cells [53]. A change in StAR expression was attributed to the demethylation of a single CpG site in the StAR promoter following caffeine treatment [53]. Our analysis demonstrated that caffeine causes both a genome-wide decrease in methylation, as well as a large number of hypermethylated regions. The induction of both hyper- and hypo-methylated DNA regions in the genome by caffeine was also observed in cultured rat hippocampal neurons [15]. Although these data may seem contradictory, a global decrease and regional increase in DNA methylation has been observed in other biological systems including cancer [54,55]. In addition, caffeine exposure decreased promoter methylation in L6 rat myotubes that was paralleled by an increase in the respective gene expression [56]. These observations and our data may indicate that changes in DNA methylation associated with caffeine treatment may be the result of more than one pathway. For example, caffeine may affect the activity or expression of both DNA methylation enzymes (DNMTs) and demethylation agents (Tets). The altered activity of these enzymes could then lead to the altered DNA methylation patterns we and others observe. The genes affected by altered DNA methylation may be dependent on the tissue and the timing of treatment. For example, caffeine treatment at E8.5 leads to altered DNA methylation and altered expression of genes associated with cardiac hypertrophy in the adult heart.
Several cardiac hypertrophic pathways were identified with altered DNA methylation patterns following caffeine treatment. The cardiac hypertrophy pathway was identified within multiple databases. Because the cardiac hypertrophy pathways identified by ontology were consistent with the phenotype observed in adult offspring of pregnant dams treated with caffeine, we further analyzed the differentially methylated genes within these pathways. Although we discovered many genes associated with cardiac hypertrophy that displayed altered DNA methylation patterns, not all of these changes will necessarily affect gene expression. To initially analyze the functional effects of altered DNA methylation on gene expression, we performed bisulfite sequencing to identify which DNA methylation sites were changed. Next, we performed real-time PCR to examine the expression level of genes associated with differentially methylated regions.
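As a concrete illustration of the relative-expression step mentioned above, the sketch below applies the standard 2^-ddCt (Livak) calculation that is commonly used to analyse real-time PCR data. The Ct values, the Gapdh reference gene, and the sample labels are hypothetical and are not taken from the study; the study's own quantification settings are not reproduced here.

```python
# Minimal sketch of the 2^-ddCt (Livak) method for relative gene expression
# from real-time PCR. All Ct values and the Gapdh reference gene below are
# hypothetical illustrations, not data from the study.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Return the fold change of the target gene in treated vs control samples."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to the reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # treated relative to control
    return 2 ** (-dd_ct)

# Hypothetical example: Myh7 vs a Gapdh reference in caffeine-exposed vs control hearts
fold_change = relative_expression(ct_target_treated=22.1, ct_ref_treated=18.0,
                                  ct_target_control=24.0, ct_ref_control=18.2)
print(f"Myh7 fold change (hypothetical data): {fold_change:.2f}")
```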
The first set of genes analyzed were part of the ''NF-AT signaling in cardiac hypertrophy'' pathway. Cardiac hypertrophy is mediated by three main transcription factors, Mef2, NF-AT, and Gata4 [57], and all of these genes were linked with changes in DNA methylation in our analysis. We analyzed two of these genes further (Mef2c and Gata4) and demonstrated an increase in … [57].
(Table 7. Significantly enriched miRNA pathways from Ingenuity Pathway Analysis following in utero caffeine exposure. N = 2.)
We also analyzed myosin heavy chain alpha (Myh6) and myosin heavy chain beta (Myh7), as their expression is altered during cardiac hypertrophy [50,57]. During development, Myh6 and Myh7 are expressed differently, with Myh7 as the predominant fetal isoform and Myh6 as the dominant form in adult mice [58]. Myh7 is still expressed in adulthood, but an increase in its expression during adulthood is a common feature of cardiac hypertrophy [59]. Our analysis revealed that both Myh6 and Myh7 were upregulated at the level of gene expression. The increase in Myh6 expression was consistent with a decrease in DNA methylation in its promoter region. However, DNA methylation analysis by both BS-seq and methylation array indicated an increase in DNA methylation in the Myh7 transcript region. This apparent disconnect between DNA methylation status and gene expression level for Myh7 may be explained in part by the fact that the promoter region for miR-208b was hypomethylated. MiR-208b is encoded within an intron of the Myh7 gene, such that an increase in miR-208b expression could also result in an increase in Myh7 expression because they are co-regulated [50]. We observed by real-time PCR an up-regulation of Myh7 gene expression, which is consistent with the cardiac hypertrophy phenotype observed. Future analysis will focus on elucidating the mechanism by which caffeine exposure causes changes in Myh6 and Myh7 methylation, and on identifying the specific methylation sites that are important for regulating their expression.
The changes observed with caffeine treatment were seen only when A1ARs were expressed. Loss of A1ARs in the knockout mice (A1AR-/-) protected the embryos and adults from the effects of in utero caffeine exposure. These findings indicate that A1ARs mediate the effects of caffeine on the developing embryo that lead to long-term changes in cardiac function and body weight, as well as long-term changes in DNA methylation patterns. Although we observed changes in DNA methylation in response to caffeine in the absence of A1AR expression, there were fewer differences, and we did not observe effects on cardiac function or body weight in these treated mice. These data indicate that caffeine has specific effects on DNA methylation by acting through A1ARs, producing a specific phenotype. In addition, these results suggest that embryonic and cardiac development are sensitive to changes in A1AR action.
Some of the limitations of this study include the number of animals examined and the use of i.p. injection as the route of caffeine administration, which is not the normal way of ingesting caffeine in humans. Many parameters, including the route of administration, the number of exposures (including chronic exposure) and peak serum levels, need to be considered before commenting on the risk of caffeine exposure in humans [60]. In our previous study, we did not see differences in adult female percent body fat with in utero caffeine treatment, which is one reason female mice were not examined in this study [14]. Further studies on the effect of in utero caffeine on females will need to be performed in order to overcome this limitation. Caffeine has also been shown to have beneficial effects in neurological and immunological disorders [61][62][63]. Thus, further clinical and animal studies are needed to assess the effectiveness and safety of caffeine treatment during human pregnancy before recommendations can be made.
This report begins to answer an important question: does in utero caffeine exposure increase the susceptibility of an individual to developing cardiac or metabolic disease in adulthood? This study is an initial step indicating that in utero caffeine exposure can have long-lasting effects into adulthood and that caffeine can alter the DNA methylation pattern in the heart during early stages of embryonic development.
|
2016-05-12T22:15:10.714Z
|
2014-01-27T00:00:00.000
|
{
"year": 2014,
"sha1": "4a73b151e483cf4018661db44d8eb255188f0061",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0087547&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb2b7750d1e31e761dda96c5f2f0828ed27c1ed9",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
246781028
|
pes2o/s2orc
|
v3-fos-license
|
Protocol on Comparative Efficacy of Bruhatyadi Kwath as Compared to Furosemide for Improving e-GFR and Albuminuria in Chronic Kidney Disease - A Randomized Controlled Study
Background: The steady decrease of kidney function is referred to as chronic kidney disease. Kidney function is measured by the Glomerular Filtration Rate (GFR). According to Ayurveda, CKD can be correlated to Mutraghata because of the similarity of symptoms. In Ayurveda, Mutraghata is described under Mutraroga, which comes under Mutravahastrotas (urinary system). Objective: To assess and compare the efficacy of Bruhatyadikwath, Furosemide and Bruhatyadikwath along with Furosemide on symptoms, eGFR, and Albuminuria in various stages of Chronic Kidney Disease. Methodology: A total of 90 patients will be divided into 3 equal groups. Patients in group A will be treated with Bruhatyadikwatha, group B patients will be treated with Furosemide and patients in group C will be treated with Bruhatyadikwatha and Furosemide for 90 days. Follow-up will be taken every 30 days. Expected Results: Furosemide along with Bruhatyadikwath will show better improvement in eGFR and Albuminuria as compared to treatment with Bruhatyadikwath or Furosemide alone. Assessment of subjective criteria like edema, anorexia, weakness and vomiting will be done on days 0, 30, 60 and 90, whereas assessment of Serum creatinine, Blood Urea, Sr. Sodium, Sr. Potassium, eGFR (Cockcroft formula) and Albuminuria will be done before and after treatment (on day 0 and day 90). Result: Subjective and objective outcomes will be assessed by statistical analysis. Conclusion: It will be drawn from the result obtained.
INTRODUCTION
Chronic Kidney Disease (CKD) is a progressive, permanent decline in kidney function that usually occurs over months to years. It begins as a biochemical aberration, but when the kidney's excretory, metabolic, and endocrine functions deteriorate, clinical signs and symptoms of renal failure, commonly known as uraemia, develop. End Stage Renal Disease (ESRD) is a term used to describe a condition in which mortality is likely without RRT (Renal Replacement Therapy) (CKD stage 5) [1].
Chronic Kidney Disease (CKD) is becoming a major chronic disease worldwide. One reason is that the global incidence of diabetes and hypertension is quickly growing. Given India's population of over one billion people, the increased frequency of CKD is projected to cause severe challenges in the future for both healthcare and the economy. Indeed, the age-adjusted incidence rate of end-stage renal disease in India has recently been estimated to be 229 per million people, with more than 100,000 additional patients entering renal replacement programmes each year [2]. Only 10% of Indian patients with end-stage renal disease receive any renal replacement therapy, due to a lack of funding.
Therefore, exploration of a safe and alternative cost-effective therapy is highly required, which proves to be helpful in reducing requirement or frequency of dialysis and in postponing the renal transplantation.
Chronic kidney disease (CKD) is a kind of kidney disease in which kidney function gradually deteriorates over months to years.
Kidney function is a measure of the kidney's health and its contribution to renal physiology.
The Glomerular Filtration Rate (GFR) is a metric for measuring kidney function.
The Glomerular Filtration Rate (GFR) is the rate at which filtered fluid flows through the kidneys. In the absence of Chronic Kidney Disease, a GFR of 60 ml/min/1.73 m2 or above is considered normal [3]. Chronic Kidney Disease is characterised as having a GFR of less than 60 ml/min/1.73 m2 for three months. In CKD there are changes in blood, urine and imaging studies; in blood, Serum Creatinine and Urea are raised above normal. Albuminuria is the oldest and most widely used urinary marker of kidney dysfunction: Albumin is the most prevalent plasma protein, and its urinary excretion is determined by the combined effects of glomerular filtration and renal tubular processing. It is also used to track CKD progression. On the basis of GFR, Chronic Kidney Disease (CKD) is divided into five stages.
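As an illustration of the quantities used for staging, the following sketch computes the Cockcroft-Gault creatinine-clearance estimate (referred to in this protocol as "eGFR (Cockcroft formula)") and assigns the corresponding GFR-based stage band; the patient values are hypothetical.

```python
# A minimal sketch of the Cockcroft-Gault creatinine-clearance estimate and the
# GFR-based CKD stage bands quoted above. Input values below are hypothetical.

def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance in ml/min (not body-surface normalised)."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def ckd_stage(gfr):
    """CKD stage (1-5) from GFR; CKD is defined by GFR < 60 sustained for 3 months."""
    if gfr >= 90: return 1
    if gfr >= 60: return 2
    if gfr >= 30: return 3
    if gfr >= 15: return 4
    return 5

egfr = cockcroft_gault(age_years=55, weight_kg=70, serum_creatinine_mg_dl=1.8, female=False)
print(f"eGFR (Cockcroft) ~ {egfr:.0f} ml/min, CKD stage band {ckd_stage(egfr)}")
```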
The majority of patients with slowly progressive illness are asymptomatic until their GFR drops below 30 ml/min/1.73 m2, and others can be asymptomatic even with significantly lower GFR values.
Symptoms and indicators are typical when GFR goes below 15-20 ml/min/1.73m2, and they can influence practically all physiological systems [4]. Tiredness or shortness of breath, as well as lower limb swelling, can all be signs of renal anaemia or fluid overload.
Patients with worsening renal function may have pruritus, anorexia, weight loss, nausea, vomiting, and hiccups.
In very severe renal failure, the patient's respiration may be exceptionally deep (Kussmaul breathing) due to significant metabolic acidosis, and muscle twitching, fits, sleepiness, and coma are all possible [5].
Ayurvedic management has proved its potential as an alternative medicine for the treatment of a variety of ailments in recent years, and it continues to be a key source for the discovery of new medications, which has received more attention recently. Ayurvedic therapy is also becoming more popular for enhancing healthcare and preventing Chronic Kidney Disease (CKD), according to available data. Chronic Kidney Disease is not described in Ayurveda, but due to the similarity of symptoms it can be correlated with Mutraghata, which is one of the most important Mutraroga described in the ancient Samhitas. There are a variety of formulations available that target urinary system problems and have a variety of activities. Brihatyadikwatha, described in Sushruta Samhita, is one of them [6]. It contains Gokshur, Brihati, Patha, Indrayava, Kantakari and Yashtimadhu, as shown in Table 1. All these herbs have a Mutral (diuretic) property. In Sushruta Samhita, the Bruhatyadi Gana is mainly described for the management of Mutrakruccha, whereas in Sahastryogam it is indicated in the management of all Mutravikara [7]. It possesses a Rasayana property, which is helpful in the regeneration of damaged kidneys. The Deepan-Pachan property of these drugs reduces the production of Aam as well as Kleda. It corrects Mansa and Medadhatwagni by its Katu, Tikta Rasa and Ushna Veerya and thus reduces the production of Kha-Mala.
Rationale of the Study
Chronic Kidney Disease (CKD) is a global health problem that costs health systems a lot of money, and it's also a risk factor for cardiovascular disease (CVD).
CKD is linked to an increased risk of cardiovascular morbidity, premature mortality, and/or a lower quality of life at all stages. In modern science, the management of Chronic Kidney Disease mainly includes supportive treatment, the use of diuretics and, in severe cases, dialysis or renal transplantation. But all these treatments have their own limitations, such as high cost, adverse effects and complications. In Ayurveda, many herbal formulations having Mutral (diuretic) and Rasayana (rejuvenating) properties are recommended in the management of Mutravikara such as Mutraghata (Chronic Kidney Disease). Various research studies conducted on herbal drugs in Mutravikaras are available showing their efficacy [8,6,9]. Bruhtyadi Kwath possesses diuretic, rejuvenating, antibacterial and anti-inflammatory properties. Research studies have been conducted on Bruhatyadikwath in the management of Mutrakruchra, but no study has been conducted on Mutraghata [10]. So, for early-stage prevention and to check further progression, this study is planned to evaluate the efficacy of Bruhtyadikwath, along with Furosemide, for improving e-GFR and Albuminuria in Chronic Kidney Disease.
Aim
Comparative efficacy of Bruhatyadikwath as compared to Furosemide for improving e -GFR and Albuminuria in Chronic Kidney Disease.
Case Definition
Patients of either sex, aged 20 to 70 years, with an eGFR between 90 and 30 (Cockcroft formula), that is Chronic Kidney Disease of stage 1 to 3, and with Albuminuria will be included in the study.
Research Question
Whether Brihatyadikwath along with Furosemide is effective as compared to Brihatyadikwath and Furosemide given separately in improving e-GFR and Albuminuria in Chronic kidney Disease?
Hypothesis (a) Null Hypothesis (H0):
Bruhatyadikwath along with Furosemide may not be more effective than Bruhatyadikwath and Furosemide given separately in improving the e-GFR and Albuminuria in Chronic Kidney Disease (CKD)
(b) Alternative Hypothesis (H1):
BruhatyadiKwath along with Furosemide may be more effective than Bruhatyadikwath and Furosemide given separately in improving the e-GFR and Albuminuria in Chronic Kidney Disease (CKD).
Study Formulation (Bruhtyadi kwatha) Contents
Bruhtyadikwatha will be freshly prepared each time by a standard operating procedure as per the reference of Sharangadhar Samhita (Madhyam Khanda 2/1 and 8/1). Sixteen parts of water will be added to the coarse powder of the raw material, and the mixture will be heated until reduced to one-eighth of its volume. The mixture will then be filtered to obtain the decoction.
Sample Size
For calculating the sample size with the desired margin of error:
Statistical Analysis
The observations will be analyzed using the chi-square test and Student's unpaired t-test.
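A minimal sketch of these two tests on hypothetical data is given below; the outcome variables, group sizes, and numbers are illustrative only and do not come from the protocol.

```python
# Sketch of the planned analysis on hypothetical data:
# chi-square for a categorical outcome (e.g. edema present/absent at day 90 by group)
# and an unpaired (Student's) t-test for a continuous outcome (e.g. post-treatment eGFR).
from scipy import stats

# Hypothetical 3x2 contingency table: rows = groups A/B/C, columns = edema present/absent
edema_table = [[10, 20],
               [12, 18],
               [ 6, 24]]
chi2, p_chi, dof, _ = stats.chi2_contingency(edema_table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")

# Hypothetical post-treatment eGFR values for two of the groups
egfr_group_a = [42, 55, 61, 48, 52, 58, 45, 60]
egfr_group_c = [50, 63, 70, 57, 66, 72, 59, 68]
t_stat, p_t = stats.ttest_ind(egfr_group_a, egfr_group_c)  # unpaired t-test
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")
```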
Time duration till follow-up: The patients will be treated for a total of 90 days and will be followed up on days 30, 60 and 90.
Time schedule of enrolment, interventions:
Drug will be given from day 0 to day 30. Recruitment: Patients will be recruited, and a computerised algorithm will be used for random allocation of the 90 patients into three groups (30 patients in each group); a minimal sketch of such an allocation is shown below.
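The protocol does not specify the allocation algorithm, so the following is only a generic sketch of a computerised random allocation of 90 patients into three equal groups; the random seed and the group labels are assumptions made for illustration.

```python
# Sketch of computerised random allocation of 90 patients into three equal groups
# (A: Bruhatyadi kwath, B: Furosemide, C: both). The actual study algorithm is not specified.
import random

def allocate(n_per_group=30, groups=("A", "B", "C"), seed=2021):
    labels = [g for g in groups for _ in range(n_per_group)]
    rng = random.Random(seed)      # fixed seed so the allocation list is reproducible
    rng.shuffle(labels)
    return labels                  # labels[i] is the group of the i-th enrolled patient

allocation = allocate()
print(allocation[:10], "...", {g: allocation.count(g) for g in "ABC"})
```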
Data collection methods: Assessment criteria
Subjective criteria: Edema, anorexia, weakness and vomiting, assessed on days 0, 30, 60 and 90.
Objective criteria: 1. Serum creatinine 2. Blood Urea 3. Sr. Sodium 4. Sr. Potassium 5. eGFR (Cockcroft formula) 6. Albuminuria, assessed before and after treatment.
Data management: The data entry and coding will be done by the PI.
DISCUSSION
In Chronic Kidney Disease, kidney function gradually declines due to various pathologies. This reduces excretion of the waste products formed during metabolism and increases leakage of proteins and electrolytes because of damage to the nephrons. The loss of proteins, depletion of electrolytes and accumulation of nitrogenous waste products produce the symptoms. As per Ayurveda, Mutraghata means retention of urine along with pain in the suprapubic region due to an obstructive pathology: there is deranged function of Vatadosha, particularly Apanavata, which is the prime causative factor; it also vitiates Kaphadosha, affects the Mutravahastrotas, and derangement occurs in the Basti, which results in Mutraghata. In this study, patients will be divided into three equal groups and will be given the interventions shown in Table 2 [12]. Due to its Rasayana property, Brihatyadikwath will promote and boost cell growth and function. It acts as a Kaphaghna because of its Tikta-Katu rasa. It also acts as a Kledashoshak because of the Grahi guna of Indrayava and Brihati, and as a Srotovishodhana as it contains Patha and Kantakari; lastly, it has a Mutral property because of Gokshur. In this way, Brihatyadikwath helps in breaking the pathogenesis.
Furosemide is a loop diuretic that is used to treat hypertension and edema in people with congestive heart failure, liver cirrhosis, and renal disease. Diuretics enhance urinary salt and water excretion by inhibiting sodium reabsorption in particular renal tubules. It is a drug used in CKD. The subjective criteria shown in Table 3 and the objective criteria will be assessed before and after treatment.
CONCLUSION
Conclusion will be drawn after statistical analysis.
RECOMMENDATION
Furosemide is a known loop diuretic used in CKD; if BruhatyadiKwath is given along with it, the combination may be found effective in improving the subjective and objective parameters. The use of the two drugs in combination may therefore have a significant effect on preventing the progression of CKD and correcting albuminuria in the early stages.
LIMITATIONS
This study will not be conducted in patients with major systemic diseases such as uncontrolled diabetes mellitus, hypertension, malignancy and immunocompromised disorders, nor in advanced Chronic Kidney Disease (stages 4 and 5).
CONSENT
Written consent will be taken from each patient before starting the study. During the study, the confidentiality of each patient will be maintained. The consent form, together with all relevant information and other related documentation, will be given to the participants.
DISSEMINATION POLICY
The data will be disseminated by paper publication.
|
2022-02-13T16:24:51.471Z
|
2021-12-29T00:00:00.000
|
{
"year": 2021,
"sha1": "f3ad0008e32d4c4268acbdbae540a7f552ac350c",
"oa_license": "CCBY",
"oa_url": "https://www.journaljpri.com/index.php/JPRI/article/download/35231/66522",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cb7343edc654e7c2fcd631560f15038964ab9807",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
260263610
|
pes2o/s2orc
|
v3-fos-license
|
The Impact of En-bloc Transurethral Resection of Bladder Tumour on Clinical, Pathological and Oncological Outcomes: A Cohort Study
Background En-bloc transurethral resection of bladder tumour (ETURBT) has recently been proposed as a good alternative to transurethral resection of bladder tumour (TURBT) in terms of outcomes for bladder carcinoma. This study aims to assess the effectiveness of the technique in terms of clinical, pathological and oncological outcomes. Methodology In this prospective study, data were collected from patients who underwent ETURBT for bladder space-occupying lesions between June 2021 and June 2022. Demographic characteristics, tumour characteristics, and postoperative outcomes were recorded. Results A total of 52 patients were studied, the majority being male, with a mean age of 50.87 years. Smoking was recorded in 22 (38.5%) patients and 8 (15.4%) were on antiplatelet therapy. The majority fell in American Society of Anesthesiology (ASA) class I (59.6%). Most of the tumours were solitary (90.4%), primary (82.8%), of papillary architecture (73.1%), and between 1-3 cm in size. The lateral wall was the most common position, and detrusor muscle was seen in 98.1% of the specimens. T1 stage (57.7%) and low grade (67.3%) were the common characteristics noted. 76.9% of the ETURBTs were conducted using monopolar cautery. Recurrence was noted in 3 patients (5.8%) and bladder perforation in 1 patient (1.9%). Cautery artifact was seen in six patients (11.5%) and obturator jerk in nine patients (17.3%). Conclusion Our study suggests that ETURBT is a technique with a good success rate for bladder tumours less than 3 cm in size. The benefits include high chances of detrusor sampling while minimising crush artefacts and cautery damage. Specimen retrieval was challenging when the bladder tumour was solid and over 2 cm.
Introduction
Urinary bladder carcinoma is the fourth most common malignancy in men and the eighth most common malignancy in women in the Western world. Around 5% to 10% of all malignancies in men in Europe and the United States are bladder cancers [1]. Bladder cancer is the ninth most common cancer accounting for 3.9% of all cancer cases as per the Indian Cancer Registry data. Data on the exact incidence is scarce [2].
The primary management in all bladder space-occupying lesions includes a complete transurethral resection of the bladder tumour (TURBT) and a histopathological examination by a pathologist, on the basis of which further treatment is planned. While TURBT is the gold standard, this mode of resection violates typical oncological principles, due to the implantation of scattered and exfoliated tumour cells from the "piecemeal" resection. This has been associated with increased recurrence [3]. Other limitations include unclear resection margins and the inability to ensure the inclusion of detrusor muscle in the final histopathology sample. These factors decrease the accuracy of staging and the oncological outcomes of non-muscle invasive bladder cancer (NMIBC) [4]. To overcome these difficulties, en-bloc transurethral resection of bladder tumour (ETURBT) has been proposed. First described in 1997, ETURBT involves removing the tumour in a "one-piece" fashion. This involves a circular incision, either with a blade or with energy devices, around the tumour, followed by removal in toto with the underlying detrusor muscle [5]. The advantages include maintenance of the 3D architecture of the tumour, enabling accurate staging of bladder cancer and proper assessment of the margins of the tumour [6]. Remaining in the same surgical plane decreases complications. Avoiding tumour fragmentation decreases tumour spillage and improves oncological outcomes [7]. When a good assessment of the depth of invasion is possible, it can avoid re-look TURBT even in T1 tumours or high-grade tumours [5,8]. This especially helps extend the scope of this novel procedure to pathologies that were previously not treated definitively by TURBT, such as muscle-invasive bladder cancer (MIBC) and carcinoma-in-situ (CIS).
Very few studies have addressed the outcomes of ETURBT, especially in our population. In this study, we are performing a prospective analysis to assess the effectiveness of ETURBT in bladder tumours in terms of completeness of resection based on post-op histopathology and incidence of early and late complications.
Materials And Methods
This was a single-centre prospective study conducted in Apollo Hospitals, Chennai, India, between June 2021 and June 2022, involving the population presenting to the urology outpatient department. This study was conducted according to the ethical guidelines of the Declaration of Helsinki and its amendments. The study has been approved by our Institutional Ethics Committee (approval number: AMH-DNB-050/06-21) on 7th June 2021. All patients participating in this study provided informed consent. The authors confirm the availability of, and access to, all original data reported in this study.
The study included patients more than 18 years of age who underwent ETURBT for a new or recurrent spaceoccupying lesion in the bladder based on imaging or surveillance-detected recurrent NMIBC; single and multiple tumours seen on cystoscopy with a size of single tumour less than 3 cm. Exclusion criteria included tumours greater than 3 cm, tumours close to the ureteric orifice, restaging TURBTs for high-grade bladder cancer, patients not willing to participate, benign findings on histopathology, advanced disease on preop imaging and patients who required other simultaneous procedures like transurethral resection of the prostate or ureteroscopy.
The procedure was conducted in the operating room under general or spinal anaesthesia. A detailed cystoscopic evaluation was carried out in all cases. The bladder lesions were meticulously assessed and all data including site, size, multiplicity, relation to the ureteric orifices, and the appearance of the rest of the bladder mucosa were recorded. All resections were carried out in standard fashion either by laser or by monopolar cautery [5].
A circular incision was made 5-10 mm around the tumour. Blunt dissection was carried out with care taken to include the underlying detrusor muscle. Small tumours were retrieved with biopsy forceps while in certain cases, Ellick's bladder evacuator was used to retrieve the specimen. All the visible lesions were resected and the tissue was sent for histopathological examination. The biopsy chips were analysed by a dedicated pathologist and reported based on the presence of muscle in the sample, grade, stage and presence of cautery artifact.
Complications like catheter block secondary to bladder clots needing transfusion and signs of bladder perforation were looked out for in the initial few days after the procedure. The patient was discharged from the hospital when urine was clear after stopping irrigation for three hours. Repeat haemoglobin was performed in all patients 48 hours after the procedure. Foley was removed on postoperative day (POD) two if urine was clear. A follow-up cystoscopy was done on the third month for all patients to look for recurrence.
Statistical analysis was done using SPSS version 25.0 (IBM Corp., Armonk, NY). All continuous variables were tested for normality using the Shapiro-Wilk test. If they were normally distributed, they are expressed as mean ± SD; otherwise, as median (IQR).
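The following sketch illustrates this descriptive-statistics rule on a hypothetical sample; the values shown are not study data, and the 0.05 cut-off for the normality test is an assumption stated for illustration.

```python
# Sketch of the normality-dependent summary rule: Shapiro-Wilk test, then
# mean +/- SD if normal, otherwise median (IQR). The ages below are hypothetical.
import numpy as np
from scipy import stats

ages = np.array([38, 45, 50, 52, 55, 60, 61, 47, 44, 58, 63, 41])  # hypothetical ages

w_stat, p_value = stats.shapiro(ages)
if p_value > 0.05:                      # cannot reject normality
    print(f"mean +/- SD: {ages.mean():.1f} +/- {ages.std(ddof=1):.1f}")
else:
    q1, med, q3 = np.percentile(ages, [25, 50, 75])
    print(f"median (IQR): {med:.1f} ({q1:.1f}-{q3:.1f})")
```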
Results
Fifty-eight patients with a mean age of 50.87 years and a standard deviation of 11.94 years underwent ETURBT during the study period. Patients were included on an accrual basis. The majority were male (61.5%). Six patients were lost to attrition. Twenty-two patients had comorbidities and 20 were smokers. Eight patients were under antiplatelet therapy. The majority of patients fell in the American Society of Anesthesiology (ASA) class I. The demographic data are given in Table 1.
Tumour characteristics
Most of the tumours were primary (82.8%). There were up to three tumours in a single patient, with the majority being solitary tumours (90.4%). Papillary tumours were the majority (73.1%) and sizes of all the tumours varied from 1-3 cm. The most common site was noted to be the lateral wall. Of the tumours removed, 98.1% had detrusor muscle in the specimen and 57.7% were T1 staged. Most of the tumours were of low grade (67.3%). The commonly used energy device in our study was monopolar cautery (76.9%). The tumour characteristics are given in Table 2.
Postoperative outcomes
Cautery artifact was noted in 11.5% of patients, obturator jerk in 17.3%, and bladder perforation in one patient (1.9%). Follow-up cystoscopy was normal in 71.2% of the patients. Most patients needed three bottles of normal saline irrigation to clear urine, and foley catheter removal was done most commonly on postoperative day (POD) two (61.5%). Since there were multiple tumours in the same patient, the total number of tumours in the study here is 58.
In the subgroup of patients who had multiple tumours, all the tumours were present in the same area. All obturator jerks were experienced while resecting the lateral wall tumour. The bladder perforation noted was in the patient who had two tumours in the left lateral wall. One patient who had a tumour in the trigone of size 2 cm and resected with monopolar cautery was noted to have clot retention. Foley was removed on POD 10 for the patient who had bladder perforation.
Discussion
The endoscopic surgical management of NMIBC has been almost exclusively carried out by TURBT over the last 60 years [9]. During this resection process, cancer cells are released which have been shown to reimplant particularly at sites of surgical injury, which are the top contributing factor to recurrence [10][11][12]. In contrast, ETURBT allows bladder tumour removal in one piece. The main limitation of the studies describing ETURBT is that they describe a per-protocol series, which introduces a selection bias.
On the other hand, in certain patients (e.g., BCG-refractory NMIBC, MIBC patients), even a conventional TURBT cannot be regarded as a definitive surgery, and the patients may end up requiring additional intervention in the form of radical cystectomy. Given that ETURBT is now a well-established technique we feel that reporting outcomes based on the routine implementation of ETURBT is more informative for urologists considering adopting the procedure.
In our study, 61.5% were male patients and the remaining 38.5% were females. Other studies have reported a threefold to fourfold higher risk of bladder cancer in males as compared to females [13,14]. In our cohort, while men did not outnumber women by a huge margin, they still accounted for an almost two-fold higher risk than females. In other studies, the average age was 60.2 ± 4.4 years, while in our participants it was 50.8 years, almost 10 years lower; no patient in our study was younger than 29 years [15].
Compared to other studies reporting smoking as a risk factor, our cohort had 61.5% smokers while other studies have reported an incidence of 80.7% including former smokers [16]. About 82.8% of our patients presented with painless haematuria compared to 97% in a study by Gupta et al. [15]. However, our cohort included 19.2% of patients who were referred to our department because of an incidentally detected bladder mass during health check screening. Other studies report at least one comorbid condition in 66% of their study sample while our cohort reported considerably less with 42.3% [17]. About 15.4% of our population were on antiplatelet medications for their heart disease.
Our study only included tumours less than 3 cm in size; although some studies estimated that ETURBT is not feasible for almost 30% of tumours due to size, morphology, and/or location, laser ETURBT has been performed for tumours up to 4.5 to 5.5 cm in diameter and in virtually all locations throughout the bladder [18,19].
While our study included tumours in all locations, we experienced difficulty with anterior wall tumours which could not be resected by ETURBT. Resection was attempted with monopolar current and subsequently required conversion to the conventional method in those cases. These were not included in our study. Tumours in the dome were resected both by laser and monopolar TURBT without any conversions into conventional TURBT. In the study, 42 (76.9%) patients underwent monopolar TURBT and 12 (23.1%) patients underwent laser TURBT. The energy source was selected based on the surgeon's preference.
Studies have compared the different energy sources for ETURBT and concluded that no difference was found in staging and diagnosis of bladder cancers, as all energies ensure a high-quality specimen [20]. In the current cohort, 17.3% of patients had an obturator jerk at the time of tumour resection and all the jerks were experienced while resecting the lateral wall tumours.
In the trial comparing energy sources for ETURBT, a higher rate of jerk was experienced in lateral wall tumours using monopolar or bipolar energies as compared to a laser source [20]. In the current cohort, the numbers are less for a comparative analysis though it is important to note that the only patient who had perforation underwent resection with a monopolar current for multiple lateral wall tumours.
We faced difficulty while retrieving the resected specimen in tumours more than 2 cm. This is in line with the finding by Naselli et al. who observed that solid lesions are more difficult to extract than papillary lesions. A similar difficulty was reported for lesions arising from the bladder neck [21]. An 18-22 Fr three-way bladder catheter was deployed at the end of the procedure, and continuous bladder irrigation was started. Early oneshot instillation of 40 mg mitomycin C was administered according to current guidelines, but not given in case of bladder wall perforation or excessive bleeding. In the current cohort, one patient had clot retention due to haematuria. Patients followed the postoperative care and follow-up protocols of our institution in line with current European Association of Urology NMIBC guidelines [22].
While other studies have used the presence of muscularis mucosae to improve the accuracy of T1 substaging, our study did not include this [6]. Similar to the observations of Truong et al., we also noticed significant interference of cautery artefact with staging [23].
ETURBT has been described as a potential solution to increase the detrusor muscle sampling rate. In the current cohort, 98.1% of samples showed the presence of detrusor muscle in the specimen. Similar rates have been reported in other studies [3,24]. ETURBT can avoid cautery damage and crush artefacts to the specimen. Tangential tissue sections and random embedding of the tumour tissue can also be avoided [25].
Whether ETURBT plays a role in reducing the recurrence rate of NMIBC is controversial. In the current cohort, follow-up cystoscopy was normal in 71.2% of patients and 5.8% of patients had a recurrence. The remaining 23% of the patients had to undergo other modalities of treatment based on their pathology. Some studies have shown no difference in recurrence rate [26,27]. In a study by Sureka et al., ETURBT recurrence-free survival (RFS) was 45.1 months compared to 28.5 months in those who underwent TURBT [8].
Limitations of our study include a short follow-up period and the absence of comparison among the various energy devices for ETURBT or with conventional TURBT. Resection margins could have been evaluated in detail as more importance was given to detrusor muscle and cautery artifacts in our study.
There is limited high-quality prospective trial data on the recurrence rates following ETURBT although the results of ETURBT are promising. A peep into this area of outcome assessment would be a great tool to study the long-term outcome.
|
2023-07-29T15:20:10.927Z
|
2023-07-01T00:00:00.000
|
{
"year": 2023,
"sha1": "646e8571ff532eab76d80f324948ae6d45752068",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9c11f7da2142b5b0aac6b9b490d78a82d4528037",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
265461947
|
pes2o/s2orc
|
v3-fos-license
|
1884. Metagenomic Sequencing of the Lung Microbiota Facilitates Diagnosis and Prognosis of Nontuberculous Mycobacterial Pulmonary Disease
Abstract Background The incidence of nontuberculous mycobacterial pulmonary disease (PNTM) is rising, but available diagnostics and treatments have limitations. The lung microbiota plays an important role in the onset and progression of lung diseases. Thus, investigating the relationship between the lung microbiota and treatment outcomes in PNTM should contribute to the development of improved therapeutic, diagnostic, and prognosis-determination tools. Methods Bronchoalveolar lavage fluid (BALF) was collected from 169 patients with PNTM, 46 patients with pulmonary tuberculosis (PTB), 17 individuals without mycobacterial infection, and 37 individuals without infection. The lung microbiota in BALF was characterized using metagenomic sequencing. Patients with PNTM and PTB were identified with a microbiome-based classifier. Different pneumotypes were defined and associated with outcomes. Results Analysis of BALF samples revealed that patients with PNTM differed from controls in terms of lung microbiota richness and composition. The species co-occurrence network in PNTM exhibited low diversity and predominantly positive interactions. Random forest models improved the classification efficacy of PNTM and PTB based on the lung microbiota. At the 13-month median follow-up, pneumotype 1 (with Mycobacterium, opportunistic pathogens, and anaerobes) had a lower probability of sustained culture conversion (hazard ratio = 0.29; 95% confidence interval = 0.11–0.72; P = 0.007) than pneumotype 2, indicating a worse prognosis. Conclusion Characterization of the lung microbiota improved diagnostic efficacy and identified high-risk patients. We conclude that sampling the lung microbiota can aid in clinical decision-making and provide novel therapeutic avenues for PNTM. Disclosures All Authors: No reported disclosures
[Figure caption: Differences in lung microbial taxonomic characteristics between PNTM and PTB patients. (A, B) Relative abundance comparisons at the genus and species levels, including major bacteria and fungi; taxa overrepresented in PNTM (log2 fold change > 0) are shown on the right and taxa overrepresented in PTB (log2 fold change < 0) on the left; box plots show the median, 25th and 75th percentiles, and whiskers of 1.5 times the interquartile range; P-values from the Mann-Whitney U test with Benjamini-Hochberg adjustment (adjusted P < 0.05 marked *), log2 fold changes from DESeq2 (all displayed species p < 0.05). (C, D) Classification of PNTM versus PTB by a random forest model: (C) the top 15 most discriminating taxa; (D) classifier performance measured by the area under the ROC curve (AUC), with cross-validated AUCs from 5-fold cross-validation (10 replicates); Model 1 contained the mNGS diagnostic criteria, Model 2 all differential species, and Model 3 important species filtered by recursive feature elimination (RFE). mNGS, metagenomic next-generation sequencing.]
[Figure caption: Pneumotype can predict the microbial outcome of PNTM. (A) Dirichlet Multinomial Mixtures (DMM) identified 2 compositionally distinct microbial communities of PNTM patients (proportions of communities in each microbial state at the genus level). (B) Principal component analysis showed that the community composition of the lung microbiota was distinct (PERMANOVA, R2 = 0.14, P = 0.001). (C) CD56+16+ counts differed significantly between DMM1 and DMM2 (p = 0.035). (D) The proportion of bronchiectasis differed significantly between DMM1 and DMM2 (p = 0.02). (E) The culture conversion rate was significantly greater in patients with the DMM2 pneumotype than in those with the DMM1 pneumotype. (F) Kaplan–Meier curves of persistent positive cultures during follow-up, stratified by pneumotype (Mann-Whitney U test, χ2 test, and log-rank test, respectively).]
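As a rough illustration of the classification approach described above (a random forest evaluated by 5-fold cross-validated AUC), the sketch below uses a synthetic feature matrix; the species features, labels, sample counts per class, and model settings of the actual study are not reproduced here.

```python
# Sketch of a microbiota-based random-forest classifier evaluated by cross-validated AUC,
# in the spirit of the models described above. The feature matrix and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((215, 50))            # 215 samples x 50 species relative abundances (synthetic)
y = rng.integers(0, 2, size=215)     # 1 = PNTM, 0 = PTB (synthetic labels)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"5-fold cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```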
|
2023-11-29T05:04:54.896Z
|
2023-11-27T00:00:00.000
|
{
"year": 2023,
"sha1": "f195111004d36ec85c0c7788033c6ba28fd77b66",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f195111004d36ec85c0c7788033c6ba28fd77b66",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
7422050
|
pes2o/s2orc
|
v3-fos-license
|
Super-enhancement and Control of Amh Expression
In previous work it was shown that mutation of site 1 in the downstream enhancer sequence (DE) led to ablation of enhancement. Mutation of the Wilms tumour factor element (Wt), situated in the Amh promoter between the tata box and the start of translation (TSS), also led to ablation of enhancement. This suggested that these sites may be the anchor points for a specific duplex factor bridging remote DNA elements to the promoter. Mutation analysis of the DNA sequence between sections 1 and 2 of DE was carried out by site directed mutagenesis. It is reported here that site 4 lying between DE1 and DE2, plays a key role in controlling the level of enhancement.
Introduction
AMH (Amh) (Anti-Müllerian hormone), a member of the TGF-β (BMP) family of growth factors, has been widely studied with regard to its role in sexual development [1]. SMAT-1, a prepubertal line of mouse Sertoli cells [2], transfected with a plasmid in which a prokaryotic reporter gene (d2EGFP) is driven and controlled by Amh promoter and enhancer sequences, has helped in understanding the control of gene expression. While the use of such a reporter may be somewhat indirect compared with expression of the eukaryotic gene in its normal physiological environment, results obtained using the EGFP reporter largely agree with previous work in which Amh expression was measured directly [3]-[7], supporting the view that reporter gene analysis can make valid contributions to understanding the driving and control of specific gene expression. It is assumed that SMAT cells contain an Amh gene in its normal genomic setting together with a full set of the requisite transcription factors present in a prepubertal Sertoli cell. The prospective downstream enhancer region (DE) was divided into 3 subregions on the basis of a comparison with human AMH.
Methods
SMAT-1 cells [2] were grown adherent to tissue culture plasticware in DMEM-F12 medium containing GlutaMAX (Gibco), 10% FCS and antibiotics (penicillin at 10 U/ml; streptomycin at 10 µg/ml). For an assay, 10⁵ cells were established in individual wells of Costar 24-well plates one day prior to transfection (in the absence of antibiotics) with LipofectAmine 2000 and 800 ng plasmid DNA per well. After a further 2 days of culture, green EGFP expression was measured using a flow cytometer. Fluorescence emission by individual cells was recorded as red and green so that autofluorescence could be excluded. Expressed d2EGFP has a half-life of 2 hours in vivo, so the measurement is mainly of rate, not accumulation. An index of expression was calculated as the geometric mean brightness per cell (Gm) × the percentage of cells in the green window.
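A minimal sketch of this expression index is given below, assuming that Gm is the geometric mean green brightness of the cells that fall in the green window; the intensity values and the gating threshold are synthetic and only illustrate the arithmetic.

```python
# Sketch of the expression index: geometric mean green brightness per cell (Gm)
# multiplied by the percentage of cells in the green window. The assumption that
# Gm is computed over gated (green-window) cells, the threshold, and the synthetic
# intensities are illustrative choices, not values from the study.
import numpy as np

def expression_index(green_intensity, green_threshold=100.0):
    """Gm (geometric mean of gated cells) x percentage of cells in the green window."""
    green_intensity = np.asarray(green_intensity, dtype=float)
    in_window = green_intensity > green_threshold
    if not in_window.any():
        return 0.0
    gm = np.exp(np.log(green_intensity[in_window]).mean())   # geometric mean brightness
    pct_green = 100.0 * in_window.mean()
    return gm * pct_green

cells = np.random.default_rng(1).lognormal(mean=5.0, sigma=0.6, size=10_000)  # synthetic
print(f"expression index (synthetic data): {expression_index(cells):.0f}")
```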
Site-directed mutations were successfully carried out in the promoter region and at DE2 in the enhancer using the double-overlap PCR method [8] [9] with Deep Vent polymerase; this method was consistently unsuccessful when applied to other enhancer sites, using oligo-nucleotide primers based on the sequence shown in Figure 1(b). The "Phusion" method (Thermopol) was successful and has been used for mutation of sites 3-6 (see Figure 1(b)).
Experiments and Results
GATA-1 and GATA-4 DNA have opposite effects when added to the transfection mixture which included a plasmid vector with an Amh promoter but lacking any enhancer DNA [10]. Figure 2 shows that a proportionally similar difference is still manifested if the DE enhancer sequence is added to the vector, although the actual overall level of expression is higher. It is clear from the results summarised in this figure that control at the level of the promoter is supplemented and not overridden by enhancement. It seems that the enhancement mechanism is independent of control at the level of transcription factor-promoter interaction; however, with other enhancers other interpretations are likely [14] [15].
Previously it was shown that a DNA sequence immediately downstream of the PA signal of a mouse Amh gene, when inserted in an equivalent position in the EGFP plasmid vector, had a moderate enhancer effect on green (reporter) expression [12]. Since it seemed possible that the ~23 bp of DE between DE1 and DE2 might include a key sequence for enhancement, DNA of this region was made by annealing the appropriate oligo-nucleotides [13] and adding it to the transfection mixture of plasmid constructs with Amh promoter sequences but lacking an intrinsic DE. This double-stranded oligo-DNA and the single-stranded constituent oligos were added to the plasmid DNA-LipofectAmine mixture at the time of transfection of the SMAT cells. The results of these experiments are summarised in Figure 3. High levels of added reagents had a profound (non-specific) inhibitory effect on green expression. A small but significant expression was seen with lower amounts of the reverse oligonucleotide but not with annealed oligo-DNA. Further experiments (Figure 3(b) and Figure 3(c)) confirmed these results and in addition showed that the forward oligo also had a small but significant incremental effect.
[Figure 3 caption: The annealed oligo-DNA and its free oligo-nucleotides were added to the transfection mixture 2 days prior to measurement of d2EGFP expression using a flow cytometer. (a) A complete non-specific suppression was observed with quantities of reagent greater than 30 pMol per culture (0.6 ml); with lower amounts there was neither a decrease nor an increase in the rate of expression by annealed oligo-DNA, but there was a small but significant incremental effect with 1 pMol of the reverse oligo. (b) A repeat of the experiment gave the same result. (c) These results were confirmed in another repeat experiment, with the added result that 1 pMol of forward oligo also gave a significant increase in expression. Comparison of the 10 pMol groups in (a) and (b) suggests that this concentration is at a critical point between non-specific suppression and a specific incremental effect.]
Discussion
In plasmid constructs without an enhancer sequence, mutation of the Wilms tumour factor element (Wt), either alone or in combination with mutation of other promoter elements, had no effect. However, when DE is present in the construct, mutation of Wt and/or the downstream enhancer element DE2 resulted in ablation of enhancement [12]. These results suggested that the DE sequence was brought into close contact with the promoter immediately upstream of the start of translation by a specific duplex factor, thus forming a major loop in the genomic DNA as well as bringing the essential elements for enhancement into close juxtaposition with the promoter. A reverse identity exists between parts of the DE and Wt sequences. The bridge anchor point and the homology (cccacc) site are adjacent in Wt but 15 nt apart in DE, suggesting that when DE is bridged (looped) to Wt, there is a bulge or minor loop in the DE DNA. Mutation of site 4 (see Figure 1(b) and Figure 5), which lies in this hypothetical minor loop, was made to ascertain whether this DNA plays any part in enhancement. The results of mutating sites 3-6 are summarised in Figure 4.
A sequence consisting of 23 nt immediately upstream of DE2 was submitted to an NIH BLAST search of the mouse genome database, in particular with reference to chromosome 10; expected hits < 4, hits achieved > 120. Since this sequence forms part of the Amh enhancer, it seems unlikely to be an example of a random insert of viral origin. Perhaps it is an example of a mechanism imparting ordered 3D structure to genomic DNA, which may be related, in some way and in some circumstances, to enhancement [15].
Figure 5 is a sequence-based schematic illustrating a possible interface of DE2 with the promoter between the tata box and the start of translation. One interpretation of the observation that super-enhancement occurs on mutation of site 4 is that super-enhancement is the default condition, normally down-regulated (silenced) by a masking factor or organelle, such as a nucleosome-like body [16], specifically binding to this site. Competition for the site by a non-inhibitory factor, such as a micro-RNA or perhaps an oligo-nucleotide, could ease this control and release the brake on the underlying default enhancement mechanism. A similar relaxation of control is achieved by mutating the site. Removal of control of default expression at critical moments in development, such as the removal of the Müllerian duct in early male development [1], would result in a burst of hyper-expression. The high frequency of occurrence of the ~23 nt sequence upstream of DE2 suggests that it may be associated with a mechanism for imparting an ordered 3D structure to genomic DNA, which may be part of the mechanism for enhancement by distant DNA elements: a phenomenon widely acknowledged in the recent literature [17]. If enhancement is manifested by the addition of signals to the message, triggering its longevity and/or rapid recycling during translation [18], then this might be detectable as mRNA sequence changes during super-enhancement.
Figure 1(a) is a summary of key factors thought to influence Amh expression. The role of promoter elements in Amh (EGFP) expression is summarised in the legend to Figure 1(b).
Figure 1 .
Figure 1.(a) A simplified schematic to illustrate the interelationships between the components of the mouse Anti-Mullerian hormone (Amh) gene.The triangles represent transcription and other elements.The dashed line indicates the Wt-DE2 connection.(b)A detailed sequence map of the Amh gene outlined in (a).Nucleotide sequence (5' to 3') of a mouse Amh promoter and accompanying downstream enhancer sequence.SF3a2-PA is the polyadenylation signal of an upstream gene coding for a spliceosome component.Potential promoter elements (white on black) are identified on the basis of sequence similarity with human and other mammalian Amh promoter sequences: the order of elements is conserved.These potential elements are identified by superscript titles, with mutated sequences indicated as subscripts.Where possible the superscript titles are defined by their affinity for known transcription factors.The start of translation (0) is position 8647 in GenBank mouse genomic nucleotide sequence X83,733.DE is a downstream enhancer starting at the polyadenylation signal for Amh: the DNA for this element was inserted in the d2EGFP vector at a MluI site as indicated in this figure.The MluI site replaces an AflII site which was in the vector as supplied by Invitrogen.Previously it was shown that mutation of distSF1; sox; Se1; and proxSF1; resulted in a significant reduction in d2EGFP expression[13].In contrast mutation of proxGata resulted in a small but significant increase in expression[13].Mutation of the other elements, including the Wilms tumour factor element (Wt) had no measurable effect in this expression system when a potential enhancer was absent.Dual mutation of Wt with any other of the promoter elements had no effect.However when a downstream enhancer (DE) was added to the system, mutation of either Wt or DE2 ablated the enhancer effect[12]: this suggested that these sites are the anchor points for a duplex factor forming a bridge holding the enhancer sequence onto the promoter immediately upstream of the translation start site.A potential short homologous sequence in DE and in Wt is marked by a superscript + (+ + + + +): the distance between this site and the bridge anchor points is larger in DE than in Wt, suggesting that the DE adherent to the promoter at Wt forms a small bulge or minor loop.
Figure 2. Addition of Gata1 or Gata4 plasmid DNA to the transfection mixture has opposite effects on expression in the absence of a downstream enhancer (DE) [10]. Here it is shown that there is a similar differential effect when there is moderate enhancement due to the presence of intrinsic DE. Ctrl is the DNA of a third-party plasmid lacking a Gata insert. As in all figures, p-values are derived from a two-tailed t test between control and experimental group; these are included where a positive difference is statistically significant. As in all experiments, there are 4 cultures per group. Error bars are SEM. Statistical values calculated using GraphPad Prism.
Figure 4(a) and Figure 4(b) illustrate the effect of mutating site 4 (see Figure 1(b), Figure 5) in the prospective minor loop of DE formed when it is closely associated with the Amh promoter. This modification of DE results in "super-enhancement" of EGFP expression. It is also clear that with a simultaneous mutation of Wt, super-enhancement is ablated, presumably by disruption of the Wt-DE2 bridge. The term "super-enhancer" is used here in the context of the experimental results described; there are other definitions [14].
Figure 4. The consequence of mutating site 4, in the putative minor loop in DE, when this is bridged to Wt in the promoter, is illustrated in this figure: there is a very large increment in d2EGFP expression in both experimental groups. (a): The increment by unmutated DE is relatively modest but nevertheless significant. The large increment in EGFP expression is due to an increase in the rate of expression by individual cells (G m). Mutations (1a, 1b) and of Wt in the promoter ablate the moderate enhancer effect of DE [11]. Mutation of sites 3 and 6 had no effect in this expression system. As mentioned above, mutation of site 4 led to a super-enhancement. (b): A similar result to that illustrated above was obtained in a repeat experiment, where in addition it was shown that breaking the "bridge", by additionally mutating Wt, also ablates super-enhancement.
Figure 5. A sequence-based diagrammatic representation of the hypothetical juxtaposition of the Amh promoter and the downstream enhancer element (DE). The bottom line is the promoter sequence from the tata box to the translation start site (TSS). *−* represents the gap between the end of the PA signal of the gene and the start of the downstream enhancer: this gap is short in this example but is probably very much greater in other examples of enhancement by remote elements. The superscripted "enhancer element" is an indication of a potential functional sequence which may be responsible for enhancement; "Site4" is the binding site for the brake on (silencer of) super-enhancer activity. = = = represents the anchor sites of a high-affinity duplex bridging factor, and ------ is a low-affinity sequence homology.
Ethanolic Extract from Seed Residues of Sea Buckthorn (Hippophae rhamnoides L.) Ameliorates Oxidative Stress Damage and Prevents Apoptosis in Murine Cell and Aging Animal Models
Hippophae rhamnoides L. has been widely used in research and application for almost two decades. While significant progress was achieved in the examination of its fruits and seeds, the exploration and utilization of its by-products have received relatively less attention. This study aims to address this research gap by investigating the effects and underlying mechanisms of sea buckthorn seed residues both in vitro and in vivo. The primary objective of this study is to assess the potential of the hydroalcoholic extract from sea buckthorn seed residues (HYD-SBSR) to prevent cell apoptosis and mitigate oxidative stress damage. To achieve this, an H2O2-induced B16F10 cell model and a D-galactose-induced mouse model were used. The H2O2-induced oxidative stress model using B16F10 cells was utilized to evaluate the cellular protective and reparative effects of HYD-SBSR. The results demonstrated the cytoprotective effects of HYD-SBSR, as evidenced by reduced apoptosis rates and enhanced resistance to oxidative stress alongside moderate cell repair properties. Furthermore, this study investigated the impact of HYD-SBSR on antioxidant enzymes and peroxides in mice to elucidate its reparative potential in vivo. The findings revealed that HYD-SBSR exhibited remarkable antioxidant performance, particularly at low concentrations, significantly enhancing antioxidant capacity under oxidative stress conditions. To delve into the mechanisms underlying HYD-SBSR, a comprehensive proteomics analysis was conducted to identify differentially expressed proteins (DEPs). Additionally, a Gene Ontology (GO) analysis and an Encyclopedia of Genes and Genomes (KEGG) pathway cluster analysis were performed to elucidate the functional roles of these DEPs. The outcomes highlighted crucial mechanistic pathways associated with HYD-SBSR, including the PPAR signaling pathway, fat digestion and absorption, glycerophospholipid metabolism, and cholesterol metabolism. The research findings indicated that HYD-SBSR, as a health food supplement, exhibits favorable effects by promoting healthy lipid metabolism, contributing to the sustainable and environmentally friendly production of sea buckthorn and paving the way for future investigations and applications in the field of nutraceutical and pharmaceutical research.
Introduction
Hippophae rhamnoides L., commonly known as sea buckthorn or seaberry, boasts a widespread distribution across Asia, Europe, and North America [1]. It belongs to the Elaeagnaceae family and has been traditionally used in a wide range of fields, such as foods, pharmaceuticals, and cosmetics [2]. The diverse components of sea buckthorn offer a plethora of bioactive compounds, making them prized for medicinal and nutritional applications [3,4]. Of noteworthy importance, sea buckthorn seed oil has been subject to extensive investigation due to its exceptional properties, including wound healing, antioxidant, anti-inflammatory, anticancer, antimicrobial, and emollient activities [5][6][7].
Nevertheless, the residues left behind after sea buckthorn seed oil extraction are often discarded or underutilized. However, recent research has highlighted the presence of valuable natural compounds, including flavones, polyphenols, and unsaturated fatty acids, in sea buckthorn seed residues (SBSR) [8][9][10][11].
Despite the promising potential of SBSR for various applications, a comprehensive exploration of its practical utility in raw material development is lacking. Furthermore, the choice of extraction methods can lead to variations in the composition and efficacy of the resulting extracts. Commonly used solvents for SBSR extraction include water and ethanol. Previous studies indicated that aqueous SBSR extracts exhibit hypoglycemic and hypolipidemic effects in type 2 diabetic rats induced with streptozotocin and a high-fat diet [12]. In addition, investigations into different concentrations of organic solvents for sea buckthorn seed extraction have demonstrated varying antioxidative capacities, with ethyl acetate extracts displaying the highest and isopropyl extracts displaying the lowest antioxidative potential [13]. It is also worth noting that SBSR retains a significant amount of liposoluble substances even post-oil recovery. Advanced extraction methods, such as supercritical carbon dioxide, pressurized ethanol, and enzyme-assisted extraction, have been used to isolate valuable components, including tocopherols and monosaccharides, from sea buckthorn pomace and seeds [14].
Numerous in vitro studies have extensively validated the antioxidant capacities of SBSR. These investigations encompass a range of assays, including the estimation of reactive oxygen species generation, measurement of enzymatic/non-enzymatic antioxidant activities/levels, evaluation of peroxisome proliferator-activated receptor levels, and assessment of 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activities [1,7,15]. However, under conditions of oxidative stress, the body initiates a protective stress response to counteract free radicals. This response entails not only direct free radical elimination but also augmentation of antioxidant enzyme activity (e.g., glutathione peroxidase (GSH-Px), superoxide dismutase (SOD), and catalase (CAT)) along with a reduction in the accumulation of peroxidation products like malondialdehyde (MDA) and lipofuscin (LP), thus conferring a protective role [16][17][18]. While existing research on SBSR has predominantly focused on a single level of impact, a comprehensive investigation utilizing animal experiments is needed to elucidate the mechanisms by which HYD-SBSR functions within the body. Therefore, a thorough exploration of the antioxidative mechanism of SBSR is warranted.
In recent years, flow cytometry has undergone significant advancements, enabling the identification of multiple phenotypic subsets, the selection of individual cells, and even cell isolation using sorting. This technology plays a pivotal role in clinical settings by identifying aberrant cells, quantifying excessive or reduced populations of specific cells, and monitoring changes in tracked cell populations [19]. In this study, in conjunction with mouse experiments, the levels of antioxidant enzymes and peroxides were also analyzed in vivo. A proteomics analysis was conducted to scrutinize changes in differentially expressed proteins (DEPs) post-HYD-SBSR administration. Subsequent Gene Ontology (GO) analysis and Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis were performed to provide further insights into the impact of DEPs. B16F10 cells, a melanoma cell line derived from mice, possess high metabolic activity leading to reactive oxygen species (ROS) production, potential stress response activation, rapid growth facilitating timely observations, and relevance to simulating disease-related conditions [20,21]. These attributes collectively position B16F10 cells as a valuable model for investigating oxidative stress and cellular responses. The D-galactose-induced mouse aging model offers simplicity and ease of operation, enabling rapid inter-group comparisons. It has been demonstrated as an effective simulation of aging effects and finds wide application in antioxidant research [22,23]. Prior research has demonstrated that sea buckthorn seed products can contribute to reducing blood glucose and lipid levels in obese mice [12,24]. Using them as supplementary food additives holds the potential to regulate lipid metabolism in the body. This approach could also mitigate certain suboptimal health conditions arising from high-sugar diets. The primary objective of this study was to assess the effects of the hydroalcoholic extract from sea buckthorn (Hippophae rhamnoides L.) seed residues (HYD-SBSR) at the cellular and physiological levels while establishing a correlation between in vitro and in vivo experimental findings. This not only supports the broader application of sea buckthorn seed by-products in food but also addresses environmental concerns linked to sea buckthorn production, thereby contributing to sustainable and eco-friendly industrial development.
Preparation of HYD-SBSR
HYD-SBSR was prepared following the methodology outlined in [25]. Ground powder of sea buckthorn seed residues (provided by Qinghai Kompu Biotechnology Co., Ltd., Xining, China; dried, crushed, and sifted) was mixed with an 80% ethanol aqueous solution at a ratio of 8:1 (liquid-to-solid ratio, mL/g). The mixture was then extracted for 1.5 h. After extraction, centrifugation was carried out at 400× g and 4 °C for 15 min. The resulting supernatant was collected, and the extraction process was repeated twice to enhance the yield and purity of the extracted components. The collected supernatants from each extraction were combined to create a consolidated sample. The ethanol in the sample was subsequently removed using rotary evaporation under vacuum conditions at a controlled temperature of 40 °C. This evaporation process facilitated solvent removal, concentration of the extracted substances, and conversion into a fine powder. The collected HYD-SBSR was stored securely for future investigations. The flavonoid, procyanidin, and total phenolic contents reached up to 354.00 ± 21.00 mg RE per g DW, 319.31 ± 11.70 mg CE per g DW, and 271.24 ± 91.30 mg GAE per g DW, respectively [25].
Cell Culture
B16F10 cells (obtained from the Cell Resource Center, Institute of Basic Medicine, Chinese Academy of Medical Sciences) were cultured in Dulbecco's Modified Eagle Medium (DMEM, Grand Island Biological Company, New York, NY, USA), supplemented with 10% heat-inactivated fetal calf serum (FBS, GIBCO, Darmstadt, Germany) and a 1% antibiotic-antimycotic solution consisting of 100 units/mL penicillin/streptomycin and 100 U/mL amphotericin (GIBCO, Darmstadt, Germany), at 37 °C in a humidified incubator (Shanghai Shengke, Shanghai, China) under 5% CO2.
Cell Viability
Cell viability was determined using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, Sigma, Welwyn Garden City, UK) assay. B16F10 cells were seeded into a 96-well plate at a concentration of 5000 cells per well and were incubated at 37 °C in a cell incubator with 5% CO2 for 12 h. The cells were treated with H2O2 or HYD-SBSR at different concentrations for a period. Details of the concentration values are provided in Sections 2.6 and 2.7.
The MTT assay involved the addition of a 100 µL mixture of MTT solution (5 mg/mL) and DMEM at a volumetric ratio of 1:5. This treatment was carried out for 4 h, followed by the addition of 150 µL of DMSO to dissolve the resulting products. After a 10-minute incubation at 37 °C, the absorbance was measured at a wavelength of 490 nm to determine the results.
Cell viability was calculated following the equation below:

Cell viability (%) = ODt / OD0 × 100%

where ODt represents the experimental group absorbance minus the zeroing group absorbance, and OD0 represents the control group absorbance minus the zeroing group absorbance.
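As a minimal sketch of this calculation (the function name and absorbance readings below are illustrative, not taken from the study), the background-corrected viability of a single well can be computed as follows:

```python
def cell_viability(od_experimental, od_control, od_blank):
    """Percent viability from MTT absorbance readings at 490 nm."""
    od_t = od_experimental - od_blank  # background-corrected treated well
    od_0 = od_control - od_blank       # background-corrected control well
    return od_t / od_0 * 100.0

# Hypothetical readings: treated well 0.62, untreated control 1.10, cell-free blank 0.08
print(round(cell_viability(0.62, 1.10, 0.08), 1))  # ~52.9% viability
```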
Establishment of the H2O2-Induced B16F10 Cell Oxidative Stress Model

A model of acute ROS-induced oxidative stress in B16F10 cells was established using a 4-hour treatment with hydrogen peroxide (H2O2).

Different concentrations of H2O2, ranging from 0.4 to 44.1 mM, were utilized for treatment, each lasting for 4 h. At least 6 parallel wells were used for each group. Cell viability was measured. The IC50 was selected to establish the oxidative stress model, striking a balance between eliciting significant effects within a reasonable timeframe and ensuring reproducibility.
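The text does not state how the IC50 was derived from the viability data; one common approach, sketched here with made-up viability values spanning the 0.4-44.1 mM range, is to fit a four-parameter logistic dose-response curve and read off its midpoint:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

doses = np.array([0.4, 2.2, 4.4, 8.8, 17.6, 35.2, 44.1])       # mM H2O2 (illustrative)
viability = np.array([99.0, 97.0, 95.0, 50.3, 5.0, 1.0, 0.5])  # % viability (illustrative)

params, _ = curve_fit(logistic4, doses, viability, p0=[0.0, 100.0, 8.0, 2.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.1f} mM")
```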
Protective and Repair Effects of HYD-SBSR on the H2O2-Induced B16F10 Cell Oxidative Stress Model

To explore the protective effects of HYD-SBSR on cells, B16F10 cells seeded in a 96-well plate were treated with 0.05, 0.1, 0.2, and 0.4 mg/mL of HYD-SBSR for 24 h, washed twice with PBS, and treated with 8.8 mM H2O2 for 4 h. Cell viabilities were detected using an MTT assay.

Additionally, the repair effects of HYD-SBSR were investigated. B16F10 cells were first exposed to H2O2 for 4 h to establish the oxidative stress model. Subsequently, the cells were treated with different concentrations of HYD-SBSR for 24 h. Cell viability was assessed using an MTT assay.
GSH-Px, CAT, SOD, and MDA in B16F10 Cells
The experimental procedure involved the treatment of B16F10 cells with H2O2 (8.8 mM) for a duration of 4 h to establish the H2O2-induced cell oxidative stress model. Subsequently, the cells were subjected to treatment with varying concentrations of HYD-SBSR (0, 0.05, 0.10, 0.20, and 0.40 mg/mL) for a period of 24 h. The group of cells without any HYD-SBSR treatment served as the model group for comparison.

Cells at a density of 1 × 10^5 were planted in a six-well plate and incubated overnight. Cells were treated with HYD-SBSR (0, 0.05, 0.10, 0.20, and 0.40 mg/mL) for 24 h and then treated with H2O2 (8.8 mM) for 4 h. Only cells treated with HYD-SBSR were used as the HYD-SBSR control. Cultured cells were washed with PBS. The cells were lysed using 200 µL Western and IP cell lysis buffer (Beyotime, Nanjing, China), followed by a 12,000× g centrifugation for 10 min to collect the supernatant. The contents of GSH-Px, CAT, SOD, and MDA were then determined in a 96-well plate (Corning, Corning, NY, USA), according to the instructions of the GSH-Px, CAT, SOD, and MDA detection kit (Beyotime, Nanjing, China). Protein content was measured using a BCA protein kit from Beyotime. The contents of these parameters were calibrated using the protein content and expressed as micrograms per milligram of protein.
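For illustration only (the helper function and example numbers are hypothetical and not taken from the kit instructions), expressing a measured analyte per milligram of total protein amounts to a simple normalization:

```python
def per_mg_protein(analyte_amount, protein_mg_per_ml, lysate_volume_ml=0.2):
    """Normalize an analyte measured in a lysate aliquot to total protein content."""
    total_protein_mg = protein_mg_per_ml * lysate_volume_ml  # 200 uL lysis buffer per well
    return analyte_amount / total_protein_mg

# Example: 45 mU of GSH-Px measured in a lysate containing 0.75 mg/mL protein
print(per_mg_protein(45, 0.75))  # 300.0 mU/mg protein
```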
Flow Cytometry Analysis for Cell Cycle Distribution and Apoptosis
Annexin V-PE is a fluorescently labeled protein that binds to calcium-dependent phospholipids with a strong affinity for phosphatidylserine (PS) binding sites, similar to Annexin V-FITC. During the early stages of cell apoptosis, the loss of membrane symmetry exposes PS on the outer surface of the cell membrane. In both the early and late stages of apoptosis, PS is present on the outer surface of the cell membrane. Importantly, early apoptotic cells maintain membrane integrity, preventing the entry of 7-AAD. In contrast, late apoptotic cells exhibit compromised membrane integrity and can be co-stained with Annexin V-PE or V-FITC and 7-AAD. This staining method allows for the differentiation and identification of cells at different stages of apoptosis based on variations in phospholipid exposure and membrane integrity [26,27].
In this study, cell death was detected and analyzed with flow cytometry using FACS Calibur (Becton Dickinson Biosciences, San Jose, CA, USA).
Seeding was performed with a cell density of 1.5 × 10^5 cells per well in a 6-well plate, with 3 wells per group. The model group was initially exposed to 8.8 mM H2O2 for 4 h, followed by treatment with 0.1 mg/mL HYD-SBSR for 24 h. Conversely, the HYD-SBSR group underwent the opposite treatment sequence. Then, cells from each group were harvested using trypsin digestion for subsequent analyses. After treatment, approximately 10,000 cells were obtained from each sample. Staining of cell samples was performed using Annexin V-PE and 7-AAD (Annexin V-PE Apoptosis Detection Kit I, BD Bioscience, San Jose, CA, USA). Experimental data were obtained and analyzed using CellQuest (Becton Dickinson Immunocytometry Systems, San Jose, CA, USA). Annexin V-PE and 7-AAD fluorescence (Becton, Dickinson, ND, USA) were used for two-parameter point plots.
Animals and Treatment
A D-galactose-induced aging mouse model was established to investigate the protective effect of HYD-SBSR. The experiment was conducted using specific pathogen-free (SPF)-grade Institute of Cancer Research (ICR) male mice (Mus musculus).

Except for the control group, the remaining groups were subjected to daily intraperitoneal injections of 10% D-galactose solution (100 mg/kg body weight). Control-group mice received an equivalent volume of 0.9% physiological saline. The low-dose, medium-dose, and high-dose groups were orally administered 100, 300, and 600 mg/kg body weight of HYD-SBSR, respectively. The control and model groups were administered an equivalent volume of physiological saline.

The treatment duration lasted 42 days. Throughout the intragastric administration period, the mice were maintained in an environment with a temperature of 22 to 25 °C, relative humidity of 50% to 60%, and a 12-hour light (08:00-20:00) and 12-hour dark cycle under fluorescent illumination. Bedding was changed every 3 days. All mice received a standard diet and had access to water.

No mice experienced mortality over the course of the entire experiment, and all subjects were included in this study. The Guide for the Care and Use of Laboratory Animals (National Institutes of Health, Stapleton, NY, USA) was strictly followed in designing all animal experimental procedures. Ethical approval for all animal experiment procedures was granted by the Experimental Animal Welfare Ethics Committee of Beijing Experimental Animal Research Center (BLARC-2017-A015).
Collection of Mouse Experimental Samples
Following euthanasia by cervical dislocation, segments of liver and brain tissues were first frozen in liquid nitrogen and then homogenized in an ice bath to prepare a 10% homogenate at a 1:9 (w/v) ratio with pre-chilled physiological saline, which was centrifuged at 3000 r/min and 4 °C for 15 min. The supernatants were removed to determine tissue biochemical indexes (GSH-Px, CAT, SOD, MDA, lipofuscin, and total antioxidant capacity). All indexes were determined following the manufacturer's instructions.

Additionally, a proteomics assay was conducted on liver samples from the low-dose group, using TMT labeling quantitative proteomics technology (CapitalBio Technology, Beijing, China). The whole process included protein sample preparation, TMT labeling, high-performance liquid chromatography (HPLC) fractionation, liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis, and proteomics data analysis. A detailed protocol for this analysis is provided in the Supplemental Materials.
Differentially Expressed Proteins (DEPs) and GO Enrichment and KEGG Pathway Enrichment Analyses
The DEPs satisfied the following conditions: an average ratio (fold change) > 1.1 (up-regulation) or < 0.9 (down-regulation), together with a p-value < 0.05.
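A minimal sketch of this screening step, assuming the quantified proteins are available as a table with fold-change and p-value columns (the protein names and values below are invented for illustration only):

```python
import pandas as pd

proteins = pd.DataFrame({
    "protein": ["Eef1e1", "Farp1", "Aga", "Pigr", "Alb"],
    "fold_change": [1.25, 1.18, 1.12, 0.85, 1.02],  # treatment / reference ratio
    "p_value": [0.010, 0.030, 0.040, 0.020, 0.600],
})

up = proteins[(proteins.fold_change > 1.1) & (proteins.p_value < 0.05)]
down = proteins[(proteins.fold_change < 0.9) & (proteins.p_value < 0.05)]
print("up-regulated:", list(up.protein))      # ['Eef1e1', 'Farp1', 'Aga']
print("down-regulated:", list(down.protein))  # ['Pigr']
```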
Functional classification of the DEPs was performed according to the Gene Ontology (GO) annotation and enrichment analysis. The Encyclopedia of Genes and Genomes (KEGG) Orthology-Based Annotation System (KOBAS) v2.0 was used. The enrichment analysis of GO functional significance unveiled functional categories that exhibited significant enrichment in the pool of differential proteins in comparison with the broader genomic background. This analysis entailed the submission of all differential proteins to the GO database (http://www.geneontology.org/, accessed on 20 July 2023), where the count of proteins for each term was calculated. Subsequently, hypergeometric tests were applied to pinpoint GO entries that exhibited substantial enrichment among the differential proteins relative to the genome background. After multiple-testing correction, GO terms with a p-value of ≤0.05 were considered significantly enriched in the set of differential proteins. These DEPs were then categorized into three primary classifications, namely, biological process (BP), cellular component (CC), and molecular function (MF). The KEGG database was used to identify enriched pathways. A two-tailed Fisher's exact test was used to evaluate the enrichment of DEPs relative to all the identified proteins within specific pathways. A pathway achieving a corrected p-value of <0.05 was deemed to be significant. These pathways were subsequently classified into hierarchical categories in accordance with the KEGG website's structure.
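The hypergeometric test underlying this enrichment step can be reproduced for a single GO term with scipy; the counts below are hypothetical and serve only to illustrate the calculation:

```python
from scipy.stats import hypergeom

N = 5510  # background: all reliably identified proteins
K = 120   # background proteins annotated with the GO term (hypothetical)
n = 100   # differentially expressed proteins
k = 8     # DEPs annotated with the term (hypothetical)

# P(X >= k): chance of seeing at least k annotated proteins among the DEPs
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.2e}")
```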
Statistical Analysis
The experimental data were analyzed using SPSS 19.0 (SPSS, Chicago, IL, USA) and GraphPad Prism 9.0 (GraphPad Software, La Jolla, CA, USA) software. One-way (single-factor) ANOVA was used for comparisons between groups, and a t-test was used for pairwise comparisons. A p < 0.05 was considered statistically significant, and the results are shown as mean ± standard deviation.
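As a sketch of this statistical workflow (the group means, sample sizes, and random data below are illustrative only, not taken from the study), the one-way ANOVA and a pairwise two-tailed t-test can be run with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(600, 30, 10)  # hypothetical enzyme activities, n = 10 per group
model = rng.normal(550, 30, 10)
treated = rng.normal(590, 30, 10)

f_stat, p_anova = stats.f_oneway(control, model, treated)  # across-group comparison
t_stat, p_pair = stats.ttest_ind(treated, model)           # pairwise two-tailed t-test
print(f"ANOVA p = {p_anova:.3f}, treated vs. model p = {p_pair:.3f}")
```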
Establishment of the H2O2-Induced B16F10 Oxidative Stress Model

The impact of H2O2 on cell viability and the induction of oxidative stress in B16F10 cells were investigated. An MTT assay was used to assess the viability of B16F10 cells exposed to varying concentrations of H2O2, ranging from 0.4 to 44.1 mM. The results demonstrated that as the dose of H2O2 increased, cell viability progressively decreased (Figure 1). Conversely, as the H2O2 concentration decreased, cell viability exhibited an increase, and the effects on B16F10 cells were nearly 100% when the H2O2 concentration was below 4.4 mM. However, when the H2O2 concentration exceeded 17.6 mM, the viability of B16F10 cells was almost reduced to 0%. For the purpose of establishing an oxidative stress model, the IC50 value was chosen, which represents the concentration at which cell viability is reduced by 50%. Specifically, the viability of B16F10 cells treated with 8.8 mM H2O2 was measured to be (50.31 ± 2.53)%. Therefore, the oxidative stress model was established using the conditions of 8.8 mM H2O2 exposure for a duration of 4 h.
Protective and Repair Effects of HYD-SBSR on H2O2-Induced B16F10

The investigation delved into the impact of varying concentrations of HYD-SBSR (ranging from 0.05 to 0.4 mg/mL) on the viability of B16F10 cells (Figure 2A). Remarkably, when treated with 0.05 and 0.1 mg/mL of HYD-SBSR, cell viability surpassed the 80% mark, underscoring the compound's low cytotoxicity and propensity to sustain cell survival rates beyond 80%. Consequently, the concentration of 0.1 mg/mL HYD-SBSR was deemed suitable for subsequent experimental conditions.
Additionally, this study examined the potential of both pre- and post-treatment with HYD-SBSR in the context of H2O2-induced oxidative stress. The examination encompassed assessments of cell viability (Figure 2A) as well as apoptosis (Figure 2B,C).
In both of the studies on different HYD-SBSR treatments, the respective model group generally exhibited significant oxidative stress (p < 0.01) compared with their respective control group. The pre-treatment with HYD-SBSR at 0.1 mg/mL (HYD-SBSR + 8.8 mM H2O2 in Figure 2A) resulted in a substantial augmentation of B16F10 cell viability, showcasing a clear contrast with the model group. Conversely, the other concentrations failed to induce similar beneficial effects. Furthermore, the post-treatment strategy using 0.1 mg/mL HYD-SBSR (8.8 mM H2O2 + HYD-SBSR in Figure 2A) also exhibited the ability to enhance cell viability when juxtaposed with the model group.
Flow cytometry analysis was used to examine cell apoptosis at a concentration of 0.1 mg/mL HYD-SBSR (Figure 2B). This enabled the computation of rates for early, late, and total apoptosis (Figure 2C). Strikingly, the model group demonstrated the highest levels of early, late, and total apoptosis among the various groups studied. Notably, the proportion of cells undergoing early apoptosis eclipsed that of cells undergoing late apoptosis. The pre- and post-treatment regimens with HYD-SBSR were both capable of attenuating the pro-apoptotic effects induced with H2O2, thereby reducing the number of cells undergoing either early or late apoptosis. Interestingly, among the studied interventions, the pre-treatment with HYD-SBSR exhibited a relatively stronger protective effect against H2O2-induced oxidative stress.

To gain deeper insights into the protective and repair properties of HYD-SBSR in B16F10 cells, intracellular antioxidant enzyme levels (GSH-Px, SOD, and CAT) and lipid peroxidation products, such as MDA content, were detected.
As illustrated in Figure 2D, the analysis involved cells subjected to different treatments, including H2O2 and HYD-SBSR, as well as pre- and post-treatment strategies in H2O2-induced models. In comparison with the control group, a discernible trend emerged that the GSH-Px, CAT, and SOD levels notably decreased, while the MDA content exhibited a marked increase in the model group (p < 0.01). Conversely, cells treated solely with HYD-SBSR demonstrated minimal alterations in these indices (p > 0.05), with the exception of CAT (p < 0.05), as compared to the control group.
Significantly, the post-treatment with HYD-SBSR following H2O2 exposure (H2O2 + HYD-SBSR treatment) yielded a substantial increase in the levels of cellular enzymes (GSH-Px, CAT, and SOD) (p < 0.01), coupled with a reduction in MDA contents (p < 0.01), when contrasted with the model group. This intervention resulted in the GSH-Px, CAT, SOD, and MDA contents reaching values of (303.06 ± 19.24) mU/mg protein, (107.95 ± 1.29) mU/mg protein, (237.27 ± 21.98) mU/mg protein, and (8.12 ± 0.96) µmol/g protein, respectively, indicative of the potential reparative influence of HYD-SBSR. Similarly, the pre-treatment regimen involving HYD-SBSR (HYD-SBSR + H2O2 treatment) displayed a comparable trend. However, it was noted that the enhancement of GSH-Px, CAT, and SOD levels was relatively subdued, and the inhibition of MDA was less pronounced compared with the post-treatment strategy using HYD-SBSR.
Antioxidant Enzyme and Peroxide Levels In Vivo
D-galactose-induced aging mice were utilized to investigate antioxidant enzyme and peroxide levels in vivo. The continuous administration of D-galactose led to disturbances in glucose metabolism within mouse cells, disrupting the body's antioxidant defense system and resulting in the accumulation of free radicals. This, in turn, triggered oxidation reactions leading to the production of substances like LP and MDA, culminating in bodily aging.

Apart from assessing the total antioxidant capacity (T-AOC), the activities of key antioxidants, including SOD, GSH-Px, and CAT, offered direct insight into the body's ability to counteract free radicals. For the analysis, liver and brain tissues were collected from mice to measure these antioxidant enzyme and peroxide levels (Figure 3). In comparison with the control group, the activities of SOD, GSH-Px, CAT, and T-AOC in the liver and brain tissues of the model group exhibited significant reductions (p < 0.01), effectively confirming the successful establishment of the D-galactose-induced aging model.
As depicted in Figure 3A, liver GSH-Px activities were significantly elevated in both the HYD-SBSR-L group (77.85 ± 13.10 mU/mg protein) and the HYD-SBSR-H group (48.61 ± 9.12 mU/mg protein) in comparison with the model group (p < 0.01). However, no statistically significant differences were observed between the HYD-SBSR-M group and the model group (p > 0.05). Conversely, for brain tissue, no noteworthy distinctions were evident between the three doses of HYD-SBSR and the model group (p > 0.05).
Figure 3B depicts the activities of CAT in both liver and brain tissues. Mice in the HYD-SBSR-L group exhibited a marked increase in liver CAT activity (587.66 ± 25.63 U/mg protein, p < 0.01), surpassing that in the model group (551.00 ± 41.85 U/mg protein). In contrast, medium and high doses of HYD-SBSR did not significantly affect liver CAT activities when compared to the model group (p > 0.05). However, a substantial impact of HYD-SBSR on CAT activity in brain tissue was evident, showing values of 42.42 ± 6.64 U/mg protein as compared with the model group. Notably, all three doses of HYD-SBSR significantly enhanced brain CAT activity (p < 0.01) in a dose-dependent manner.

Turning to liver SOD activities (Figure 3C), both the HYD-SBSR-M and HYD-SBSR-L groups exhibited significantly higher levels compared with the model group (p < 0.05). No significant differences were observed in liver SOD activities between HYD-SBSR-H and the model group (p > 0.05). Similar patterns emerged for brain SOD activities, showcasing a dose-dependent trend. Medium- and high-dose groups (HYD-SBSR-M and HYD-SBSR-H) displayed significantly elevated brain SOD activities (p < 0.01) compared with the model group, whereas no substantial differences were observed between the HYD-SBSR-L and model groups (p > 0.05).

The total antioxidant capacities (T-AOC) of both liver and brain tissues in the model group were notably lower than those in the control group (Figure 3F; liver, p < 0.01; brain, p < 0.05). HYD-SBSR treatment brought about a significant enhancement in T-AOC levels within the liver and brain tissues. T-AOC levels in the HYD-SBSR-L group surpassed those in the model group (liver, p < 0.05; brain, p < 0.01). However, no statistically significant differences in T-AOC levels were observed between the HYD-SBSR-M and HYD-SBSR-H groups and the model group (p > 0.05).

The levels of MDA (Figure 3D) and LP (Figure 3E) in the D-galactose-induced aging model group were significantly higher than those in the control group (LP, p < 0.05; MDA, p < 0.01). After HYD-SBSR treatment, the levels of LP and MDA in the liver and brain were lower than those in the model group (p < 0.01).
TMT-Based Quantitative Proteomics Analysis of Liver Tissue
In order to further clarify the mechanism underlying HYD-SBSR in vivo, we conducted a TMT-based quantitative proteomics analysis coupled with bioinformatics assessment. Leveraging the mouse (mmu) protein database, a comprehensive total of 5510 reliable proteins were identified under conditions with less than a 1% false discovery rate (FDR). Figure 4 illustrates the screening process of DEPs using the fold change (FC) value and p-value as pivotal criteria (FC ≥ 1.1 or ≤ 0.9 and p ≤ 0.05). In comparison with the control group, the model group exhibited a set of 79 DEPs, among which 32 were upregulated and 47 were downregulated (Table S1). Similarly, the HYD-SBSR group demonstrated 100 DEPs, consisting of 63 upregulated and 37 downregulated proteins, relative to the model group (Table S2). Intriguingly, six DEPs were found to overlap between the DEPs for both groups (Figure 4C, Table 1). Among these DEPs, three (Eef1e1, Farp1, and Aga) displayed an intriguing pattern of being upregulated in the model group and concurrently downregulated in the HYD-SBSR group (Table 1).

We performed GO enrichment analysis separately on the DEPs identified in the control versus model group (Figure 5A) and the model versus the HYD-SBSR group (Figure 5B). In the comparison between the control and model groups, the DEPs exerted significant impacts on 11 types of biological processes (BPs), 6 types of cellular components (CCs), and 6 types of molecular functions (MFs) (p < 0.05). Among the top five enriched GO terms were processes such as catecholamine metabolic process, positive regulation of cholesterol biosynthetic process, cellular response to glucagon stimulus, amino acid binding, and carboxy-lyase activity. In the comparison between the model and HYD-SBSR groups, the DEPs significantly influenced 9 BP, 10 CC, and 10 MF categories (p < 0.05). Among the top five enriched GO terms were processes like negative regulation of intestinal phytosterol and cholesterol absorption, 3'-phosphoadenosine 5'-phosphosulfate biosynthetic process, sulfate adenylyl transferase (ATP) activity, ATP-binding cassette (ABC) transporter complex, and adenylyl sulfate kinase activity.

Further analysis revealed the enriched KEGG pathways (p < 0.05). In the analysis of the control vs. the model group (Figure 6A), the top three pathways were cysteine and methionine metabolism, metabolic pathways, and biosynthesis of amino acids. In the model vs. the HYD-SBSR group (Figure 6B), 10 pathways showed significance (p < 0.05), including selenocompound metabolism, PPAR signaling pathway, lysosome, ABC transporters, metabolic pathways, fat digestion and absorption, glycerophospholipid metabolism, complement and coagulation cascades, cholesterol metabolism, and sulfur metabolism.
Discussion
H2O2 is known to readily diffuse into nuclear tissue, leading to the onset of various oxidative stress conditions. Due to this property, exogenous H2O2 has often been used in studies to induce oxidative stress damage and apoptosis. This approach helps researchers investigate the cellular protective and repair effects of bioactive substances. Cellular damage often results in a decline in the body's ability to eliminate H2O2, causing an accumulation of ROS and the initiation of lipid peroxidation reactions. In turn, this leads to the formation of products such as MDA, which can damage vital biological molecules like proteins and lipids. To maintain cellular homeostasis, the body relies on its antioxidant enzyme system, which includes key enzymes like GSH-Px, CAT, and SOD. These enzymes play a crucial role in breaking down hydrogen peroxide generated during metabolism and neutralizing ROS and other free radicals that arise during oxidative stress. In this study, an H2O2-induced B16F10 model was established to evaluate the protective and repair abilities of HYD-SBSR at the cellular level in vitro. The results indicated that HYD-SBSR exhibited superior protective capabilities, as evidenced by its tendency to enhance cell viability and reduce H2O2-induced apoptosis (Figure 2A-C). However, the cellular antioxidant assays hinted that while HYD-SBSR demonstrated a protective effect, its repair effect showed even greater promise (Figure 2D). Although the findings did not align completely when comparing the two treatment types, it was evident that HYD-SBSR held the potential to counteract oxidative stress. This aligned with the findings from our laboratory's analysis of flavonoid, procyanidin, and total phenolic contents present in HYD-SBSR [25].
To address the limited reports available on the impact of HYD-SBSR in vivo, a more in-depth investigation into its potential repair effects on antioxidant enzymes and peroxides in mice was carried out. The administration of three different levels of HYD-SBSR demonstrated the ability to elevate antioxidant enzyme levels in the liver and brain. These results suggested that the liver, known for its detoxification and metabolism functions, played a crucial role in maintaining cellular homeostasis.

To delve further into the molecular pathways underlying the antioxidant effects of HYD-SBSR in vivo, a proteomics approach was used. Importantly, three DEPs, namely, Eef1e1, Farp1, and Aga, were screened, which were upregulated in the model group and decreased after the treatment of HYD-SBSR. Conversely, one DEP (Pigr) demonstrated the opposite expression pattern.
Of significance among the DEPs is the protein Aga, also referred to as aspartylglucosaminidase, a lysosomal enzyme that starts as an inactive precursor molecule and is swiftly activated within the endoplasmic reticulum [28]. An Aga deficiency leads to aspartylglycosaminuria, a lysosomal disorder causing impaired glycoprotein degradation [29]. Remarkably, research by Ulla Dunder et al. illustrated that a 10% increase in Aga activity resulted in a 20% reduction in aspartylglycosaminuria accumulation [30]. The increased expression of Aga was detected in the model group, probably indicating a self-regulation of the body against disordered glucose metabolism.

Eef1e1, also known as eukaryotic translation elongation factor 1 ε 1, plays a role in protein synthesis and cell differentiation [31,32]. It positively modulates the ATM response to DNA damage [33] and can be induced by DNA-damaging agents like UV, Adriamycin, actinomycin D, and cisplatin. The increased expression of Eef1e1 in the model group is understandable, given its connection to the DNA damage response. The decrease in Eef1e1 protein levels after HYD-SBSR treatment suggested a potential reparative effect.

Moreover, Farp1 was identified as one of the guanine nucleotide exchange factors, which belong to a family of regulatory proteins for Rho GTPases, influencing various cellular processes [34]. Farp1 interaction with cell surface proteins regulates neuronal development [35,36], and high expression is associated with lymphatic invasion and metastasis [37]. In the context of D-galactose-induced aging mice with disrupted glucose metabolism and ROS accumulation, upregulated Farp1 likely combats oxidative stress and inflammation in the model group. Conversely, the decreased Farp1 expression upon HYD-SBSR administration implies a beneficial effect.

Pigr (polymeric immunoglobulin receptor precursor) is a single transmembrane protein. Its expression was upregulated after HYD-SBSR treatment. The impact on the MEK/ERK pathway suggests potential attenuation of liver injury in mice [38].
Regarding the GO and KEGG pathway analyses, our findings revealed that HYD-SBSR significantly influences GO terms related to mitochondria, lipid storage, triglyceride homeostasis, ATP-binding cassette (ABC) transporter complex, and more.
Enriched KEGG pathways in the HYD-SBSR group compared with the model group encompass the PPAR signaling pathway, fat digestion and absorption, glycerophospholipid metabolism, and cholesterol metabolism. These pathways are closely associated with the body's antioxidant status and changes in MDA and LP indicators. PPARs, in particular, play a pivotal role in lipid metabolism, mitochondrial function, and antioxidant defense, helping mitigate oxidative stress [39]. Enhanced expression of ABCG5 and ABCG8 transporters following HYD-SBSR treatment suggests a potential mechanism for reducing oxidative stress by promoting cholesterol excretion and metabolic balance [40,41]. This antioxidant effect is indicated by restored SOD, CAT, and GSH-Px activities and decreased MDA levels, which contributes to the reduction in obesity and hepatic steatosis in the liver. The proteomics analysis provides initial insights into the mechanisms underlying HYD-SBSR's in vivo actions, paving the way for comprehensive research on its promising applications in oxidative stress-related conditions.
Conclusions
In this study, we initiated our research by creating an oxidative stress model using H2O2 on B16F10 cells, aiming to assess the potential cytoprotective and repair effects of HYD-SBSR. The results we obtained pointed toward a decrease in apoptosis rates and an improvement in resistance to oxidative stress upon treatment with HYD-SBSR. Additionally, our findings suggested that HYD-SBSR exhibited significant properties in terms of facilitating cell repair.

By incorporating the outcomes of antioxidant enzyme and peroxide analyses at the cellular level, further investigations were conducted to evaluate the impact of HYD-SBSR on antioxidant enzymes and peroxides in mice. The results exhibited significant antioxidant performance, particularly at lower concentrations of the extract, suggesting a potent ability to enhance antioxidant capacity.
Figure 1. The effect of different H2O2 concentrations on the cell viability of B16F10 cells. Six parallel wells were used in each group.
Figure 3. Levels of GSH-Px (A), CAT (B), SOD (C), MDA (D), LP (E), and T-AOC (F) in the liver and brain of five groups of mice (n = 10). Control, the control group in which normal mice were only injected with physiological saline every day; D-galactose, the oxidative stress model group only injected with physiological saline every day; HYD-SBSR-L, the oxidative stress model group injected with 100 mg/kg HYD-SBSR every day; HYD-SBSR-M, the oxidative stress model group injected with 300 mg/kg HYD-SBSR every day; HYD-SBSR-H, the oxidative stress model group injected with 600 mg/kg HYD-SBSR every day. Compared with the control group, * indicates a significant difference, p < 0.05, and ** indicates a highly significant difference, p < 0.01. Compared with the model group, # indicates a significant difference, p < 0.05, and ## indicates a highly significant difference, p < 0.01.
Figure 4. Volcano plots and Venn diagrams of DEPs. (A) Volcano plot of the control group vs. the model group, where fold change refers to the ratio of protein abundance in the model group compared to the control group. (B) Volcano plot of the model group vs. the HYD-SBSR group, where fold change refers to the ratio of protein abundance in the HYD-SBSR group compared to the model group. (C) Venn diagrams showing the distribution of overlapping proteins among the control group and the model group (C vs. M-up and C vs. M-down) and the model group and the HYD-SBSR group (M vs. HS-up and M vs. HS-down).
Figure 5. GO enrichment analysis: (A) the control vs. the model group and (B) the model vs. the HYD-SBSR group.
Figure 6. KEGG pathway analysis of DEPs (p < 0.05): (A) the control vs. the model group and (B) the model vs. the HYD-SBSR group.
Table 1. Overlapping DEPs in the control group vs. the model group and the model group vs. the HYD-SBSR group.
Tracking of Borrelia afzelii Transmission from Infected Ixodes ricinus Nymphs to Mice
Quantitative and microscopic tracking of Borrelia afzelii transmission from infected Ixodes ricinus nymphs has shown a transmission cycle different from that of Borrelia burgdorferi and Ixodes scapularis. Borrelia afzelii organisms are abundant in the guts of unfed I. ricinus nymphs, and their numbers continuously decrease during feeding.
of spirochetes are then dramatically reduced during subsequent molting (6). Spirochetes persisting in the nymphal midgut upregulate OspA (7) and stay attached to the TROSPA receptor on the surface of the midgut epithelial cells (8). Spirochetes remain in this intimate relationship until the next blood meal. As the infected nymphs start feeding on the second host, Borrelia spirochetes sense appropriate physiochemical stimuli that trigger their replication (7,9). Their numbers increase exponentially (10,11), and the spirochetes downregulate OspA and upregulate OspC (7,12). Simultaneously, ticks downregulate the production of TROSPA (8). These changes help spirochetes to detach from the midgut, penetrate into the hemolymph, migrate to the salivary glands (8), and infect the vertebrate host.
Understanding of Lyme borreliosis in Europe lags far behind that in the United States, mainly because the situation is complicated by the existence of several different species in the B. burgdorferi sensu lato complex that act as causative agents of the disease. To date, only a few papers regarding transmission of B. burgdorferi sensu lato strains by I. ricinus ticks have been published. Available publications suggest that the transmission of European Borrelia strains differs from the model cycle described for B. burgdorferi/I. scapularis (5, 13-15).
In this study, we present an updated view on the B. afzelii transmission cycle. We have performed a quantitative tracking of B. afzelii from infected mice to I. ricinus and back to naive mice. We further tested the role of tick saliva in infectivity and survival of B. afzelii spirochetes.
RESULTS
Borrelia afzelii-Ixodes ricinus transmission model. In order to understand the Lyme disease problem in Europe, the development of a transmission model is essential for the European vector I. ricinus and local Borrelia strains of the B. burgdorferi sensu lato complex. For this purpose, we established a reliable and robust transmission model employing C3H/HeN mice, I. ricinus ticks, and the B. afzelii CB43 strain isolated from local ticks (16). This strain develops systemic infections in mice and causes pathological changes in target tissues. Variably intensive lymphocytic infiltrations were detected in the heart, where the majority of inflammatory cells were concentrated in the subepicardial space with infiltration of myocytes (Fig. 1A). Inflammatory infiltration was prominent within the urinary bladder. The most prominent changes were in the submucosa, close to the basal membrane (Fig. 1B). In the skin, weak infiltration of the epidermis and dermis was documented; however, most lymphocytes were found in deep soft tissues (Fig. 1C). Borrelia afzelii CB43 also turned out to be highly infectious for I. ricinus ticks, as positive infection was detected in 90% to 100% of molted nymphs that fed on infected mice as larvae.
B. afzelii population grows rapidly in engorged I. ricinus larvae and during molting to nymphs. Studies on the dynamic relationship between the Lyme disease spirochete and its tick vector were previously performed on an I. scapularis/B. burgdorferi model (6,10). Nevertheless, little is known about the growth kinetics of European B. afzelii in I. ricinus ticks. The number of spirochetes was determined in engorged I. ricinus larvae fed on B. afzelii-infected mice and then at weekly intervals until larvae molted to nymphs. Measurements were completed at the 20th week postmolt. The mean number of spirochetes in fully fed I. ricinus larvae examined immediately after repletion was relatively low, 618 ± 158 (± standard errors of the means [SEM]) spirochetes per tick. The spirochetes then multiplied rapidly in engorged larvae, and their numbers continued to increase during molting to nymphs. The maximum number of spirochetes, 21,005 ± 4,805 per tick, was detected in nymphs in the 2nd week after molting. Spirochetal proliferation then halted and the average spirochete number became relatively stable from the 4th to 20th week postmolt, slightly oscillating around the average number of about 10,000 spirochetes per tick (Fig. 2).
B. afzelii numbers in I. ricinus nymphs dramatically drop during feeding. We further examined the absolute numbers of B. afzelii spirochetes in infected I. ricinus nymphs during feeding. Nymphs were fed on mice and forcibly removed at time intervals of 24, 48, and 72 h after attachment, and the spirochetes were then quantified by quantitative PCR (qPCR). Prior to feeding, the mean number of spirochetes per nymph was 10,907 ± 2,590. After 24 h of tick feeding, the number of spirochetes decreased to 7,492 ± 3,294. In the following 2nd and 3rd day of blood intake, the numbers continued to drop to 2,447 ± 801 and 720 ± 138 spirochetes per tick, respectively (Fig. 3A). As this result was in striking contrast to the reported progressive proliferation of B. burgdorferi during I. scapularis nymphal feeding (10, 11), we confirmed the gradual decrease in B. afzelii spirochetes in the midguts of feeding I. ricinus nymphs using confocal immunofluorescence microscopy. In contrast, a parallel examination of the salivary glands from the same nymphs demonstrated that no spirochetes were detected in this tissue at any stage of feeding (Fig. 4).
Ability of B. afzelii spirochetes to develop a persistent infection in mice increases with feeding time.
It is generally known that the risk of acquiring Lyme disease increases with the length of tick feeding (5). In subsequent experiments, we focused on the infectivity of B. afzelii transmitted via I. ricinus nymphs. To determine the minimum length of tick attachment time required to establish a permanent infection in mice, B. afzelii-infected nymphs were allowed to feed on mice for 24, 48, and 72 h (10 nymphs per mouse). Mouse infection was assessed in ear biopsy specimens 3 weeks after tick removal. The ability of B. afzelii spirochetes to promote a persistent infection increased with the length of tick attachment. All mice exposed to the bite of B. afzelii-infected ticks for 24 h remained uninfected, whereas 8/10 mice exposed for 48 h and 10/10 mice exposed for 72 h became infected. These results show that the time interval between 24 and 48 h of exposure to the B. afzelii-infected tick is critical for the development of a systemic murine infection.
B. afzelii spirochetes are already present in the murine dermis on the first day of tick feeding. The delay in development of a B. afzelii infection in mice may support the notion that the spirochetes are still travelling toward the tick salivary glands during the first day after attachment. To test this hypothesis, we determined the number of B. afzelii organisms in murine skin biopsy specimens from the tick feeding site at time intervals of 24, 48, and 72 h after feeding. Skin biopsy specimens from 9/10, 10/10, and 10/10 mice were PCR positive at time intervals of 24, 48, and 72 h, respectively. Analysis by qPCR further revealed that there were no significant differences in the number of spirochetes in skin samples at defined time intervals (Fig. 5A). This result was also confirmed by confocal microscopy, revealing clearly the presence of spirochetes in murine skin biopsy specimens during the first day of tick feeding ( Fig. 5B). Together with the rapid decrease in spirochetal number in nymphal midguts during feeding ( Fig. 3A and 4), these results imply that the migration of spirochetes to the host commences soon after the blood meal uptake.
Tick saliva does not protect the early B. afzelii spirochetes against host immunity. The apparent contradiction between the early entry of B. afzelii spirochetes into the vertebrate host and their delayed capability to develop a permanent infection supports the concept of the tick saliva's role in the successful dissemination and survival of spirochetes within the host body. In order to verify that tick saliva is essential for B. afzelii survival in mice, we designed and performed the following experiment. In experimental group 1, uninfected I. ricinus nymphs (white labeled) were allowed to feed simultaneously with B. afzelii-infected nymphs (red labeled) at the same feeding site. After 24 h of cofeeding, B. afzelii-infected nymphs were removed, while uninfected ticks were fed on mice until repletion and served as a source of saliva. In control group 1, B. afzelii-infected nymphs fed for 24 h without any support of uninfected ticks. In control group 2, B. afzelii-infected nymphs were allowed to feed until repletion. Four weeks later, B. afzelii infections in ear, heart, and urinary bladder biopsy specimens were examined by PCR. No infection was detected in any of the examined tissues in experimental and control group 1, where the infected ticks fed for only 24 h. In contrast, all tissues were PCR positive in control group 2, where the infected nymphs fed until repletion (Fig. 6A). These results revealed that the presence of uninfected ticks and their saliva is not sufficient to protect early spirochetes against elimination by the host immune system.
A possible explanation of this unanticipated result is that unlike those of the uninfected tick, the salivary glands of Borrelia-infected ticks express a different spectrum of molecules that assist their transmission and survival within the vertebrate host (17-19). Therefore, we also examined the protective effect of saliva from Borrelia-infected nymphs. The experimental setup was the same as that described above, with one exception: in experimental group 2, nymphs infected with a different strain of B. burgdorferi were allowed to feed until repletion next to B. afzelii-infected nymphs that were removed after 24 h. In control group 3, B. afzelii-infected and B. burgdorferi-infected nymphs were allowed to feed until repletion. Four weeks after repletion, mice were specifically examined for the presence of one or both Borrelia strains using rrs-rrlA intergenic spacer (IGS) PCR amplification. All mice in experimental group 2 were positive for B. burgdorferi, while B. afzelii was not detected in any of the analyzed murine tissues. All mice in control group 3 tested positive for both B. afzelii and B. burgdorferi (Fig. 6B). This result implies that the saliva from B. burgdorferi-infected ticks also was not capable of ensuring survival of B. afzelii transmitted to mice at the early feeding stage.
Infectivity by B. afzelii is gained in the midgut and changes during nymphal feeding. Another possible explanation for the delayed capability of B. afzelii to infect mice was that infectivity of the spirochetes changed during the course of nymphal feeding. To test the infectivity of B. afzelii during different phases of nymphal feeding, B. afzelii-containing guts were dissected from unfed I. ricinus nymphs and nymphs fed for 24 h, 48 h, and 72 h and subsequently injected into C3H/HeN mice (5 guts/mouse). B. afzelii spirochetes from unfed nymphs were not infectious for mice. Spirochetes from nymphs fed for 24 h infected 3 out of 5 inoculated mice, and all mice became infected after the injection of spirochetes from nymphs fed for 48 h. Interestingly, only 1 out of 5 mice inoculated with spirochetes from nymphs fed for 72 h established B. afzelii infection. This result suggests that the capability of B. afzelii spirochetes to infect mice is gained in the tick gut and peaks at about the 2nd day of feeding.
Infectivity of B. afzelii is linked to differential gene expression during tick feeding and transmission. Previous research demonstrated that transmission of B. burgdorferi from I. scapularis to the host is associated with changes in expression of genes encoding outer surface proteins OspA and OspC or the fibronectin-binding protein BBK32 (7, 20-22). In order to examine whether the infectivity of B. afzelii depends on expression of orthologous genes, we performed qPCR analysis to determine the status of ospA, ospC, and bbk32 expression by B. afzelii spirochetes in unfed and feeding I. ricinus nymphs as well as in murine tissues 4 weeks postinfection. The gene encoding OspA was abundantly expressed in unfed ticks, downregulated during tick feeding, and hardly detectable in mice. The B. afzelii ospC gene was weakly expressed in unfed I. ricinus nymphs. Its expression steadily increased during feeding, with the highest levels of ospC mRNA at the 3rd day of feeding. Significant ospC expression was also detected in mice with a permanent B. afzelii infection. Similarly, a gradual upregulation of bbk32 was evident with the progress of tick feeding, and gene transcription was fully induced during mammalian infection (Fig. 7).
DISCUSSION
Understanding the dynamics of Borrelia spirochete transmission is crucial for development of strategies for preventing Lyme disease. Recently, we managed to implement a reliable transmission model for European Lyme disease that involves the vector I. ricinus and the most common causative agent of borreliosis in Europe, B. afzelii spirochetes. This allowed us to quantitatively track the growth kinetics and infectivity of B. afzelii during the I. ricinus life cycle and compare the results to data known for the I. scapularis/B. burgdorferi model. In nature, infection is acquired by larval or nymphal ticks feeding on an infected host. Absolute quantification of B. afzelii spirochetes during larval development and molting to nymphs revealed that I. ricinus larvae imbibe relatively low spirochete numbers (~600 per tick). The number of B. afzelii organisms then gradually increases during larval molting and reaches its maximum of about 20,000 spirochetes per tick 2 weeks after molting to nymphs. The level then stabilizes at about 10,000 spirochetes in starving nymphs (Fig. 2). This course of spirochetal burden is roughly in line with the data reported for I. scapularis/B. burgdorferi (6). However, compared to our observations, these authors described a dramatic decrease in B. burgdorferi numbers during I. scapularis molting. They speculated that it was due to depleted amounts of N-acetylglucosamine, an important building block of integumentary chitin but also a key component for spirochetal development. The limited availability of other nutrients might also be the reason for halted proliferation of spirochetes in molted nymphs. With its adoption of a parasitic lifestyle, the bacterium is an auxotroph for all amino acids, nucleotides, and fatty acids. It also lacks genes encoding enzymes for the tricarboxylic acid cycle and oxidative phosphorylation (23,24). Therefore, Borrelia spirochetes in the tick midgut are completely dependent on nutrients derived from ingested blood.
A striking difference between I. ricinus/B. afzelii and I. scapularis/B. burgdorferi was observed in spirochete numbers in the nymphal midgut during feeding. We found that B. afzelii numbers dramatically decrease from ~10,000 spirochetes present in flat I. ricinus nymphs to only ~700 spirochetes in nymphs fed for 3 days (Fig. 3A). This result is in sharp contrast with the data previously published for I. scapularis/B. burgdorferi. Using antibody-based detection, De Silva and Fikrig demonstrated that the total number of B. burgdorferi organisms increased from several hundred in starved nymphs to almost 170,000 spirochetes on the 3rd day of nymphal feeding (10). Later, these data were confirmed in a qPCR study showing that B. burgdorferi spirochetes in tick midguts increased 6-fold, from about 1,000 before attachment to about 6,000 at 48 h after attachment (11). The observation that numbers of spirochetes in ticks decrease as nymphs acquire their blood meal is intriguing, especially as it goes against what has been observed previously. The apparent drop in numbers during nymph feeding coincides with increased blood volume. Moreover, not all DNA extraction methods remove inhibitory contaminants. To test whether the qPCR can be inhibited by blood components or influenced by increased blood volume, we performed a spike-in control experiment which revealed that spirochetal decrease during nymphal feeding is due to the transmission of spirochetes and not inhibition or increased blood volume (Fig. 3B).
It is commonly known that the risk of Lyme disease increases with the length of time a tick is attached. It was stated that I. scapularis ticks infected with B. burgdorferi removed during the first 2 days of attachment do not transmit the infection (11,20). Our data show that B. afzelii spirochetes require less time to establish a permanent infection. Most mice became infected by 48 h of attachment. This is in agreement with the previously published results showing that B. afzelii-infected I. ricinus nymphs transmit the infection earlier than B. burgdorferi-infected ticks (13). Nevertheless, quantification by qPCR as well as microscopic examination of B. afzelii in the mouse dermis revealed that B. afzelii spirochetes enter the host earlier than they are able to develop a systemic infection (Fig. 5). This is in agreement with their significant decrease in the tick midgut during feeding ( Fig. 3A and 4) and suggests that B. afzelii spirochetes leave the nymphs as early as the first day of feeding. The presence of spirochetes in mouse dermis prior to becoming infectious was also reported for I. scapularis/B. burgdorferi. Ohnishi et al. observed noninfectious spirochetes in skin samples from mice that were exposed to B. burgdorferi-infected I. scapularis nymphs for less than 53 h (20). Moreover, Hodzic et al. also reported the presence of B. burgdorferi spirochetes in four out of eight mice 24 h after I. scapularis attachment (25). These data suggest that Borrelia spirochetes invade the host at very early time points of tick feeding, but early spirochetes are not able to develop a systemic infection. There could be two explanations for this observation. First, bioactive molecules present in tick saliva are crucial for successful dissemination and survival of spirochetes within the host body. Therefore, the early spirochetes cannot colonize the host without sufficient protection and support of the tick saliva (26,27). Second, early spirochetes that are transmitted to the vertebrate host are not infectious. A substantial body of work has been performed to elucidate the various tick bioactive molecules, mainly comprising a complex cocktail of salivary proteins that dampens the host's defenses against blood loss and the development of inflammatory and complement reactions at the feeding site (28). Several tick molecules have been suggested to be crucial for Borrelia acquisition in ticks and transmission to the next host during subsequent feeding (reviewed in reference 29). To test the role of tick saliva in survival of early spirochetes, we performed a cofeeding experiment in which the early B. afzelii spirochetes were under the protection of uninfected ticks or ticks infected with B. burgdorferi (Fig. 6). This experiment clearly showed that the presence of tick saliva is not sufficient for protection and survival of early spirochetes, as all mice remained uninfected with B. afzelii spirochetes. Therefore, we tested how infectivity of B. afzelii changes during tick feeding. A number of studies provide solid evidence that Borrelia spirochetes change expression of their surface antigens during feeding and transmission to the host, making it possible for spirochetes to specifically adapt to the tick or the host environment as required (7,30,31). Changes in gene expression of our model spirochete seem to be the main event that promotes increasing infectivity during tick feeding. 
Borrelia afzelii spirochetes in unfed ticks showed high levels of expression of ospA and negligible expression of ospC and bbk32. In this tick model, spirochetes were not infectious for mice. As feeding progressed, ospA was downregulated and ospC and bbk32 were upregulated, which correlated with increasing infectivity of B. afzelii. The highest level of infection was observed in mice inoculated with spirochetes from nymphs fed for 48 h. By this time, all mice had developed the infection. Interestingly, spirochetes from nymphs fed for 72 h infected only one out of five mice. This decrease is likely associated with a concomitant, substantially reduced number of B. afzelii organisms in the midguts of nymphs fed for 3 days (Fig. 3A and 4). Similar findings also were reported for B. burgdorferi. It was demonstrated that viable B. burgdorferi organisms in unfed I. scapularis nymphs are highly attenuated in their ability to infect mice relative to spirochetes obtained from recently fed ticks. This finding suggests that tick feeding induces critical changes that specifically prepare the spirochete for infection of the mammalian host (32).
The route of Borrelia spirochete transmission has been broadly discussed since its discovery. In 1984, Burgdorfer suggested that spirochetal development in most ticks (I. scapularis and I. ricinus) occurs in the midgut. Additional tissues, including salivary glands, were considered to be free of spirochetes in most of the ticks. It was suggested that transmission occurs by regurgitation of infected gut contents or via saliva by ticks with a generalized infection (4). Benach et al. presented similar findings in their extensive histological study. They stated that B. burgdorferi organisms are able to enter the hemocoel during the midfeeding period and develop a systemic infection in the hemolymph and central ganglion. However, B. burgdorferi organisms were never seen within the lumen of the salivary gland or attached to cells of the salivary acini (2). The salivary route of Lyme disease transmission came into consideration in 1987, when Ribeiro and colleagues reported the presence of spirochetes in saliva of pilocarpine-treated ticks (3), and then was broadly accepted after microscopic detection of spirochetes within the salivary glands and ducts of fully fed I. scapularis nymphs (33). Nevertheless, the spirochete numbers present in salivary glands of I. scapularis nymphs are minuscule and hardly detectable (34,35).
In our study, we were not able to detect B. afzelii spirochetes in the salivary glands at any stage of tick feeding. The absence of spirochetes in salivary glands is surprising, since large numbers of spirochetes were supposed to pass from the midgut to the feeding lesion during the three-day course of nymphal feeding. Either the gland-associated spirochetes were not detectable by the chosen method, or these findings point to an alternative route of B. afzelii transmission. We suggest that active reverse migration of motile B. afzelii spirochetes from the midgut to the mouthpart should be further tested as a possible alternative to the traditional salivary transmission route. The idea of B. afzelii transmission avoiding I. ricinus hemocoel and salivary glands also is indirectly supported by our recent research showing that silencing of tick immune molecules or elimination of phagocytosis in tick hemocoel by injection of latex beads had no obvious impact on B. afzelii transmission (36-38).
From our results, we propose the following mechanism of B. afzelii transmission. Borrelia afzelii in flat I. ricinus nymphs represents a relatively abundant population of spirochetes. Once the tick finds a host, B. afzelii organisms immediately start their transmission to the host. B. afzelii also seems to be less dependent on its tick vector. The main requirement for successful host colonization is the change in outer surface protein expression that occurs in the tick gut during the course of feeding. Spirochetes switched to the proper, vertebrate mode are then able to survive within the host even if the tick is not present. The 24-to 48-h time window between tick attachment and transmission of infectious spirochetes is the critical period in the whole process. Our findings suggest that salivary delivery as well as alternative transmission routes should be tested in future studies as possible mechanisms of transmission of different Borrelia species. Better understanding of the transmission cycles forms a basis for preventive and therapeutic strategies against Lyme disease.
MATERIALS AND METHODS
Laboratory animals. Ixodes ricinus larvae and nymphs were obtained from the breeding facility of the Institute of Parasitology, Biology Centre, Czech Academy of Sciences. Ticks were maintained in wet chambers with a humidity of about 95%, temperature of 24°C, and day/night period set to 15/9 h. To prepare both infected and uninfected I. ricinus nymphs, the larvae were fed on either infected or uninfected mice and allowed to molt to nymphs, and after 4 to 6 weeks they were used for further experiments. Inbred, pathogen-free C3H/HeN mice (The Jackson Laboratory, Bar Harbor, ME) were used for the pathogen transmission experiments.
Nucleic acid isolation and cDNA preparation. DNA was isolated from individual larvae, nymphs, and murine tissues (ear, skin, heart, and urinary bladder) using a NucleoSpin tissue kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's protocol.
Total RNA was extracted from nymphs and murine tissues (ear and urinary bladder) using a NucleoSpin RNA kit (Macherey-Nagel) according to the manufacturer's protocol. Isolated RNA (1 µg) served as a template for reverse transcription into cDNA using a Transcriptor high-fidelity cDNA synthesis kit (Roche, Basel, Switzerland). All cDNA preparations were prepared in biological triplicates.
PCR. Detection of spirochetes in ticks, as well as in murine tissues, was performed by nested PCR amplification of a 222-bp fragment of a 23S rRNA gene (40). PCR contained 12.5 µl of FastStart PCR master mix (Roche), 10 pmol of each primer, template (4 µl of DNA for the first round, 1-µl aliquot of the first PCR product in the second round), and PCR water up to 25 µl. Primers and annealing temperatures are listed in Table 1.
Differentiation of B. afzelii and B. burgdorferi strains was performed by nested PCR amplifying a part of the rrs-rrlA IGS region (41). Reaction conditions were the same as those described above, and primers and annealing temperatures are listed in Table 1.
qPCR. Total spirochete load was determined in murine and tick DNA samples by quantitative real-time PCR (qPCR) using a LightCycler 480 (Roche). The reaction mixture contained 12.5 µl of FastStart universal probe master (Rox) (Roche), 10 pmol of primers FlaF1A and FlaR1, 5 pmol of TaqMan probe Fla Probe1 (42) (Table 1), 5 µl of DNA, and PCR water up to 25 µl. The amplification program consisted of denaturation at 95°C for 10 min, followed by 50 cycles of denaturation at 95°C for 15 s and annealing plus elongation at 60°C for 1 min.
Quantification of murine β-actin was performed using MmAct-F and MmAct-R primers and a MmAct-P TaqMan probe (43) (Table 1). Reaction and amplification conditions were the same as those described above. The spirochete burden in murine tissues was expressed as the number of spirochetes per 10^5 murine β-actin copies. The spirochete burden in ticks was calculated as the total number of spirochetes in the whole tick body. cDNAs from B. afzelii-infected I. ricinus nymphs as well as murine tissues served as templates for quantitative expression analyses by relative qPCR. The reaction mixture contained 12.5 µl of FastStart universal SYBR green master, Rox (Roche), 10 pmol of each primer (Table 1), 5 µl of cDNA, and PCR water up to 25 µl. The amplification program consisted of denaturation at 95°C for 10 min, followed by 50 cycles of denaturation at 95°C for 10 s, annealing at 60°C for 10 s, and elongation at 72°C for 10 s. Relative expression of ospA, ospC, and bbk32 was normalized to that of flaB using the ΔΔCT method (44).
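As a worked illustration of the two normalizations described above, the short Python sketch below computes a spirochete burden per 10^5 murine β-actin copies and a ΔΔCT relative expression value normalized to flaB. This is not the authors' analysis code; the function names and all numerical inputs are invented placeholders.

```python
# Minimal sketch of the qPCR normalizations described in the text (illustrative only).

def burden_per_1e5_actin(spirochete_copies: float, actin_copies: float) -> float:
    """Spirochete burden expressed per 10^5 copies of murine beta-actin."""
    return spirochete_copies / actin_copies * 1e5

def relative_expression(ct_target: float, ct_flab: float,
                        ct_target_cal: float, ct_flab_cal: float) -> float:
    """Relative expression by the delta-delta-CT method, normalized to flaB."""
    dd_ct = (ct_target - ct_flab) - (ct_target_cal - ct_flab_cal)
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # Placeholder CT values: e.g., ospC in a fed nymph versus an unfed (calibrator) nymph.
    print(burden_per_1e5_actin(spirochete_copies=350, actin_copies=2.1e6))
    print(relative_expression(ct_target=24.8, ct_flab=20.1,
                              ct_target_cal=30.5, ct_flab_cal=21.0))
```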
Spike-in experiment. To test whether the qPCR can be inhibited by blood components or influenced by increased blood volume, a spike-in control experiment was performed. Homogenates from unfed clean nymphs and nymphs fed for 24, 48, and 72 h (5 ticks/group) were spiked with defined amounts of B. afzelii spirochetes (2.3 × 10^7 spirochetes/homogenate). DNA from all homogenates then was isolated, and spirochete loads were quantified using methods described above.
Preparation of murine and tick tissues for confocal microscopy. Borrelia afzelii-infected I. ricinus nymphs were fed on mice for 24 h. Skin biopsy specimens from the tick feeding site then were dissected. Guts and salivary glands of unfed nymphs and nymphs fed for 24 h or 48 h or fully fed and infected with B. afzelii were dissected in phosphate buffer (30 nymphs/time point). Dissected tissues were immersed in 4% paraformaldehyde for 4 h at room temperature. Tissues were then washed three times for 20 min each time in phosphate-buffered saline (PBS) and permeabilized with 1% Triton X-100 (Tx) in PBS containing 1% bovine serum albumin (Sigma) at 4°C overnight. The next day, Borrelia spirochetes in
tissues were stained with primary rabbit anti-B. burgdorferi antibody (1:200; Thermo Fisher Scientific) in PBS-Tx (0.1% Tx in PBS) for 4 h at room temperature. Tissues were then washed three times for 20 min each time in PBS-Tx and stained with Alexa Fluor 488 goat anti-rabbit secondary antibody (Life Technologies, Camarillo, CA, USA), 1:500 in PBS-Tx, for 2 h at room temperature. Tissues were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) for 10 min and washed two times for 10 min each time in PBS. Slides then were mounted in DABCO and examined using an Olympus FluoView FV1000 confocal microscope (Olympus, Tokyo, Japan). Whole salivary glands were thoroughly scanned for the presence of spirochetes (12 to 20 fields of view per salivary gland).
Preparation of murine tissues for histology. Borrelia afzelii-infected or clean I. ricinus nymphs were fed on mice until repletion (5 mice/group, 10 nymphs/mouse). Four weeks later, murine tissues (skin, heart, and urinary bladder) from B. afzelii-infected and uninfected mice were fixed in 10% buffered formalin and embedded in paraffin using routine procedures. Three-µm thin sections were cut and stained with hematoxylin and eosin. Slides were examined using an Olympus BX40 light microscope (Olympus).
Needle inoculation of infected tick midguts. B. afzelii-containing guts from unfed I. ricinus nymphs and nymphs fed for 24 h, 48 h, and 72 h were dissected and suspended in BSK-H medium (Sigma). Guts were subsequently injected into C3H/HeN mice (5 guts/mouse in a 200-µl volume, 5 mice/time point).
Statistical analysis. Data were analyzed by GraphPad Prism 6 for Windows, version 6.04, and an unpaired Student's t test was used for evaluation of statistical significance. A P value of <0.05 was considered statistically significant. Error bars in the graphs show the standard errors of the means.
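As a hedged illustration only, the unpaired Student's t test used above can be reproduced outside GraphPad Prism; the Python/SciPy sketch below uses invented placeholder counts rather than the study data.

```python
# Illustrative re-analysis sketch: unpaired (equal-variance) Student's t test,
# as used in the study, on two groups of per-tick spirochete counts.
# The numbers below are invented placeholders, not data from this work.
from scipy import stats

group_a = [9800, 12500, 11200, 8900, 13100]   # e.g., spirochetes per unfed nymph
group_b = [650, 820, 540, 910, 700]           # e.g., spirochetes per nymph fed for 72 h

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # equal_var=True by default (Student's t)
print(f"t = {t_stat:.2f}, P = {p_value:.4g}, significant at P < 0.05: {p_value < 0.05}")
```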
Mutations in the Human AAA+ Chaperone p97 and Related Diseases
A number of neurodegenerative diseases have been linked to mutations in the human protein p97, an abundant cytosolic AAA+ (ATPase associated with various cellular activities) ATPase that functions in a large number of cellular pathways. With the assistance of a variety of cofactors and adaptor proteins, p97 couples the energy of ATP hydrolysis to conformational changes that are necessary for its function. Disease-linked mutations, which are found at the interface between two main domains of p97, have been shown to alter the function of the protein, although the pathogenic mutations do not appear to alter the structure of individual subunits of p97 or the formation of the hexameric biological unit. While exactly how pathogenic mutations alter the cellular function of p97 remains unknown, functional, biochemical and structural differences between wild-type and pathogenic mutants of p97 are being identified. Here, we summarize recent progress in the study of p97 pathogenic mutants.
MSP1 (OMIM #167320, also called inclusion body myopathy with Paget's disease of bone and frontotemporal dementia, IBMPFD) is an autosomal dominant disorder, meaning a single copy of the altered gene from either parent is sufficient to cause the disease. There are also cases of new mutations occurring in individuals with no family history of the disorder. The disease is traced to mutations in the gene that encodes p97, also known as VCP (valosin-containing protein) (Kimonis et al., 2000). MSP1 can affect multiple tissues including muscles, bones, and brain (Benatar et al., 2013;Kim et al., 2013). The first symptom of the disease is often muscle weakness (IBM, inclusion body myopathy), which typically appears late in life, when the patient is 50 to 60 years old, and is found in more than 90% of cases. Half of the cases develop Paget's disease of the bone (PD), which interferes with the recycling process in which new bone tissue replaces old bone, causing abnormal bone formation. Bone pain, particularly in the hips and spine, is common. One-third of the cases also involve a brain condition called frontotemporal dementia (FTD). This disorder progressively damages parts of the brain that control reasoning, personality, social skills, speech and language, leading to personality changes, a loss of judgment and inappropriate social behavior. So far, more than 20 missense amino acid substitutions on p97 have been identified in MSP1 patients, all located in the N-terminal and D1 domains of the protein and none is found in the D2 domain (Figure 1A and Table 1).
Familial Amyotrophic Lateral Sclerosis (FALS)
(Figure 1 caption fragment, panels B and C: (B) [...] (Davies et al., 2008). The N domain is in purple, the D1 domain in blue, and the D2 domain in gold. (C) The top view of the ND1 p97 structure showing the location of pathogenic mutations. Selected pathogenic mutations (residues I27, R93, I126, P137, R155, R191, L198, I206, A232, T262, N387, N401, A439) are represented as yellow spheres on the ribbon diagram of ND1 p97 with ADP bound (PDB: 1E32, Zhang et al., 2000).)
ALS or Lou Gehrig's disease is a progressive neurodegenerative disease that affects the motor neurons in the brain and spinal cord. When these nerve cells die, the brain loses the ability
to control muscle movement, causing complete paralysis in late stages of the disease and eventually death. In about 90% of cases, ALS is sporadic, which means it is not inherited. Pathological hallmarks of ALS are pallor of the corticospinal tract due to loss of motor neurons, the presence of ubiquitin-positive inclusions and the deposition of pathological TDP-43 aggregates. The cause of this sporadic ALS is not well understood; it may be due to a combination of environmental and genetic risk factors. About 10% of cases are considered "familial ALS" (FALS, OMIM #613954). In these cases, more than one individual in the family develops ALS and sometimes family members have FTD as well. Mutations in at least 18 genes have been identified in FALS cases, with mutations in the p97 gene contributing <1-2% (Table 1) (Johnson et al., 2010;Koppers et al., 2012;Kwok et al., 2015).
Charcot-Marie-Tooth Disease, Type 2Y (CMT2Y)
CMT2Y (OMIM #616687) is an autosomal dominant axonal peripheral neuropathy characterized by distal muscle weakness and atrophy associated with length-dependent sensory loss. The disease CMT is named after the three physicians who first accurately described it in 1886: Jean-Martin Charcot and Pierre Marie in France, and Howard Henry Tooth in England. Its principal features include slowly progressive muscular atrophy, which initially involves the feet and legs, but does not affect the upper extremities until several years later. CMT is a clinically and genetically heterogeneous disorder and is divided into subtypes based on genetics, pathology, and electrophysiology of the disease (Dyck and Lambert, 1968). The subtype CMT2Y has missense mutations in the p97 gene, which were identified in patients (Gonzalez et al., 2014;Jerath et al., 2015) (Table 1). As most patients with CMT2Y do not obtain a genetic diagnosis, the number of cases having mutations in p97 may be higher than expected.
(Table 1 fragment: references Watts et al., 2004; Hübbers et al., 2007; Kimonis et al., 2008a; Viassolo et al., 2008; Stojkovic et al., 2009; González-Pérez et al., 2012; and R155C, 463C>T, N domain, associated with IBM, PDB, FTD, and ALS, references Watts et al., 2004; Schröder et al., 2005; Guyant-Maréchal et al., 2006; Gidaro et al., 2008; González-Pérez et al., 2012.)
STRUCTURAL AND BIOCHEMICAL DIFFERENCES BETWEEN WILD-TYPE AND PATHOGENIC p97
Structure of p97
P97 is a Type II AAA+ ATPase (two AAA ATPase domains) and a homo-hexamer with each subunit consisting of three main domains: the N-terminal domain (N domain) followed by two tandem ATPase domains (D1 and D2 domains), which are connected by two short polypeptides (N-D1 and D1-D2 linker). Both the D1 and D2 domains possess all essential sequence elements (Walker A and B motifs) for ATP hydrolysis and share high amino acid sequence identity. The N domains are known for interacting with various cofactors and adaptor proteins.
Cofactors of p97 are defined as those proteins that are necessary for p97 function, whereas adaptors are those that target p97 to different cellular locations. At first glance, a p97 hexamer appears to have two rings of different sizes stacked on top of each other. The crystal structure of full-length wild-type p97 (FL p97) reveals that the two ATPase domains form two concentric rings, called D1 and D2 rings, and the N domains are attached to the periphery of the D1 ring (DeLaBarre and Brunger, 2003) (Figure 1B). The hexameric architecture of p97 is maintained by interactions among the D1 domains (Wang et al., 2003), as isolated D2 domains are prone to form heptamers (Davies et al., 2008). This hexameric structure of p97 is very stable and can withstand treatment of up to 6 M urea and its assembly does not require the addition of nucleotide (Wang et al., 2003). More than 20 amino acid mutations have been identified in p97 from MSP1 or IBMPFD patients and these mutations appear to be randomly scattered throughout the sequence of the N and D1 domain of p97 (Figure 1A). However, when mapped to the structure of FL p97, these MSP1 mutations were found exclusively at the interface between the N and D1 domain (Figure 1C). None was found at the sites where ATP hydrolysis occurs. Structural studies using X-ray crystallography show the pathogenic mutants retain a hexameric ring structure and share identical overall folding with the wild-type protein (Tang et al., 2010).
Amount of Pre-bound ADP
One important characteristic of p97 related to binding of nucleotides is the presence of pre-bound ADP at the D1 domain, which was hinted at by p97 crystallization experiments in the presence of different types of nucleotides. Crystallographic efforts with wild-type p97 yielded ADP invariably bound to the D1 domain, while various types of nucleotides bound to the D2 domain (Zhang et al., 2000;DeLaBarre and Brunger, 2003), leading to the misconception that the D1 domain was incapable of exchanging for different types of nucleotides. Subsequent experiments led to the realization that the nucleotide state at the D1 domain of p97 is tightly regulated (Davies et al., 2005). Without the addition of any ADP during the course of purification, isolated wild-type p97 was shown to have tightly bound ADP at the D1 domain with at least 3 molecules of ADP per p97 hexamer (DeLaBarre and Brunger, 2003;Briggs et al., 2008;Tang and Xia, 2013). This phenomenon is referred to as the pre-bound ADP at the D1 domain. Apparently, a subset of D1 domains in the hexameric p97 is occupied by ADP, thus preventing saturation of all D1 sites with ATP, which has a higher binding affinity for an empty D1 site (Tang et al., 2010;Tang and Xia, 2013). Thus, structural studies of the conformational change of wild-type p97, especially at low resolution where the nucleotide state is uncertain, should take the feature of the pre-bound ADP into account when interpreting the results.
Compared with wild-type p97, pathogenic mutants have less pre-bound ADP (Tang and Xia, 2013). More importantly, these mutants are not able to tightly regulate the nucleotide state of the D1 domain, as does the wild-type p97. They allow ATP to displace pre-bound ADP. Consequently, a uniform binding of ATP to the D1 sites can be observed (Tang et al., 2010;Tang and Xia, 2013).
Communication among Domains and Subunits
In each biological unit of p97, there are six identical subunits, containing a total of 18 main domains. The proper function of p97 therefore relies on a coordinated interplay among these domains. For instance, the conformation of the N domain has a strong influence over the ATPase activity of p97. Fixing the N domain position by introducing a disulfide bond between the N and the D1 domain reduces p97 ATPase activity (Niwa et al., 2012). The binding of adaptor proteins such as p47 and p37 to the N domain alter the overall ATPase activity of p97 (Meyer et al., 1998;Zhang et al., 2015). On the other hand, the nucleotide states of the D1 domains control the conformations of the N domain of p97 (Tang et al., 2010;Banerjee et al., 2016;Schuller et al., 2016).
The binding of ATP in the D1 domain is required for the activity of the D2 domain, and vice versa (Ye et al., 2003;Nishikori et al., 2011;Tang and Xia, 2013). One of the possible mechanisms of communication between these two ATPase domains is through the D1-D2 linker. This 22-residue linker peptide contains a highly conserved N-terminal half that appears to be a random loop and extends to the vicinity of both the D1 and D2 nucleotide-binding sites, as illustrated in the FL p97 structures (Davies et al., 2008). The inclusion of the D1-D2 linker to the N-D1 truncate of p97 activates the ATPase activity of the D1 domain (Chou et al., 2014;Tang and Xia, 2016).
Among the three domains of a p97 subunit, the D1 domain seems to play a role consistent with (1) maintaining the hexameric architecture of p97 (Wang et al., 2003), (2) driving the conformational change of the N domain (Tang et al., 2010;Banerjee et al., 2016;Schuller et al., 2016), (3) regulating the activity of the D2 domain (Tang and Xia, 2013), and (4) communicating with and controlling the nucleotide states of D1 domains of neighboring subunits (Tang and Xia, 2013, 2016;Zhang et al., 2015). All these suggest an intricate communication network centered on the D1 ring of the hexameric p97.
Instead of causing structural changes to the protein, pathogenic p97 mutations appear to alter the function of p97 by perturbing the communication network between domains. Our experiments have shown that while the domain communication within an individual subunit remains undisturbed, communication between neighboring subunits in pathogenic mutants has changed, leading to uncoordinated nucleotide binding among different subunits (Tang et al., 2010;Tang and Xia, 2013). Specifically, the mutations weaken the ADP-binding affinity at the D1 domain and thus relax the tight regulation of the nucleotide states at the D1 domains (Tang and Xia, 2013). As a result, more ATPase domains of mutants are engaged in ATP hydrolysis compared to wild-type p97, giving rise to an apparent more active protein with higher ATPase activity (Halawani et al., 2009;Manno et al., 2010;Tang et al., 2010;Niwa et al., 2012).
Nucleotide-Driven Conformational Changes
It is generally believed that p97 functions as a molecular extractor, pulling damaged or unwanted proteins from large molecular or cellular assemblies. It does so by undergoing ATP-dependent conformational changes to generate mechanical forces necessary for substrate extraction (Acharya et al., 1995;Latterich et al., 1995;Rabouille et al., 1995;Xu et al., 2011;Ramanathan and Ye, 2012;Xia et al., 2016). Although exactly how p97 extracts substrate from a large molecular assembly remains unclear, progress has been made in identifying different conformations. Low-resolution cryo-EM studies showed a moderate rotational movement between the D1 and D2 rings in association with changes in the size of the D2 central pore in response to the presence of different nucleotide (Rouiller et al., 2002). However, a similar study by another group suggested a different domain movement . The insufficient resolution to determine the exact nucleotide state in each domain of p97 in these studies could be the cause of the inconsistency.
Earlier crystallographic studies showed the D1 domains are always bound with ADP, regardless of the presence of different types of nucleotides in solution, and the N domains are in a conformation that is coplanar with the D1 ring (Zhang et al., 2000;DeLaBarre and Brunger, 2003;Davies et al., 2008). This N domain conformation when the D1 domain is occupied with ADP is termed the Down-conformation (Figure 2) (Tang et al., 2010). On the other hand, the nucleotide-binding state in the D2 domains is determined by what is present in solution (either bound ADP, AMP-PNP, or ADP-AlFx). Therefore, these crystallographic data can only reveal the conformational changes associated with the nucleotide state at the D2 domain. The D2 ring undergoes a rotation relative to the D1 ring and size of the D2 central pore changes during ATP cycle, but whether the binding or the hydrolysis of ATP triggers the opening remains controversial (Davies et al., 2005;Pye et al., 2006;Banerjee et al., 2016;Hänzelmann and Schindelin, 2016b;Schuller et al., 2016). It is worth pointing out that, for the same nucleotide state, non-uniform domain conformation is observed in subunits within a crystallographic asymmetric unit, and the magnitude of such a difference is comparable to that observed between different nucleotide states (Davies et al., 2008). It is unclear if the conformational differences observed in various nucleotide states of the D2 domain represent actual changes in solution.
Recently, by genetically modifying some regions in the D2 domain, Hanzelmann and colleagues were able to determine the crystal structure of full-length p97 with both ATPase domains either empty or bound with ATPγS (non-hydrolyzing ATP analog) (Hänzelmann and Schindelin, 2016b). The binding of ATPγS opens the D2 pore and generates a rotational movement between the two concentric rings. However, questions remain concerning the physiological relevance of these observations, as the effect of these mutations on the function of p97 was not characterized.
Pathogenic mutations weaken the ADP binding interactions at D1 sites and alter the regulation imposed among neighboring subunits. Effects of these mutations, though very subtle, are sufficient to make these mutants achieve uniform N domain conformation or loss of asymmetry within the hexamer, which is a property that facilitates crystallographic studies. When ATPγS binds to the D1 sites of the N-D1 fragment of p97, the N domains move to a position above the D1 ring, which is termed the Up-conformation (Figure 2) (Tang et al., 2010). Such a nucleotide-dependent conformational switch has also been detected for only a subset of subunits in wild-type p97 in solution (Tang et al., 2010). The nucleotide-dependent conformational movement of the N domain has been confirmed by recent studies of full-length wild-type p97 using single particle cryo-EM (Banerjee et al., 2016;Schuller et al., 2016). Instead of having all six p97 subunits in the Up-conformation in the presence of ATPγS or AMP-PNP, Schuller and colleagues observed a distribution of N domain conformations, either in the Up- or the Down-conformation within a hexamer (Schuller et al., 2016). By contrast, Banerjee and colleagues reported only a single conformation, in which the N domains of all subunits were in the Up-conformation, despite the very weak EM density for the N domain (Banerjee et al., 2016). More interestingly, the crystal structure of full-length p97 with a genetically modified D2 domain showed that the N domain remains in the Down-conformation when the D1 domain is bound with ATPγS (Hänzelmann and Schindelin, 2016b). Thus, whether the six nucleotide-binding sites in the D1 ring bind ATP in a concerted manner leading to symmetrical N domain movement or in a sequential/random manner leading to an asymmetrical hexamer has yet to reach a consensus. However, the presence of tightly pre-bound ADP in the D1 domains of a subset of p97 subunits may have already suggested a non-uniform nucleotide binding of p97.
A model was proposed to illustrate the regulatory mechanism of ATP binding and hydrolysis in the D1 ring and how it might influence the ATPase activity of the D2 ring (Figure 3A) (Tang et al., 2010;Tang and Xia, 2013). In this model, there are four states for a subunit of a wild-type p97 hexamer, each representing one specific nucleotide-binding state. (1) There is an Empty state where no nucleotide is bound at the D1 site; the conformation of the N-domain is unknown (pink sphere). Note that for a wild-type p97 hexamer, only a subset of subunits is in the Empty state because of the pre-bound ADP. The N domains for those with pre-bound ADP are in the Down-conformation and are shown as pink spheres labeled with D.
(2) When ATP enters the D1 site (ATP state), it is only allowed in the Empty subunits and not allowed in those with pre-bound ADP. The subunits with ATP bound have their N domain adopt the Up-conformation (pink sphere labeled with T), which has been determined from the crystal structure of IBMPFD mutants (Tang et al., 2010). (3) The hydrolysis of ATP to ADP at the D1 domain brings the N domain back to the Down-conformation, which is supported by the crystallographic data from both wild-type p97 and IBMPFD mutants (Zhang et al., 2000;DeLaBarre and Brunger, 2003;Huyton et al., 2003;Tang et al., 2010). (4) Importantly, it was proposed that there are two ADP-bound states existing in equilibrium for a subunit: an ADP-locked and ADP-open state. Both ADP-open and ADP-locked states can coexist for different subunits in a p97 hexamer. The ADP-locked state is inspired by the presence of pre-bound ADP at the D1 site in the wild-type p97, which is difficult to remove (Davies et al., 2005;Briggs et al., 2008;Tang et al., 2010). The ADP-open state represents the situation where ADP has a reduced affinity to the D1 site ready to be exchanged. (5) It was also proposed that the D2 domain of a subunit is permitted to hydrolyze ATP only if its cognate D1 domain is occupied by ATP.
A major difference between the wild-type and mutant p97 was proposed to be the regulation of the inter-conversion or the equilibration between the ADP-open and ADP-locked state (Figure 3B). In the wild type, the equilibration favors the ADP-locked state, whereas in the mutant, it prefers the ADP-open state. This means, in the case of a wild-type p97 hexamer, that ATP can only get into a subset of D1 domains, driving corresponding N domains to the Up-conformation. This non-uniform nucleotide-binding state in the wild-type p97 in the presence of ATP generates an asymmetry in the N domain conformation in a hexameric p97. In p97 mutants, the equilibration between ADP-locked and ADP-open states is shifted toward the latter. As a result, a uniform nucleotide-binding state at the D1 domains and a synchronized N domain movement can be reached in the presence of a sufficiently high concentration of ATP, forming symmetrical hexamers. More importantly, this model implies that the function of p97 requires an asymmetry in the D1 nucleotide-binding state in a hexameric ring. We should also point out that a consequence of this model is that the p97 mutants are higher in ATPase activity, because there are more ATP molecules occupying the D1 sites, which is required for ATP hydrolysis in the D2 domain (Tang and Xia, 2013).
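To make the bookkeeping of this model explicit, the following Python sketch encodes its rules: ATP can enter only Empty or ADP-open D1 sites, the N domain adopts the Up-conformation only when its D1 site holds ATP, and a D2 domain is competent for hydrolysis only when its cognate D1 site holds ATP. This is an illustrative sketch rather than anything from the cited studies, and the locked-versus-open probabilities used to contrast wild type and mutant are invented, not fitted to any measurement.

```python
# Illustrative sketch of the subunit-state model described in the text.
import random

def hexamer(p_locked: float, seed: int = 0) -> list:
    """Start a 6-subunit D1 ring carrying pre-bound ADP; each site is locked with probability p_locked."""
    rng = random.Random(seed)
    return ["ADP_locked" if rng.random() < p_locked else "ADP_open" for _ in range(6)]

def exchange_and_bind_atp(ring: list) -> list:
    """With excess ATP present, Empty and ADP-open D1 sites end up ATP-bound; locked sites do not exchange."""
    return ["ATP" if s in ("empty", "ADP_open") else s for s in ring]

def n_domain_conformations(ring: list) -> list:
    """Up-conformation only for subunits whose D1 site is ATP-bound; otherwise Down."""
    return ["Up" if s == "ATP" else "Down" for s in ring]

def d2_competent_subunits(ring: list) -> int:
    """Number of subunits whose D2 domain is permitted to hydrolyze ATP (cognate D1 holds ATP)."""
    return sum(1 for s in ring if s == "ATP")

if __name__ == "__main__":
    wild_type = exchange_and_bind_atp(hexamer(p_locked=0.7))   # equilibration favors ADP-locked
    mutant = exchange_and_bind_atp(hexamer(p_locked=0.05))     # equilibration favors ADP-open
    for name, ring in (("wild type", wild_type), ("mutant", mutant)):
        print(name, ring, n_domain_conformations(ring),
              "D2-competent subunits:", d2_competent_subunits(ring))
```

Run with invented probabilities, the mutant ring tends to end up with all six D1 sites ATP-bound (a symmetric, all-Up hexamer with more D2-competent subunits), whereas the wild-type ring retains locked ADP sites and therefore an asymmetric N domain arrangement, mirroring the contrast described above.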
Although the role of the conformational changes observed in p97 during the ATP cycle in relation to its physiological function remains unclear, the opening and closing of the D2 pore as well as the up-and-down swinging motion of the N domain have consistently been observed. As experimental evidence increasingly points to a role played by p97 in extracting protein substrates from its interacting partners, the coordinated up-and-down motion of the N domain at the D1 ring and the opening and closing of the D2 ring within the hexamer during the ATP hydrolysis could conceivably generate a pulling force to extract protein substrates from various organelles. Taking ERAD as an example, p97 is recruited to the ER membrane via interaction between the N domain and adaptor proteins. The swinging movement of the N domain would create a pulling force to extract the protein substrates from the ER membrane. Conceivably, the generation of this pulling force requires a highly sophisticated coordination among the subunits of p97. As shown from biochemical and structural studies, individual subunits of pathogenic mutants fail to communicate, resulting in uniform movement of the N domain. This un-coordinated conformational change in pathogenic p97 may be why mutants fail to process protein substrates effectively, thus leading to accumulation of protein inclusions.
Interacting with Protein Partners
Over 30 different cofactor/adaptor proteins have been identified; they interact mostly with the N domain but in some cases the C-terminal tail of p97. These proteins either function as adaptors that recruit p97 to a specific subcellular compartment or substrate, or serve as cofactors that help in substrate processing. They are found in many different subcellular structures such as mitochondria, endoplasmic reticulum membrane, nuclear membrane, and Golgi body. Hence, their bindings lead p97 to function in different cellular pathways.
Several common binding domains or motifs, such as the UBX domain, the PUB-domain, and the VCP-interacting motif (VIM), have been found to interact with p97. Despite differences in structures among these binding motifs, most of them bind to the N domain at the interface between the two subdomains, as shown from crystal structures of these binary complexes (Figure 4). This observation provides an explanation for the mutually exclusive binding pattern observed biochemically among various p97-interacting proteins (Meyer et al., 2000;Rumpf and Jentsch, 2006). Intriguingly, while all six binding interfaces on the N domains of a hexameric p97 are available, crystal structures of the complexes showed the binding stoichiometry is not more than 3 molecules of adaptor proteins to 1 FL p97 hexamer (Dreveny et al., 2004;Hänzelmann and Schindelin, 2016a). Consistently, binding studies using the isothermal titration calorimetry (ITC) technique showed a similar effect. Indeed, the sharing of the same binding interface and the substoichiometric binding of the interacting protein to p97 led to the hierarchical binding model for p97 to fulfill specific cellular functions (Meyer et al., 2012).
The impact of pathogenic mutations on the interactions between p97 and adaptor proteins has been investigated. So far, there are no structural data in the literature that demonstrate the difference in adaptor protein binding between wild-type and mutant p97. Using isolated FL p97, it was shown biochemically that cofactors p37 and p47 regulate ATPase activity of p97 in a concentration-dependent manner. By contrast, mutant p97 lost this regulation although it still interacts with the cofactors (Zhang et al., 2015). Results derived from cell-based experiments from different groups are not always consistent (Fernández-Sáiz and Buchberger, 2010;Manno et al., 2010). For example, in one study, isolated mutant p97 exhibited the same binding as wild-type p97 toward the adaptor proteins p47, Ufd1-Npl4, and E4B, the human UFD-2 homolog. However, mutants in the same study showed impaired binding to ubiquitin ligase E4B in the presence of Ufd1-Npl4. In vivo pull-down experiments using HEK293 cells showed reduced binding toward the E4B and enhanced binding toward ataxin 3, thus resembling the accumulation of mutant ataxin 3 on p97 in spinocerebellar ataxia type 3 (Fernández-Sáiz and Buchberger, 2010). In another study, however, similar in vivo pull-downs were carried out, showing enhanced binding of the Ufd1-Npl4 pair by IBMPFD mutants but not for p47 (Manno et al., 2010). An increased amount of cofactor pair Ufd1-Npl4 was detected in association with mutant p97 (Fernández-Sáiz and Buchberger, 2010;Manno et al., 2010). However, no significant difference was found in the binding of the same adaptor to either wild-type or pathogenic mutants when using isolated protein for pull-down assays (Hübbers et al., 2007;Fernández-Sáiz and Buchberger, 2010). This inconsistency may be due to the difference in the N domain conformation, which depends on the nucleotide state at the D1 domain of p97. Such an effect can be demonstrated by the seven-fold decrease in the binding affinity of SVIP to pathogenic p97 in the presence of ATPγS. So far, two nucleotide-dependent conformations (the Up- and Down-conformations) of the N domain have been observed in p97. In both cases, the binding interface for adaptor proteins is available but orients differently. In the Up-conformation, the binding interface faces outward to the side of the hexameric ring, while in the Down-conformation, the binding interface faces down toward the D2 ring. As the sizes and shapes of adaptor proteins vary, it is conceivable that the binding of some adaptor proteins will be hindered by spatial restrictions caused by different N domain conformations.
FUNCTIONAL DEFECTS IN PATHOGENIC p97
The diverse biological roles played by p97 in various cellular activities, such as membrane fusion, DNA repair, and protein homeostasis, have been reported and extensively reviewed (Dantuma and Hoppe, 2012;Meyer et al., 2012;Yamanaka et al., 2012;Franz et al., 2014;Meyer and Weihl, 2014;Xia et al., 2016). These important functional roles are reflected by the sequence conservation of the protein and indicate that mutations in p97 would have severe functional consequences. Despite embryonic lethality in p97 knock-out mice (Müller et al., 2007) and accelerated MSP1 pathology in homozygous p97 mutant mice (Nalbandian et al., 2012), pathogenic mutations in p97 seem well tolerated and affect only a subset of its functions, as there is no evidence of developmental abnormalities in affected individuals (Kimonis et al., 2008b). This is consistent with the fact that MSP1 is a late-onset disease and that the clinical pathology of MSP1 seems to point to a defective function in maintaining protein homeostasis.
Pathological features in MSP1 patient samples include rimmed vacuoles found in muscle tissues that stain positive for p97 and ubiquitin (Watts et al., 2004) and nuclear inclusions in neurons, which also stained positive for p97 and polyubiquitin in brain tissues (Kimonis and Watts, 2005;Schröder et al., 2005). This common pathologic feature found in MSP1 affected tissues suggests a defective function of pathogenic p97 mutants in protein degradation/trafficking pathways. Similar phenotypes can be reproduced in in vitro cultured cells, either transfected with disease-associated p97 mutants (Weihl et al., 2006;Janiesch et al., 2007) or derived from patient tissues (Ritz et al., 2011). Moreover, studies using various animal models further strengthen the linkage between mutations in p97 and MSP1. Transgenic mice bearing a p97 mutation (R155H or A232E) display dominant-negative phenotypes similar to MSP1 patients (Weihl et al., 2007;Custer et al., 2010); mutant p97 (R155H) knock-in mice display progressive muscle weakness and other MSP1-like symptoms (Badadani et al., 2010).
One of the best studied cellular functions of p97 is endoplasmic reticulum-associated degradation (ERAD) (Meyer et al., 2012). Protein substrates in the ER are labeled with polyubiquitin chains, recognized, and subsequently retrotranslocated by p97 across the ER membrane to the cytosol, where they are degraded by the proteasome. Failure to clear these polyubiquitinated protein substrates leads to ER stress. It has been shown that MSP1 mutants have impaired ERAD, leading to accumulation of ERAD substrates (Weihl et al., 2006;Erzurumlu et al., 2013).
Another characteristic that sets pathogenic mutants apart from wild-type p97 is their failure to form a ternary complex with ubiquitylated CAV1 (Ritz et al., 2011). CAV1 (caveolin-1) is a main constituent of caveolae, small invaginations on the plasma membrane. The degradation of CAV1 through the endocytic pathway requires mono-ubiquitin modification (Haglund et al., 2003;Parton and Simons, 2007). During maturation, CAV1 first forms SDS-resistant oligomers that associate to form larger assemblies in a cholesterol-dependent manner during exit from the Golgi apparatus. P97 binds to a mono-ubiquitylated cargo substrate, CAV1, on endosomes and is critical for its transport to endolysosomes. Blocking p97 binding of CAV1 with MSP1-associated mutations or its protein segregase activity with the Walker B motif mutation or the DBeQ inhibitor leads to accumulation of CAV1 at the limiting membrane of late endosomes (Ritz et al., 2011).
Besides ubiquitin, TAR DNA-binding protein-43 (TDP-43) is also found in protein inclusions in MSP1-affected tissues (Neumann et al., 2007; Weihl et al., 2008). TDP-43, the major pathological protein in ALS and FTD (Neumann et al., 2006), is primarily localized in the nucleus (Wang et al., 2001) and has been suggested to play a role in transcriptional repression and other cellular processes (for reviews, see Wang et al., 2008; Buratti and Baralle, 2009). Although how TDP-43 enters the protein inclusions in tissue samples of MSP1 patients is unknown, TDP-43 is believed to be a substrate for either proteasomal or autophagic degradation (Caccamo et al., 2009; Wang et al., 2010), suggesting a role of p97 in autophagy, a degradation process involving the lysosomal machinery. The role of p97 in autophagy has been demonstrated in both mammalian and yeast cells, in which p97 has been found to be essential for the maturation of autophagosomes (Tresse et al., 2010). MSP1 mutants have also been observed to accumulate the autophagosome markers p62 and LC3-II (Ju et al., 2009; Vesa et al., 2009; Tresse et al., 2010).
CONCLUSIONS AND PERSPECTIVE
Since the recognition of the linkage between MSP1 disease and the AAA protein p97 in 2001 (Kovach et al., 2001), there has been a steady increase in the number of pathogenic mutations identified and in the number of diseases associated with these mutations in p97. The association of the mutations with disease calls for a clear understanding of the exact molecular functions of p97 and their underlying mechanisms. Through comparative studies between wild type and mutants, using an array of genetic, biochemical, and structural methodologies, these mutants have added a new dimension to our understanding of the structure and function of p97. Despite the progress made, a few fundamental mechanistic questions regarding the action of p97 remain unclear and require further engagement of the research community. First, what is the physiological significance of the conformational changes in p97? To answer this question, an in vitro system needs to be established to reconstitute the processes identified in vivo for p97, which would allow us to investigate the role of p97 in a well-controlled manner and to pinpoint the steps in the reaction coordinate that are affected by mutations. Second, studies are required to further identify properties of p97 that are affected by mutations, such as the binding of adaptor/cofactor proteins. Finally, mutations in p97 can cause different diseases. How do cellular factors influence the ultimate clinical outcomes in patients? As MSP1 is a late-onset disease, individuals with p97 mutations can live without symptoms for a long time. Identifying the factors that delay the onset of the diseases and understanding how they interact with p97 could have a significant impact on those who are predisposed to the disease. The path to addressing these questions is unlikely to be straightforward, as pathogenic mutations manifest their effects only in subtle ways and p97 is involved in many cellular pathways. Nevertheless, given the progress made in the past, there is reason for optimism that this path will lead to solutions to these unsolved issues.
|
2017-05-04T01:01:19.725Z
|
2016-12-01T00:00:00.000
|
{
"year": 2016,
"sha1": "507fe64c6bfe0b5189b33c045a531f519bd8ae63",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2016.00079/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "507fe64c6bfe0b5189b33c045a531f519bd8ae63",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
154451542
|
pes2o/s2orc
|
v3-fos-license
|
Inconsistencies in air quality metrics: ‘Blue Sky’ days and PM10 concentrations in Beijing
International attention is focused on Beijing’s efforts to improve air quality. The number of days reported as attaining the daily Chinese National Ambient Air Quality Standard for cities, called ‘Blue Sky’ days, has increased yearly from 100 in 1998 to 246 in 2007. However, analysis of publicly reported daily air pollution index (API) values for fine particulate matter (diameter≤10 µm, PM10), indicates a discrepancy between the reported ‘Blue Sky’ days (defined as API≤100, PM10≤150 µg m−3) and published monitoring station data. Here I show that reported improvements in air quality for 2006–2007 over 2002 levels can be attributed to (a) a shift in reported daily PM10 concentrations from just above to just below the national standard, and (b) a shift of monitoring stations in 2006 to less polluted areas. I found that calculating daily Beijing API for 2006 and 2007 using data from the original monitoring stations eliminates a bias in reported PM10 concentrations near the ‘Blue Sky’ boundary, and results in a number of ‘Blue Sky’ days and annual PM10 concentration near 2002 levels in 2006 and 2007 (203 days and ∼167 µg m−3 calculated for 2006—38 days fewer and a PM10 concentration ∼6 µg m−3 higher than reported; 191 ‘Blue Sky’ days and ∼161 µg m−3 calculated for 2007—55 days fewer and a PM10 concentration ∼12 µg m−3 higher than reported; 203 days and 166 µg m−3 were reported in 2002). Furthermore, although different pollutants were monitored before daily reporting began and less stringent standards were implemented in June 2000, reported annual average concentrations of particulate (diameter≤100 µm, TSP) and nitrogen dioxide (NO2) indicate no improvement between 1998 and 2002. This analysis highlights the sensitivity of monitoring data in the evaluation of air quality trends, and the potential for the misinterpretation or manipulation of these trends on the basis of inconsistent metrics.
Introduction
In 1998, Beijing launched a 'Defending the Blue Sky' campaign, with air quality data becoming openly available on a weekly basis in February 1998 [1][2][3] and on a daily basis in June 2000 [4]. In January 2003, the data from individual monitoring stations also became publicly available.
In 1998, Beijing was ranked as having the 3rd worst air quality in a global ranking of 157 cities across 45 countries [3]. The annual average particulate concentration (diameter ≤ 100 µm, TSP) that year was 35% higher than in Mexico City, which was ranked as having the worst air quality in the world [3,9]. It has been widely reported that pollution transported from distant sources [7,26] impacts Beijing air quality and has complicated the control efforts taken by the Beijing government. These issues have affected the city's air quality to varying degrees throughout the 1998-2007 period, and are not addressed in the present study, which focuses on reported air quality trends and Beijing's efforts to meet the Chinese air quality standards. Severe dust storms during the spring of 2006 are likely partially responsible for the high average particulate levels for that year, but these days of highly elevated particulate concentrations should not affect days with particulate concentrations near the PM10 = 150 µg m−3 national ambient air quality standard.
According to the national Chinese State Environmental Protection Administration's 2005 Automated Methods for Ambient Air Quality Monitoring (HJ/T 193-2005), which went into effect on January 1, 2006, cities with a population of over 3 million people are required to use at least eight monitoring stations to measure urban air quality [33]. In addition, new specifications were added regarding the minimum distance from roadways that air pollution should be monitored. For roadways with an average of 3000 vehicles per day, monitoring stations should be a minimum of 25 m from the road; for roadways with an average of 15 000 vehicles per day a minimum of 80 m; and for roadways with an average of 40 000 vehicles per day a minimum of 150 m [33]. In 2005, Beijing municipality had 15.4 million permanent residents and the 293 primary roads in urban districts of the city had a total length of 596 km that carried on average 5422 vehicles per hour [23b], a number that would increase 11.4% to 6040 vehicles per hour in 2006 [23a].
The reported Beijing air quality is an average of data from selected monitoring stations [27]. From 1984 to 2005, the 7 stations used to measure city air quality remained constant. These stations monitored areas with different characteristics, e.g., traffic, residential, commercial, and industrial [1,[28][29][30][31]. The total number of monitoring stations in Beijing increased from 8 in 1984 (one of the original monitoring stations was a background station located near the Ming tombs, 80 km outside of the city) to 27 [28,35]. The monitoring stations used for determining daily city pollution levels and 'Blue Sky' days are also used to calculate annual average pollution concentrations [1,26]. In 1998, without the two monitoring stations in transportation areas, the annual average particulate (TSP) concentration for the city would have been 7% lower, the annual average NOx concentration would have been 24% lower, and the annual average SO2 concentration would have been 10% lower than reported for that year.
Reports have raised questions regarding the accuracy of scientific and air quality reporting in China [36][37][38][39]54]. However, the annual number of 'Blue Sky' days, along with annual pollutant concentrations, continue to be used in China to evaluate air quality trends [3,5,14,18,22], model air pollution [31,40], calculate the health and economic impacts of air pollution [2, [41][42][43], and establish air quality control plans [44]. No known study has analyzed the sensitivity of Beijing's air quality monitoring data to the analysis of air quality trends, which I examine by calculating the impact of the change in monitoring station locations on reported air quality, or examined the air pollution index reporting system for other irregularities, including the revision of standards in June of 2000 [45,46]. The relative importance of nitrogen oxides (NO 2 /NO x ) and sulfur dioxide (SO 2 ) in public air quality reporting will also be addressed, along with a discussion of monitoring station locations.
Data
This study used daily and weekly air quality data, reported as Air Pollution Index values, publicly available from the State Environmental Protection Agency (SEPA, www.zhb.gov.cn) and Beijing Environmental Protection Bureau (BJEPB, www. bjepb.gov.cn). Chinese API values are a scientific measure of air quality designed to inform the public about air pollution and the potential impacts on human health [27]. The conversion from API values to pollutant concentrations is detailed in SEPA technical regulations in both Chinese and English, and has been used and described in several scientific studies [46][47][48]. The Chinese API is based on the air quality index (AQI) used in the United States, and although the standards vary, the calculation methodology is the same [49]. Similar index systems are also used in other countries [46].
In major cities in China, concentrations of the pollutants PM 10 (TSP from 1998 to 2000), NO 2 (NO x from 1998 to 2000), and SO 2 are monitored and converted to an air pollution index (API) value between 1 and 500 (table 1) [27]. From 1998 to 2000, ozone (O 3 ) and carbon monoxide (CO) were also used in API reporting [38]. Each day (week from 1998 to 2000), the highest API value is reported, and the primary pollutant is identified if its API is >50, indicating potential risk to human health.
API = [(I_Hi − I_Lo) / (BP_Hi − BP_Lo)] × (C_p − BP_Lo) + I_Lo, where API = air pollution index, C_p = the concentration of pollutant p, I_Hi = the API value corresponding to BP_Hi, I_Lo = the API value corresponding to BP_Lo, BP_Hi = the breakpoint concentration that is greater than or equal to C_p, and BP_Lo = the breakpoint concentration that is less than or equal to C_p. An API value less than or equal to 100 indicates attainment of the national air quality standard: a 'Blue Sky' day. For PM10, an API of 100 corresponds to a daily concentration of 150 µg m−3 [21].
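As a minimal illustration of this interpolation, the sketch below converts a PM10 concentration into an API value using only the breakpoints quoted in this article (API 100 at 150 µg m−3, API 200 at the corrected 350 µg m−3, and, following the 2 µg m−3 per API unit slope noted later in the text, API 50 at 50 µg m−3). Breakpoints outside this range are omitted, and the function is illustrative rather than an official implementation.

```python
# Chinese API for PM10, restricted to the breakpoints quoted in the text.
# Pairs are (concentration in ug/m3, API value); the corrected 350 ug/m3
# breakpoint for API 200 is used (see the note on the English regulations below).
BREAKPOINTS = [(50.0, 50), (150.0, 100), (350.0, 200)]

def pm10_to_api(c_p: float) -> float:
    """Piecewise-linear interpolation between the bracketing breakpoints."""
    for (bp_lo, i_lo), (bp_hi, i_hi) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if bp_lo <= c_p <= bp_hi:
            return (i_hi - i_lo) / (bp_hi - bp_lo) * (c_p - bp_lo) + i_lo
    raise ValueError("concentration outside the 50-350 ug/m3 range covered here")

print(pm10_to_api(150.0))  # 100.0 -> attainment, a 'Blue Sky' day
print(pm10_to_api(152.0))  # 101.0 -> non-attainment
```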
The table of pollutant concentrations and equivalent API breakpoints is the same in the Chinese and English versions of the SEPA technical regulations; however, the sample calculation in the English version incorrectly uses a PM 10 concentration of 250 µg m −3 for the API breakpoint of 200. The correct PM 10 = 350 µg m −3 for the API breakpoint of 200 is used in the Chinese version, has been applied in scientific studies [46][47][48], and is consistent with the US EPA methodologies. However, several studies have included the incorrect breakpoint [6,41].
Analysis
I examine the frequency distribution of API values focusing on values near the 'Blue Sky' boundary, and calculate the daily Beijing API on days when the primary pollutant was PM 10 and the reported API values were from 51 to 200 at all stations used for the city API calculations. Within this interval, equivalent to PM 10 concentrations from 52 to 350 µg m −3 , a change of one API unit equals a change in PM 10 concentrations of 2 µg m −3 , and averaging monitoring station API values is equivalent to averaging PM 10 concentrations.
SEPA and BJEPB separately report daily city APIs using the same automated monitoring station data [50]. These reported city APIs are similar, but not always equal. Between 2003 and 2007, 1312 days (71.9%) had PM 10 API values at all reporting monitoring stations (including 98% of days with a reported PM 10 API between 96 and 105 in 2006, and 85% of days with a reported PM 10 API between 96 and 105 in 2007). During these 5 years, the official city API reported by SEPA was equal to the city API reported by BJEPB on 74.0% of days, and within 1 API value on 99.6% of days. My averaging of daily PM 10 API values from the 7 monitoring stations (8 in 2006 and 2007) gives the official SEPA city API value on 86.3% of days, and a value within 1 API unit on 99.5% of days; closer to the official city API than reported by BJEPB.
SEPA technical regulations state that the final API should be rounded to the next whole number if a decimal remains after calculation [27]; however, on days when there is a difference between SEPA and BJEPB values, the SEPA value is lower by 1 unit on 99.2% of days, likely due to differences in rounding. A discrepancy larger than 1 API unit has been noted between SEPA and BJEPB data when the reported API is 100 [51].
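A small sketch of how a city value can be reproduced under the averaging and rounding rules described above, reading 'rounded to the next whole number' as rounding up; the station readings are hypothetical and serve only to show how a fractional average near the boundary becomes a reported value of 100.

```python
import math

def city_api(station_apis):
    """Average the station API values; if a decimal remains, round up to the
    next whole number, as the SEPA rule quoted above is read here."""
    mean = sum(station_apis) / len(station_apis)
    return math.ceil(mean)

# Hypothetical readings from the 7 stations used before 2006:
print(city_api([96, 98, 101, 99, 104, 97, 100]))  # mean 99.29 -> reported as 100
```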
I also analyze the sensitivity of trends in the number of days exceeding the national Grade III standard with and without the monitoring station changes, and pollutant concentrations from 1998 to 2002. Annual average pollutant concentrations are analyzed during this period due to the lack of availability of daily data, and because of the change in national air quality standards.
Changes in air quality standards: in June of 2000, less stringent standards for NO2/NOx, TSP/PM10, and SO2 were established [45,46], complicating comparisons of the number of days meeting annual standards between 1998-2000 and the more recent 2001-2007 years [41]. Specific changes include a shift from monitoring NOx to measuring NO2, and a revision of the 1996 Chinese Ambient Air Quality Standards [39]. The national daily NO2 standard was raised from 80 to 120 µg m−3, and the annual average standard was raised from 40 to 80 µg m−3. The WHO and many other countries also measure NO2, and the Chinese 1996 annual average NO2 standard of 40 µg m−3 was equal to the annual guideline later recommended by the WHO [38]. The frequency distribution of daily PM10 values is most often roughly log-normal [25], and analyzing data from all monitoring stations provides higher data resolution for examining potential bias. The weekly API reports from 1998 to June 2000 indicated particulate (TSP) to be the primary pollutant on 99% of weeks above standard. NO2/NOx has not been a pollutant of concern since the June 2000 change in standards, even though government reports indicate no improvement in annual average NO2 concentrations [5,11], and studies using satellite imagery have found substantial increases [53,54]. Although the number of 'Blue Sky' days reportedly increased from 100 in 1998 to 203 in 2002, neither annual average particulate nor nitrogen dioxide (NO2) concentrations improved (figure 5). Previous research noted that NOx was responsible for the largest percentage of days above the standard from 1998 to June 2000 [55]; however, since NO2 began being reported in June 2000, not a single day has had NO2 as the primary pollutant.
Sulfur dioxide
This analysis does not focus on the sensitivity of trends in sulfur dioxide concentrations to monitoring station locations, because SO2 has only been indicated as the primary pollutant on 3% of reports above the national standard from 1998 to 2007, compared to particulate (PM10/TSP, 87% of reports) and nitrogen oxides (NOx/NO2, 10% of reports). Furthermore, from 1998 to 2007, not a single API report indicated an SO2 level above the Grade III (250 µg m−3) daily standard.
Discussion
This study examined the sensitivity of Beijing's air quality metrics by comparing air quality for 2006-2007 to previous years by correcting for the change in monitoring station locations. Three measures of air quality were used to examine trends from 2001 to 2007, including: the annual number of 'Blue Sky' days, annual average PM 10 concentrations, and the annual number of days exceeding the Grade III standard. Although the most 'Blue Sky' days is found to have occurred in 2005, the lowest annual average PM 10 concentrations and the fewest number of days exceeding the Grade III standards occurred in 2003. This illustrates that the metric used for evaluating air quality is very significant, as there can be conflicting trends based on different metrics [68].
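To make the point about conflicting metrics concrete, the sketch below computes the three metrics for two invented years of daily PM10 data; the Grade III daily PM10 value of 250 µg m−3 is an assumption made here for illustration, since the article only quotes the Grade III level for SO2.

```python
import statistics

BLUE_SKY_LIMIT = 150.0    # PM10 level equivalent to API 100, as stated in the text
GRADE_III_DAILY = 250.0   # assumed Grade III daily PM10 value (not given in the text)

def summarize(daily_pm10):
    """The three metrics discussed above, for one year of daily PM10 data."""
    return {
        "blue_sky_days": sum(c <= BLUE_SKY_LIMIT for c in daily_pm10),
        "annual_mean": round(statistics.mean(daily_pm10), 1),
        "days_above_grade_iii": sum(c > GRADE_III_DAILY for c in daily_pm10),
    }

# Two invented years with the same number of 'Blue Sky' days but conflicting
# rankings on the other two metrics:
year_a = [140] * 200 + [160] * 120 + [300] * 45
year_b = [120] * 200 + [240] * 150 + [260] * 15
print(summarize(year_a))  # 200 'Blue Sky' days, mean ~166.3, 45 days above Grade III
print(summarize(year_b))  # 200 'Blue Sky' days, mean ~175.1, 15 days above Grade III
```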
In my analysis I calculate the impact of the 2006 monitoring station changes on the reported number of annual 'Blue Sky' days and on both daily and annual PM10 concentrations; however, due to lack of data, I was unable to perform a similar correction for NO2/NOx concentrations [1,4]. In 1998, the two stations monitoring transportation areas had annual NOx concentrations 100% higher than the average of the other 5 stations [1]. Given the growth in the number of vehicles, NO2/NOx concentrations in traffic areas have likely continued to increase [53,54]. Although street-level monitoring of NO2 is not a suitable proxy for NOx, annual average NO2 concentrations have been found to depend on the distance of the measurement from main roads [25]. The monitoring station at Qianmen, one of the two removed traffic stations, was located adjacent to the sidewalk within 10 m of a main roadway. As a result, reported annual average NO2 concentrations for Beijing in 2006 and 2007, measured without the two monitoring station locations in traffic areas, are likely lower than they would have been if these stations had been included.
The 2005 automated methods for air quality monitoring which specified that monitoring stations in traffic areas not be used to measure urban air quality will likely lead to better harmonization of air quality data across China, although it complicates inter-year comparisons for Beijing [24].
In Europe, under the obligations of the European Union Framework Directive on air quality, public information on air quality is provided, separately, for roadside and background monitoring stations allowing for comparisons across Europe [58]. Within the Asian air pollution research network (AIRPET) efforts have also been made to compare air quality in major Asian cities using traffic, upwind, commercial, mixed, residential, industrial and commercial sites [59]. With vehicular emissions as a growing cause of air pollution in China, an understanding of air quality trends in these areas is especially important.
During 2006 and 2007, reporting continued for the two monitoring stations in transportation areas of Beijing, although they were no longer used to calculate the city air quality. However, on January 1, 2008, these two stations were de-listed and reporting stopped, preventing public access to air quality information for transportation areas and further complicating future analysis of trends in Beijing air quality [60].
More research needs to be done on the reported trends in air quality during the 1998-2002 period and on the 2000 revisions to the Chinese national ambient air quality standard. Annual average pollution concentrations and the annual number of days meeting the national standard are two different measures of air quality.
Although the annual average concentrations of nitrogen dioxide and particulate did not decrease between 1998 and 2002, some of the reported increase in 'Blue Sky' days may be attributable to a decrease in the seasonal variability of pollution. As the primary source of air pollution has shifted from coal burning for heating to pollution from transportation, it is possible that annual average concentrations might not improve, while the number of days meeting the standard increases, due to less seasonal variation in vehicular emissions.
However, the impacts of the 2000 revision of the air quality standards on reported city air quality should not be underestimated. For example, in 1998, the annual average NOx concentration in Beijing was 151 µg m−3, over three times the Chinese annual average NOx standard of 50 µg m−3, and the annual average NO2 concentration was 74 µg m−3, nearly twice the 1996 Chinese national ambient air quality standard [17,61]. However, based on the 2000 revisions, when the annual average standard for NO2 was raised to 80 µg m−3 [45], the 1998 annual average NO2 concentration was in accordance with national standards. Since the revision of standards, NO2 concentrations in Beijing have never been above the national standard, but that does not necessarily indicate that the atmospheric concentrations of NO2 or NOx have decreased.
Although many countries, including the United States and the United Kingdom, evaluate and publicly report the number of non-attainment days based on data from individual monitoring stations, China only widely reports averaged air quality statistics [1,3,[62][63][64][65]. In 2007, 246 'Blue Sky' days were reported for the city of Beijing using an average of air quality at eight monitoring stations in urban areas of the city, but there were only 100 days when all 27 monitoring stations in Beijing municipality reported an Air Pollution Index of 100 or less. On 265 days in 2007, at least one of the monitoring stations indicated levels of air pollution above the Chinese national ambient air quality standards [4]. In 1998, 100 'Blue Sky' days were reported for the city of Beijing using an average of air quality from seven monitoring stations [5]. However, these two numbers, 100 'Blue Sky' days in 1998 and 100 days in 2007 when all monitoring stations reported air quality meeting the national standard, represent two different methods for evaluating the city air quality and highlight the high degree of sensitivity of these air quality metrics.
It has been widely reported that the number of 'Blue Sky' days in Beijing increased from 100 in 1998 to 246 in 2007, but these reported trends encompass a period during which air quality was evaluated in three different ways: (1) 1998-1999, based on the 1996 Chinese national ambient air quality standards; (2) 2000-2005, based on the revised, less stringent standards using the original monitoring stations; and (3) 2006-2007, based on the revised standards using the relocated set of monitoring stations.
Conclusions
Publicly reported air quality trends in Beijing during the period 1998 to 2007 are found to be highly sensitive to monitoring and reporting data. In 2007, 246 'Blue Sky' days were reported. However, if station locations had not changed, the number of 'Blue Sky' days and the annual PM10 concentration would have remained near 2002 levels. Although nine continuous years of air quality improvement have been reported in Beijing between 1998 and 2007, my analysis finds that these improvements, as indicated by the annual number of 'Blue Sky' days, are due to irregularities in the monitoring and reporting of air quality and not to less polluted air. Reported variations in air quality that occur as a result of changes in monitoring station locations or air quality standards should be considered as inconsistencies in the metrics and not as actual changes in air quality.
|
2019-05-16T13:03:37.509Z
|
2008-09-26T00:00:00.000
|
{
"year": 2008,
"sha1": "b752bad10f16c4b835d7ca35da62bcdd24ab911f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1748-9326/3/3/034009",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "428b9fc069122971984dd161c2e901aa77f12fe9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
6132965
|
pes2o/s2orc
|
v3-fos-license
|
Computer – based method of bite mark analysis : A benchmark in forensic dentistry ?
Bite-inflicted wounds are one of the most frequent forms of human trauma; humans have used their teeth as both tools and weapons since the dawn of time. Bite marks are usually seen in cases involving sexual assault, murder and child abuse, and their assessment could be a major factor leading to conviction of the accused. Many violent assaults involve the presence of more than one bite, making some bites difficult to identify. Bite marks are accepted as being unique to each person, since the characteristics of a bite mark may be affected by the type, number, and peculiarities of the teeth, dynamics of occlusion, muscle function, individual tooth movement and temporomandibular joint dysfunction.[1]
Introduction
Bite-inflicted wounds are one of the most frequent forms of human trauma; humans have used their teeth as both tools and weapons since the dawn of time. Bite marks are usually seen in cases involving sexual assault, murder and child abuse, and their assessment could be a major factor leading to conviction of the accused. Many violent assaults involve the presence of more than one bite, making some bites difficult to identify. Bite marks are accepted as being unique to each person, since the characteristics of a bite mark may be affected by the type, number, and peculiarities of the teeth, dynamics of occlusion, muscle function, individual tooth movement and temporomandibular joint dysfunction. [1] Many techniques to analyze bite mark patterns have been used in the past. They involve the use of an "overlay." The tooth exemplar, independent of the method used to produce it, in which the biting surface data are transferred to a clear acetate sheet, is called an "overlay." These are physically compared to the injury on skin or to a patterned mark. A "hollow volume overlay" records the perimeter of the biting surface of each tooth and leaves the inner aspect of the tooth image transparent. [2] Based on the site and type of bite marks, overlays are generated using hand tracing, xerographic images, or X-ray films. These life-sized overlays can be compared with the overlays from the suspect's teeth. [3] The present study aimed to assess the most accurate bite mark overlay fabrication technique by comparing direct comparisons between models of cases and bite marks with indirect comparisons in the form of conventional traced overlays of subjects. It also aimed to determine the relative accuracy of the technique and its feasibility in forensic science.
According to Iain A. Pretty, the severity of a bite mark is an important factor in the assessment of the forensic significance of the injury and whether or not it can be compared with a suspect. The American Board of Forensic Odontology (ABFO) has published guidelines that describe the evidence that should be collected from both victim and suspect, and represent a sound basis for such collection. [4] All of the photographs should be taken with the camera at 90° to the injury and DNA swabbing of the injury site should be a double swab -the first moistened with distilled water and the second dry. [4]
Materials and Methods
Thirty subjects (10 males and 20 females) with a complete set of natural upper and lower anterior teeth were selected for this study. Subjects with orthodontic appliances, intraoral prostheses, loss of anterior tooth structure, or developmental tooth anomalies were excluded from the study. Upper and lower alginate impressions were taken from the 30 subjects. A die stone model was obtained from each impression; overlays were produced from the biting surfaces of the six upper and six lower anterior teeth using the following methods: hand tracing from study casts, the wax impression method, the radiopaque wax impression method, and the xerographic-based method. [1] Following this, the dental characteristics of the biting edge and the degree of rotation of the six upper and six lower anterior teeth were measured. The area of the tooth biting surface was included to evaluate differences in the relative length and breadth of recorded individual teeth and the width of the outline produced by each overlay method. [2] Overlays were produced by tracing the anterior teeth (maxillary and mandibular) on an acetate sheet using a fine-tipped felt pen, by five techniques:
• Hand tracing technique: Hand tracing from study casts was done by keeping the acetate sheet on the biting surface of the upper and lower anterior teeth [Figure 1]
• Wax impression technique: A wax impression was taken on a sheet of modeling wax and the impressions were traced on an acetate sheet [Figure 2]
• Radiographic wax impression technique: Silver amalgam powder mixed with surgical spirit was added to the individual tooth impressions taken as above, and a radiographic image was taken on an intraoral dental X-ray film. The film was processed; the bite marks showed as white teeth on a dark background. The radiographic image was then traced on a transparent sheet [Figure 3]
• Xerographic technique: The upper and lower study casts were placed on the glass plate of a photocopy machine with the incisal edges down. This was photocopied on an A4 sheet of paper. An acetate sheet was overlaid on the photocopy image of the casts and the outline of the incisal edges was traced [Figure 4]
• 2D computer layout: The study casts were positioned on the 2D scanner plate with the incisal edges contacting the plate and a color photograph was obtained. The saved image was imported into Photoshop (Adobe Photoshop 6 software) and rotated to make the edge parallel to the x-axis of the computer. Selection of biting edges: the biting edge of each tooth was highlighted by semi-automatic thresholding using the magnetic lasso tool. Once the initial selections on all six teeth were done, the selection was smoothed and marked for comparison. [1] [Figure 5]
All the overlays were then subjected to measurement of the area and angle of rotation of all 12 teeth. The scanned overlays were opened in Image J software, and the outlines of the tooth impressions were thresholded and a mask was created. The area, perimeter, and centroid coordinates for each tooth in each overlay were then obtained and tabulated. The centroid points were then marked for each tooth in Image J using the coordinates obtained. The centroids of the two central incisors were joined and a perpendicular was drawn at its midpoint. This was considered the reference line for measuring angulation. Using the angle tool, the angle formed between the reference line and the line joining the mesial contact point and the centroid of each tooth was measured and tabulated (representing the angle of rotation).
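The Image J measurements described above can be sketched numerically as follows; this is a minimal illustration assuming a binary mask (1 = tooth impression) has already been produced by thresholding, and all function names, coordinates, and the toy mask are hypothetical rather than part of the study's actual workflow.

```python
import numpy as np

def area_and_centroid(mask):
    """Pixel area and centroid (x, y) of one tooth impression in a binary mask."""
    ys, xs = np.nonzero(mask)
    return int(mask.sum()), (float(xs.mean()), float(ys.mean()))

def rotation_angle(centroid_11, centroid_21, mesial_point, tooth_centroid):
    """Angle (degrees) between the reference line (perpendicular to the segment
    joining the two central-incisor centroids) and the line from a tooth's
    mesial contact point to its centroid."""
    c11, c21 = np.asarray(centroid_11, float), np.asarray(centroid_21, float)
    incisor_axis = c21 - c11
    reference = np.array([-incisor_axis[1], incisor_axis[0]])   # perpendicular direction
    tooth_vec = np.asarray(tooth_centroid, float) - np.asarray(mesial_point, float)
    cos_a = np.dot(reference, tooth_vec) / (np.linalg.norm(reference) * np.linalg.norm(tooth_vec))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Toy example with hypothetical pixel coordinates:
mask = np.zeros((10, 10), dtype=int)
mask[3:6, 2:7] = 1
print(area_and_centroid(mask))                                   # (15, (4.0, 4.0))
print(rotation_angle((40, 10), (60, 10), (48, 22), (44, 30)))    # ~26.6 degrees
```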
Statistical analysis
The mean area and angle of rotation of the overlays produced by the four methods (hand tracing from study casts, hand tracing from the wax impression method, the radiopaque wax impression method, and the xerographic method) were individually compared with the computerized technique using linear regression. The amounts of variation in the area and the angle of rotation of individual teeth bite marks were assessed using the Mahalanobis distance in SPSS version 20 (IBM Corporation, Switzerland).
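As a rough sketch of the distance-based comparison, the snippet below computes the Mahalanobis distance of one method's (area, angle) measurement from the distribution of the computer-based standard; the numbers are invented purely for illustration, and the scipy route is an assumption, since the study itself used SPSS.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Invented (area in mm^2, rotation in degrees) measurements of one tooth,
# taken from the computer-based (gold standard) overlays of several subjects:
standard = np.array([[24.1, 4.2], [25.0, 3.8], [23.6, 5.1], [24.8, 4.6], [23.9, 4.0]])
wax_measurement = np.array([24.6, 5.0])   # hypothetical wax-overlay value to compare

mu = standard.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(standard, rowvar=False))
print(mahalanobis(wax_measurement, mu, cov_inv))  # smaller distance = closer to the standard
```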
Results
The overlays produced by the four methods (hand tracing from study casts, hand tracing from wax impression method, radiopaque wax impression method, and xerographic method) were individually compared with the computerized technique using linear regression. The amount of variation was then assessed using Mahalanobis distance.
The mean distance and standard deviation obtained from the measurements of tooth area for the six anterior teeth in the maxillary and mandibular arches were calculated. For comparison/assessment of the angle of rotation, the xerographic-based method was the best for teeth 11, 12, 32, and 33; overall, the wax impression method was the best for teeth 11, 12, 32, and 33. Hand tracing from study casts was the best for teeth 21, 22, and 43, and the radiopaque wax method was the best for teeth 41 and 42. There was no single best technique which showed the least errors in overlay area and angle measurement as compared to the standard. Assessment of individual teeth showed that the wax method was best suited for 4 out of 12 teeth in both area and angle assessment, which included the mandibular central and lateral incisors and the maxillary central incisor and canine. Assessment of angle for the central incisor was better in the wax method. Hand tracing and radiopaque wax methods were the least reliable, showing higher distances from the standard.
Discussion
Bite marks have been defined by Mac Donald as "a mark caused by the teeth either alone or in combination with other mouth parts." Human bite marks are most often found on the skin of victims or on food substances; while bite marks on food are usually well defined, the bite marks on skin are less defined. Bite marks can occur singly or at multiple sites, or may present as multiple bites at a single location. Each person has a unique dentition which can be replicated and helps in identifying the victim/or the culprit. Human bite marks have been described as elliptical or circular injuries. [5] Bite marks can be analyzed using various techniques which could be either direct or indirect techniques. Direct technique involves the use of a model of the suspect's teeth which is then compared to life-sized photographs of the bite mark, while indirect technique involves the use of transparent overlays, on which the biting edges of the suspect's teeth are recorded. Transparent overlays can be produced by placing a sheet of acetate over the dental cast of the suspect's teeth and tracing the biting edges with fine-tipped marker pen. [6] Bite mark recording may be tricky in many tissue/food items. On skin/food substance, direct acetate tracing could be possible, whereas bite marks on curved surfaces may be radiographed following amalgam application. These could then be compared with original casts of the suspects.
Though there are various methods to determine human bite marks, according to Maloth, [1] xerographic analysis has been proved to be a better method. The present study aimed to evaluate the reliability and accuracy of the commonly used methods of human bite mark overlays, which included the hand tracing from study casts, hand tracing from wax impression, radiopaque wax impression method, xerographic-based method, and computer-based method.
The computer-based method is more accurate, so this method was taken as the gold standard and other methods were compared with it to determine their accuracy.
Comparison of individual tooth area and angle assessment showed that there was considerable variation among the four techniques mentioned [Table 3]. The wax impression method was found to be a good technique for producing overlays of the suspect to compare with bite marks on food, skin, etc. The angle of rotation could be better assessed by the wax method. As the wax method involved penetration of the teeth into the wax, it exposed a larger area of the tooth, which when traced was more accurate to correlate, as the line angles and contact points were better recorded in the wax impression.
There was considerable variation among the four overlay production methods in determination of incisal edge area [ Figure 6]. It could be due to the subjective error that occurred while hand tracing. However, after statistical analysis, although wax impression method was found to be a good technique, xerographic overlay production method was found to be the most accurate method for determination of tooth area and angle of rotation among the four methods, despite the computer-based method being more reliable for bite mark analysis.
Xerographic method was the best among the four different methods to measure the area, followed by hand tracing from wax, hand tracing from study casts, and radiopaque wax impression method. Radiopaque wax impression method is not considered to be accurate because the area can increase with the depth of the bite on wax sheet which may alter based on the pressure applied. [1] On the other hand, magnification and distortion of radiographic image can also result in variation in measurements. Hand tracing from study casts is also not considered to be an accurate method as there could be subjective error while tracing.
The wax impression method may be better for recording the area and angle of rotation of teeth which are out of occlusion. The canines are the first teeth to contact the occlusal plane, and may hinder accurate recording of the lateral incisor and maxillary first premolar in the xerographic-based and radiopaque wax impression methods.
Advantage of xerographic method compared to other methods is that details like fracture on the model can be represented on the overlay, which cannot be accurately represented by hand tracing methods. [6] In the present study, we found that xerographic method is more accurate and inexpensive, and can be used for preliminary screening purposes. Computer-based method is considered as a "gold standard" for bite mark analysis. However, further research on bite mark comparison is needed to enhance the reliability and accuracy of bite mark analysis. A database of computerized area and angulations can be formed for comparison with xerographic method.
Conclusions
The basis of using these analyses is that human teeth are unique and this asserted uniqueness is replicated on the bitten surface. No single overlay technique has previously been shown to be better than the others, and very little research has been carried out to compare different methods. This study evaluated the accuracy of direct comparisons between suspects' models and bite marks against indirect comparisons in the form of conventional traced overlays of suspects, and the xerographic technique was found to be the best.
|
2018-04-03T03:29:13.149Z
|
2016-01-01T00:00:00.000
|
{
"year": 2016,
"sha1": "346056042999cb226c0c52a8ae10065da0b4eb5f",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4799517",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9fdda328307e412904042fe6393637f1d868a875",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
237467515
|
pes2o/s2orc
|
v3-fos-license
|
High Stability Au NPs: From Design to Application in Nanomedicine
Abstract In recent years, Au-based nanomaterials are widely used in nanomedicine and biosensors due to their excellent physical and chemical properties. However, these applications require Au NPs to have excellent stability in different environments, such as extreme pH, high temperature, high concentration ions, and various biomatrix. To meet the requirement of multiple applications, many synthetic substances and natural products are used to prepare highly stable Au NPs. Because of this, we aim at offering an update comprehensive summary of preparation high stability Au NPs. In addition, we discuss its application in nanomedicine. The contents of this review are based on a balanced combination of our studies and selected research studies done by worldwide academic groups. First, we address some critical methods for preparing highly stable Au NPs using polymers, including heterocyclic substances, polyethylene glycols, amines, and thiol, then pay attention to natural product progress Au NPs. Then, we sum up the stability of various Au NPs in different stored times, ions solution, pH, temperature, and biomatrix. Finally, the application of Au NPs in nanomedicine, such as drug delivery, bioimaging, photothermal therapy (PTT), clinical diagnosis, nanozyme, and radiotherapy (RT), was addressed concentratedly.
Introduction
As the most stable noble-metal nanomaterials, Au NPs have been researched and applied for thousands of years. Compared with other nanomaterials, Au NPs exhibit many distinctive properties. These unique physical properties of Au NPs are mainly attributed to the quantum size effect: when the size of Au NPs decreases to a specific value (about 20 nm), the magnetic, optical, acoustic, thermal, electrical, and superconducting properties of Au NPs become significantly different from those of conventional materials. 1 Because of these unique physical properties and their excellent biocompatibility, Au NPs have great potential in biomedical fields such as drug delivery, biological imaging, photothermal therapy, and clinical diagnosis. 2 For example, they can be combined with DNA or proteins through electrostatic interactions. Because of their magnetic properties at the nanometer scale, Au NPs can achieve targeted delivery of biomolecules under the control of an external magnetic field. 3,4 Likewise, the excellent biocompatibility and easy-to-control size, shape, and functionalization of Au NPs make them an ideal drug delivery vehicle. 5 What's more, the large specific surface area of Au NPs can cause their free electrons to resonate locally and exhibit a unique localized surface plasmon resonance (LSPR) effect. 6 Surface plasmons (SPs) refer to the electron density waves propagating along the metal surface (cross-section) generated by the interaction of freely vibrating electrons and photons at the metal surface. More importantly, they can be excited by electrons or light waves, enhancing peripheral fluorescence emission and producing light-to-heat conversion, thereby effectively improving light absorption efficiency and giving Au NPs photothermal conversion capabilities. 7 At present, the application of Au NPs in photothermal therapy is concentrated in the near-infrared region (NIR). Light in the two NIR windows (NIR-I, 650-900 nm; NIR-II, 1000-1200 nm) has a strong penetrating ability in biological tissues, and better light absorption and light-to-heat conversion efficiency in the NIR can be obtained by adjusting the size and structure of Au NPs. 8 Among them, rod-shaped Au NPs have the strongest light-to-heat conversion efficiency due to their excellent dispersibility and adjustable aspect ratio. Many works have confirmed that their maximum heating efficiency can exceed 90% under near-infrared light irradiation. 9 These factors make Au NPs an ideal candidate for photothermal therapy. On the other hand, the surface plasmon effect of nanomaterials gives Au NPs an excellent fluorescence quenching ability, making them a quencher in fluorescence resonance energy transfer (FRET)-based biosensor materials. 10 Furthermore, the easily controllable size and functionalization of Au NPs enable fluorescent groups, quantum dots, and antibodies to be modified on their surface to construct nanoprobes for rapid and accurate clinical diagnosis. [11][12][13][14][15] However, Au NPs applied in nanomedicine are required to keep high stability under various conditions, such as the concentration and type of salt ions, pH, and biomolecules. 16 Increasing the concentration of salt ions in the solution reduces the electrostatic repulsive force on the surface of the nanoparticles, destabilizing them and finally leading to the aggregation of Au NPs. Köper et al found that the stability of Au NPs decreased significantly with increasing concentration of NaCl solution. 17
Liu et al found that some high-affinity halide anions, such as Br−, promote aggregation of Au NPs to some extent, and that cations of elements with larger atomic numbers induce aggregation of Au NPs more strongly than smaller ones, owing to the reduction of the nanoparticle surface potential. 18 Besides, pH is another critical factor affecting the stability of Au NPs. Au NPs can maintain good stability at pH 5-9, 19 and aggregation of Au NPs is induced by overly acidic or overly basic conditions. 20,21 In physiological systems, some biomolecules can significantly affect the stability of Au NPs. Proteins in the biological matrix can change the stability of Au NPs through electrostatic adsorption. For example, bovine serum albumin (BSA) can adsorb on the surface of nanoparticles and decrease their stability in the biological system. Similarly, amino acids can alter the surface charge of Au NPs, causing aggregation. [22][23][24] Larson et al reported that the interaction of cysteine with Au NPs also destabilizes them. 25 Kimling et al found that excessive adsorption of vitamin C (Vc) on the surface of Au NPs causes aggregation. 26 Nowadays, beyond requiring excellent stability of Au NPs during synthesis, the final stability of the colloid must also be considered, which is very important for the storage and application of Au NPs, such as in bioimaging and cancer therapy. 19,20,[27][28][29] To address these issues, the primary approach at this stage is to prepare or modify Au NPs to improve their stability through different materials or synthetic methods. Some polymers and natural products have recently been employed to synthesize Au NPs with different structures and particle sizes. These Au NPs have been evaluated against harsh conditions such as extreme pH, high-concentration ions, various biomatrices, etc. The overall goal of this review is to provide a critical overview of our current understanding of Au NPs and their applications under various conditions. We will discuss how to prepare high-stability Au NPs and then focus on Au NPs against long-term storage, extreme pH, various biomatrices, etc. Finally, we introduce the latest research progress in biomedicine based on Au NPs. Figure 1 outlines the interest and focus of the present review.
Preparation of High Stability Au NPs
At present, Au NPs can be synthesized via chemical reduction methods, including the Turkevich method, the Brust-Schiffrin method, and the seed growth method. [30][31][32][33] The Turkevich (or citrate) method is a straightforward, single-phase, and simple route to obtain spherical Au NPs, using trisodium citrate as the reducing agent for the Au salt. 34 Through this method, we can quickly and easily obtain Au NPs with controllable size. However, the Turkevich method usually yields only spherical Au NPs, so it has limitations. 35 Beyond that, the Brust-Schiffrin method is also a commonly used chemical synthesis method. 36 As a two-phase synthesis and stabilization method, its preparation process is rapid and straightforward. It mainly stabilizes and modifies Au NPs through thiol functionalization and ligand exchange. Moreover, the seed-mediated method can synthesize Au NPs of different shapes, but it places higher demands on the various reaction factors. 37 Therefore, we urgently need strategies to prepare highly stable Au NPs with excellent biocompatibility that can be widely used in the biomedical field, with convincing examples such as antibody binding. 38 The subsequent modification of the surface chemistry of Au NPs can be accomplished through ligand exchange to further adjust colloidal properties, improve stability, and expand applicability. For example, some polymers and biologically active substances are used as capping agents or reducing agents to synthesize highly stable Au NPs, particularly in natural-product green synthesis of Au NPs. This method has significant advantages compared with other methods, as it is reliable, clean, and bio-friendly. 39,40 Besides, owing to their smaller size, ultra-small Au NPs have better stability. 41 To date, many natural products have been reported to successfully synthesize highly stable Au NPs, ranging from plants and bacteria to fungi. Herein, for chemical methods, we mainly introduce some recent advances in the preparation of Au NPs from polymers and organics; for biosynthesis methods, we mainly introduce aspects of plants, microbes, proteins, and genetic materials (DNA, RNA). Finally, we discuss the preparation of ultra-small Au NPs with controllable size. The various synthesis methods are summarized in Figure 2.
Polymer Functionalized Au NPs
Nowadays, polymers as protective groups to synthesize high stability Au NPs have been attracted more and more attention. There are three main approaches for preparing Au NPs from polymers: direct synthesis, "grafting from," and "grafting to" strategy. 42 The direct method is to obtain Au NPs by reducing tetrachloroauric acid with a reducing agent under the protection of the thiol group, such as poly (N-isopropyl acrylamide) (PNIPAM) and polystyrene (PS). [43][44][45][46] "Grafting from" technology refers to attaching polymer functional groups to the surface of Au NPs through ligand exchange, usually in the presence of chain transfer agents or initiators. For example, PNIPAM and polyacrylic acid (PAA) can be used to graft from the surface of Au NPs for functionalization. [47][48][49][50] Another approach is the "grafting to" strategy, which is to graft polymer containing sulfhydryl, amino, and other functional groups on the surface of Au NPs by way of ligand substitution to obtain composite Au NPs. 51,52 Many studies have confirmed that the "grafting to" method can get Au NPs with high stability. For example, poly (2-(dimethylamino)ethyl methacrylate (PDMA) and poly (2-(methacryloyloxy)ethylphosphocholine) (PMPC) can synthesize excellent stability of Au NPs. 53 More importantly, by this method, the assembled structure of Au NPs can be well controlled to meet the specific application's needs via adjusting structural parameters (such as ratio and molecular weight) of the hydrophilic and hydrophobic partitions of the amphiphilic polymer.
What is more, using some polymers as capping agents can improve the stability and light-to-heat conversion efficiency of nanoparticles. 54,55 These polymer-encapsulated Au NPs maintain the self-assembly behavior of the amphiphilic polymers, resulting in a series of functional nanostructures. 56 Polymer capping agents can further improve the stability of Au NPs. Therefore, many scholars have adopted polymers to synthesize Au NPs based on the Turkevich method, especially responsive polymers that can give Au NPs new properties, allowing them to respond to external stimuli. In this way, the colloidal properties may vary with pH, ionic strength, redox potential, temperature, etc. [57][58][59][60][61][62] In addition, these responsive polymers can also enhance the stability of Au NPs and expand their application range. In general, the polymers used to synthesize Au NPs are currently classified according to their functional groups and are mainly divided into heterocyclics, alcohols, and amines. [63][64][65]
Heterocyclic Substances
Some heterocyclic substances can reduce Au precursors to prepare stable, water-soluble, and uniformly tunable Au NPs, keeping the nanoparticles' long-term and reasonable stability in biologically relevant ionic media. 66 This may be because N-heterocyclic carbene (NHC) molecules can form stronger bonds with metals. 67 The primary mechanism for carbon-based heterocyclic synthesis of Au NPs is the use of long alkyl chains to exchange ligands on self-assembled nanoparticles. 68,69 Compared with the Au-S bond, the covalent bond formed between NHCs and Au NPs is stronger, which gives the nanoparticles better stability in different physiological environments. [70][71][72][73] Many reports have confirmed that NHC-stabilized Au NPs have great potential in biomedicine. 69,73 As a common nitrogen-containing heterocyclic polymer, polypyrrole (PPy) is used, under the action of an initiator, as a protective agent to synthesize composite urchin-like Au NPs of about 6 nm via oxidative polymerization. Compared with bare Au NPs, PPy-coated Au NPs have excellent stability under long-term storage, heat, pH, and laser irradiation, and improved light-to-heat conversion efficiency. 74 The latest research shows that bidentate NHCs are a new end-capping ligand for synthesizing Au NPs by top-down and bottom-up approaches. For the top-down method, dodecyl sulfide-protected nanoparticles follow the Brust-Schiffrin method. For the bottom-up preparation, mono- and bidentate NHC-Au complexes were reduced with NaBH4 in ethanol, affording the corresponding Au NPs (Figure 3). The Au NPs obtained by both top-down and bottom-up routes maintained better stability after heating at 130 °C for 24 hours due to the larger ligand density (Figure 4). 63
PEG-Based Polymer
In recent years, the use of polyethylene glycol (PEG) to synthesize Au NPs has received more and more attention. As a typical alcohol polymer, PEG is widely used due to its low toxicity, good biocompatibility, and easy modification of the surface of Au NPs. 75 Due to the very high specific binding affinity of gold for thiol groups, the thiol groups in PEG can be directly and covalently attached to the surface of Au NPs and bind firmly to it, giving the system electrostatic repulsion and providing a degree of steric hindrance that prevents salt- and biomolecule-induced aggregation. 76,77 For example, in serum-containing phosphate buffer, PEG forms a dense layer on the surface of Au NPs, prevents the adhesion of BSA, and can significantly improve the stability of Au NPs. 78 Besides, Au NPs can be modified by ligand exchange with different anchor groups of PEG, such as a monothiol (MP7M), a flexible dithiol (BP7M), a constrained dithiol (DP7M), and a disulfide bond (TP7M), all of which improve the stability of Au NPs to a certain extent. The disulfide-bond-modified Au NPs have the best stability and can remain stable for 15 minutes at 100 °C in a 2 M NaCl solution, because the disulfide groups attached to the surface of the Au NPs form a dense structure. 79 Next, Park et al facilely synthesized PEG-coated Au NPs by reducing the gold precursor. Due to the chelating effect of the anchoring group, the Au NPs can keep their stability for several months in a cell physiological environment simulated by a mixed solution of 3.0 M DTT and 2.0 M NaCl. 80 In addition to physiological environments, some Au NPs modified with PEG can maintain long-term stability at high temperatures. Since Au NPs are often used in the photothermal treatment of tumors, their thermal stability is also a main direction of current research. The latest study shows that the physical sputtering method can synthesize PEG-covered Au NPs with uniform size and shape and ultra-high thermal stability (100 °C), without cytotoxicity. 81 Except for PEG, some surfactants can also improve the stability of Au NPs. 82 In particular, they can slow down the deformation caused by the maturation of Au nanomaterials, thereby improving their thermal stability. For example, Au nanofluids synthesized using the Gemini surfactant butane-1,4-bis(N-tetradecyl-N,N-dimethylammonium bromide) have better thermal stability; the results of UV-Vis spectroscopy showed that they were stable at 150 °C, 140 °C, and 130 °C for 8 hours, 12 hours, and 20 hours, respectively. 83
Amine-Terminated Polymers
Organic amines are also commonly used as protecting groups to synthesize Au NPs. Since the amine molecules can cap the Au NPs in solution and the nanoparticles are stabilized covalently, the colloid has good dispersibility. For example, 2-methylaniline (MA) protects Au NPs with an average diameter of 20 nm; due to the oxidative polymerization of the amine, which forms a polymer shell on the surface of the Au NPs, they have excellent stability. 65 Rajesh Sardar et al used polyallylamine (PAAM) to synthesize PAAM-Au NPs. They then tested the prepared small-sized Au NPs (<3 nm) in solutions of different pH and found that they can still maintain good stability at pH 1.5 and 3.5. More interestingly, the Au NPs can be assembled into various structures at different pH values, which significantly expands their scope of application. 84 The latest report shows that polypropylene imine (PPI) can be used to synthesize highly stable dendritic Au NPs; the high density of functional groups on the surface of the nanoparticles significantly improves their stability under different physiological conditions (phosphate buffer solution, serum, Hanks buffer). 85 Susumu et al used maleimide as a ligand to terminate Au NPs, which can be stable for 10 days under 2 M NaCl and 0.5-1 M DTT conditions. 86 As an amide polymer, polyvinylpyrrolidone (PVP) can stabilize Au NPs and prevent their aggregation, and it can control the morphology of the nanoparticles well. Surprisingly, a minimal amount of PVP can achieve excellent stabilization of Au NPs. 87 Besides, some amine salts, such as polyallylamine hydrochloride (PAH), can also be used to prepare Au NPs (5-50 nm) with controllable size by in-situ growth; the synthesis method is simple, and the prepared Au NPs have good stability and biocompatibility. 88 Also, dendritic polyamidoamine (PAMAM) can be used as a template for modification to obtain highly stable Au NPs. The authors confirmed that the particular zwitterionic layer on the surface of the modified Au NPs limits the interaction between fibrinogen and the Au NPs, so they have higher stability in fibrinogen solution (within 24 hours). 89
Thiol-Terminated Polymers
Au NPs can be conjugated with a variety of groups, such as sulfhydryl groups, by simple chemical methods. 47,90 Thiols are compounds containing sulfhydryl functional groups and are usually cross-linked to Au NPs through Au-S bonds to protect and stabilize the nanoparticles. For example, previous work shows that Au NPs obtained by ligand exchange between citrate-capped gold and dithiols are very stable and resist the external environment, probably because the dithiol group of dihydrolipoic acid (DHLA) binds tightly to the gold surface. 91 Li et al prepared aliphatic thiol-stabilized Au NPs and confirmed that they maintain good stability even in 0.1 M dithiothreitol solution. 92 Based on the Brust method, Kornberg et al prepared Au NPs by a ligand exchange reaction, controlling the particle size by adjusting the ratio of thiol to HAuCl4. Perhaps surprisingly, the Au NPs produced by this strategy can remain stable in aqueous solution for several years under thiol protection. 93
Acid-Induced Synthesis of Highly Stable Au NPs
Acids can also be used to induce the synthesis of Au NPs that exhibit excellent stability in specific physiological environments. Acid functionalization can modify Au NPs and broaden their application range while improving their stability. Phosphonic acid (PA) is one example: the excellent hydrophilicity of the PA groups on the particle surface, together with the electrostatic repulsion and steric hindrance between them, protects the Au NPs. For example, ethylenediamine-tetramethylene phosphonic acid (EDTMP) can be used to prepare phosphonic acid-functionalized Au NPs. Zhang et al synthesized such particles and identified the characteristic peaks of P=O, PO3, and P-OH by Fourier transform infrared spectroscopy (FTIR), confirming that the phosphonic acid groups were successfully grafted onto the Au NP surface. In 25 mM PBS buffer (pH 7.0) the absorbance of the Au NPs was almost unchanged from its initial value, and further experiments showed that the absorbance remained virtually constant over the pH range 3.0-12.0; after 3 months of storage, no flocculation or aggregation was observed. 94 Besides phosphonic acid, other acids are also used to synthesize highly stable Au NPs. For example, Mohammad et al synthesized Au NPs coated with PEGylated deoxycholic acid (DCA), which remain stable over a wide temperature range (−78 °C to 48 °C) and a wide pH range (2.5-11). More surprisingly, owing to the high X-ray attenuation coefficient of gold and the sensitivity of deoxycholic acid toward specific tumor cells, PEGylated DCA@Au NPs are expected to be used in targeted tumor therapy and as contrast agents. 95 Cinnamic acid (CA) can also be used as a template to induce the self-assembly of Au NPs and significantly improves their stability; compared with the conventional chemical method, the Au NPs (5 nm) synthesized in this way still maintain excellent stability after storage at room temperature for 3 months. 96
Green Synthesis
The green synthesis of Au NPs is a hot spot in current research. It comprises two main categories: biological synthesis and biomimetic synthesis. Biological synthesis mainly uses extracts from plants and microbes (including bacteria and fungi) as stabilizers or reducing agents to synthesize gold nanomaterials. 97,98 Biomimetic synthesis uses biomolecules and water as reaction reagents to guide the synthesis of nanomaterials under defined reaction conditions, with the metabolites of living organisms as substrates. 99-101 Biomimetic synthesis overcomes some apparent drawbacks of biosynthesis, such as low yield, difficulty in controlling size and shape, and the need to further separate and purify the polydisperse gold nanomaterials obtained; it is a new synthesis strategy that evolved from biological synthesis. 102,103
Plant Extract-Mediated Synthesis
Nowadays, Au NPs synthesized from plant-based phytochemicals are extremely attractive for their unique efficacy and biocompatibility. 40 The plant-mediated method proceeds at room temperature and does not require additional chemical reagents, and the prepared Au NPs have useful properties such as antioxidant, antitumor, and antibacterial activity. 104-106 At present, the major drawback of stabilizing Au NPs with plant extracts is that their anisotropic growth makes it difficult to control the shape and size of the nanoparticles.
In terms of plant component-mediated synthesis, Jaewook Lee et al used active ingredients extracted from plants, including gallic acid (GA), protocatechuic acid (PCA), and isoflavones (IF), as reducing agents to synthesize functionalized Au NPs with very high biocompatibility and stability; the particles remain stable for three months because the hydroxyl groups of the phytochemicals carry a high surface charge, and the strong repulsion between them prevents the Au NPs from agglomerating. 107 The preparation of biogenic Au NPs from plants of high medicinal value such as Plumbago zeylanica, Dioscorea bulbifera, Gloriosa superba, and Gnidia glauca has also received much attention. Similarly, this relies on the hydroxyl groups of compounds such as alkaloids, reducing sugars, phenols, tannins, saponins, and flavonoids to bioreduce Au3+ ions to Au, while the carbohydrates of the plant extract may stabilize the Au NPs. 108-112 In general, HAuCl4 binds to plant extracts through carbon-chlorine bonds. 113 During synthesis, some of the gold seeds elongate without forming new gold nuclei because of incomplete reaction, resulting in some irregular aggregation of nanoparticles and anisotropic Au NPs. Moreover, glucose and starch can also act as reducing agents and stabilizers to synthesize Au NPs in different buffers; experiments confirmed that Au NPs synthesized in MES buffer have long-term stability and can be stored at room temperature for 17 months. 114 Similarly, glycerin extracted from natural oils and fats can be used to synthesize Au NPs: Rashida Parveen et al used glycerin as reducing agent and stabilizer to obtain uniformly sized Au NPs with excellent biocompatibility and stability, and the particle size can be controlled by the ratio of glycerin to water. 115 Given the catalytic ability of glycerol and the safety of the resulting Au NPs, this synthetic method is expected to find use in catalysis and biomedicine. Olive leaf extract can also serve as a reducing agent to prepare stable, non-toxic Au NPs; the synthesis is easy and proceeds at a comparatively high reaction rate. 116,117 Mango leaves can likewise be used to synthesize Au NPs, as their extract contains various active ingredients such as phenolic acids, terpenes, and glycosides. 118 Studies have shown that some mango leaf extracts can rapidly synthesize spherical Au NPs without heating, and the obtained nanoparticles have ultrahigh colloidal stability, remaining stable for more than 5 months at room temperature, probably owing to the active ingredients in the leaves. 119 The tannin in bayberry can also be used to obtain Au NPs with excellent biocompatibility; bayberry tannin serves as both reducing agent and stabilizer, and the particle size can be adjusted by the tannin concentration. This green method does not require other toxic chemical reagents and therefore has comparatively high practical value. 120 As a natural plant ingredient, gum arabic (GA) can also be used as a stabilizer and reducing agent to synthesize sterically stabilized Au NPs; spherical Au NPs synthesized with GA and NaBH4 show good stability under long-term storage and maintain physical stability for up to 5 weeks. 121
Nowadays, glycans have received extensive attention owing to their small molecular weight and their ability to bind specific receptors. Au NPs functionalized with different glycans by ligand exchange show excellent biocompatibility and maintain high stability in serum proteins. 122 This provides a new option for the synthesis of ultrastable and biocompatible Au NPs.
Microbe-Mediated Biosynthesis
Beyond natural plant ingredients, many microbes have also been found to synthesize Au NPs. These microbes mainly include fungi and bacteria. Fungi can secrete proteins that help regulate the morphology of Au NPs, and some bacteria can act as reducing agents to synthesize and stabilize Au NPs. 123,124 Microbes can stabilize Au NPs easily and quickly, at low cost, and in an environmentally friendly way. 125 Some microbes secrete proteins that further protect Au NPs and improve their stability in complex physiological environments. 126,127 Microbe-mediated synthesis of highly stable Au NPs is therefore expected to find wide application in many fields, and many studies on the synthesis of Au NPs by fungi and bacteria have been reported. For example, Aspergillus (WL-Au) can green-synthesize Au NPs with controllable size under different reaction conditions (Figure 5); the prepared Au NPs show high catalytic activity and can be used for the decolorization of dyes. 128 As a common fungus, mushroom extracts can also synthesize Au NPs; notably, the protein in the mushroom extract stabilizes the Au NPs and prevents their aggregation. 129 Bacterial green synthesis of Au NPs has likewise been a research hotspot in recent years. For instance, Au NPs can be prepared by reduction with Bacillus subtilis, yielding particles with robust antibacterial activity that are expected to be used in the biomedicine and food industries. 130 Beyond that, some marine algae, such as Spirulina platensis, can be used as raw materials to synthesize Au NPs quickly; owing to the many bioactive substances in Spirulina platensis, the prepared Au NPs have broad application prospects in the medical field. 131
Low Molecular Weight Protein Decorated Au NPs
Biomolecules have become some of the best candidates for stabilizing Au NPs owing to their multifunctional chemical groups, high binding ability toward metal atoms, and excellent biocompatibility. 132,133 Thanks to this stabilizing ability, Au NPs can remain stable under various physiological conditions, 132,134 and while stabilizing the particles, biomolecules can also be conjugated with different specific molecules to meet applications in biomedicine. 135,136 Proteins are one example. Not only are Au NPs immobilized by biomolecules, because the functional groups of amino acids bind directly to the nanoparticles through Au-S covalent bonds, but protein-decorated Au NPs also show significantly improved dispersion and anti-aggregation stability in biological matrices, which suits them for biosensing, diagnostic, and therapeutic applications. 137,138 Accordingly, studies have shown that Au NPs synthesized with certain proteins or amino acids exhibit excellent stability. For example, choline tryptophan and tetraethylammonium (TEA) can be used to prepare Au NPs, with the tryptophan group acting as the reducing agent; the resulting nanoparticles show superior stability at specific hemoglobin concentrations (100-200 µL/mL). 139 Ferritin, a protein found in the human body, is very safe and can react with multiple substances; it can be wrapped around the surface of Au NPs to enhance their stability, while other targeting molecules can be grafted onto the particles for tumor treatment. Studies have shown that ferritin-assembled Au NPs retain excellent thermal stability at 62.5 °C and do not aggregate in 800 mM NaCl solution. 140 With deepening research, it has been found that decorating Au NPs with specific human proteins can improve their long-term stability and avoid immune rejection, 141 which is promising for drug delivery systems that use Au NPs as carriers. On the other hand, some protein-decorated Au NPs exhibit characteristics not available in conventionally synthesized Au NPs. For instance, amino acids and peptides added to an Au NP solution and grown in situ yield chiral Au NPs whose unique optical activity is useful in nanomedicine. 142 In addition to human proteins, proteins extracted from some fungi can also significantly enhance the biocompatibility and stability of Au NPs. For example, Au NPs prepared using protein from Rhizopus oryzae cells as a capping agent show almost the same absorption wavelength in physiological buffer over the pH range 6.5-7.5, and their good biocompatibility has been confirmed in hemolysis tests. 143
Designed and Controlled Genetic Material for the Synthesis of Au NPs
At present, genetic materials (such as deoxyribonucleic acid and ribonucleic acid) are often used as templates to synthesize or modify Au NPs owing to their unique self-assembly properties. 144 These nucleotide-modified Au NPs show excellent biocompatibility. 145 More importantly, the nucleic acids can protect the Au NPs by forming a dense layer on the particle surface through chemical bonds, further improving stability. 146 The most common synthesis strategy is to conjugate DNA to Au NPs via Au-S bonds. 147,148 On the other hand, Au NPs synthesized with genetic material are specific and can selectively bind particular molecules, 135 so they are expected to be widely used in biomedicine; DNA-conjugated Au NPs can also serve as sensors to detect metal ions. 149 Liu et al systematically studied how different factors influence the stability of DNA-Au NPs. They confirmed that a higher salt concentration allows DNA to adsorb on the Au NP surface faster and enhances stability, that a lower pH favors the formation of a dense DNA layer on the particle surface, and that polar solvents and long-chain DNA protect Au NPs better. 150 In recent years, much work has been devoted to preparing highly stable DNA-Au NPs. Hwu et al prepared DNA-conjugated Au NPs and significantly improved their stability by regulating the DNA density; the particles maintained excellent stability through five freeze-thaw tests (−80 °C). 151 Cheng et al added biotin and diluents to differently functionalized DNA adaptors to conjugate them with Au NPs and developed a new DNA-Au NP synthesis strategy; surprisingly, the conjugates show ultra-high stability, remain well dispersed in 4 M NaCl solution, and show almost unchanged absorbance over five freeze-drying cycles. 152 Besides DNA, some RNA aptamers can also modify Au NPs owing to their excellent affinity and specificity. Miao et al stabilized Au NPs with different theophylline RNA aptamers, which showed excellent salt tolerance and remained stable in 70 mM NaCl solution; moreover, the nanoparticles can quickly and accurately detect the theophylline concentration in the human body. 153 David et al used a self-assembly strategy to synthesize Au-siRNA NPs that remain stable for 24 hours in 10% fetal bovine serum, so these nanoparticles are expected to serve as ideal functional probes in tumor therapy. 154
Synthesis of Highly Stable Ultra-Small Au NPs
Compared with conventional plasmonic Au NPs, ultra-small Au NPs (1-3 nm in diameter) with atomic-level precision have different optical and magnetic properties owing to enhanced quantum size effects. 155-157 Among these, photoluminescence is a unique property of the ultra-small Au NP surface state, with strong emission in the NIR region due to the ultra-small size, 158,159 and ultra-small Au NPs are paramagnetic. 160,161 After decades of research, significant progress has been made in preparing and applying ultra-small Au NPs. At this stage, four primary approaches are used to synthesize them: bottom-up methods, top-down methods, dynamic control methods, and green synthesis methods. 157,162-164
Bottom-Up
The bottom-up synthesis strategy uses thiolates or other ligands (such as biomolecules or dendritic polymers) to protect the ultra-small Au NPs. 165-167 Specifically, chloroauric acid forms a complex with a phase-transfer agent, the ligand then reduces the Au3+ in the complex to Au+, and the template protects the ultra-small Au NPs from agglomeration. 162 Biomolecules and dendritic polymers are commonly used as templates. Biomolecules allow synthesis under mild reaction conditions and give products with great biocompatibility, but the yield of ultra-small Au NPs prepared in this way is lower. Dendritic polymers used as hard templates give higher yields, but drawbacks such as poorer biocompatibility and longer reaction times limit their application. Many studies now show that the size of ultra-small Au NPs can be precisely controlled by adjusting the ratio of reducing agent to chloroauric acid, yielding size-controlled water-soluble or organic-soluble particles. 97,168,169 Xie et al precisely synthesized ultra-small Au NPs with high quantum yields using thiol molecules as templates, 168 and egg white has also been used to synthesize ultra-small Au NPs of controlled size. 170
Top-Down
The top-down method, also called the etching method, is a widely adopted strategy that enables the controlled synthesis of ultra-small Au NPs. 171,172 In this approach, polydisperse Au NPs are etched into small ultra-small Au NPs using etchants such as dihydrolipoic acid or polyethyleneimine. 173,174 In the presence of the etchant, large Au NPs are continuously etched into smaller particles, and the resulting ultra-small Au NPs adopt the most stable structure. For example, Wei et al precisely synthesized ultra-small Au NPs with good thermal stability by thiol etching in the presence of a protective agent. 175 Some natural plant components can also be used to etch and prepare ultra-small Au NPs: Chen et al synthesized highly biocompatible ultra-small Au NPs by a stepwise etching method using mustard acid as both etchant and reducing agent. 176
Dynamic Control Methods
Recently, dynamic control methods have been increasingly used for the synthesis of ultra-small Au NPs. Building on the other approaches, precise control is implemented by varying the reaction temperature and time, the pH of the reaction system, and the concentration of the reducing agent. 163,164 Real-time tuning thus yields ultra-small Au NPs that meet expectations. Lahtinen et al achieved controlled synthesis of ultra-small Au NPs that are stable at different pH values by adjusting the ratio of methanol to water. 169 Wang et al prepared ultra-small Au NPs protected by alkyne ligands, which spontaneously isomerize to a more stable structure (Au23-2 → Au23-1) and have good thermal stability. 177 Crudden et al first reported super-stable ultra-small Au NPs using an N-heterocyclic carbene (NHC) ligand; the NHC-modified, methyl-monosubstituted ultra-small Au NPs were stable at 70 °C for more than 24 h owing to the strong stabilizing interaction between the ligand and gold. 178
Green Synthesis Methods
Some new strategies for the precise synthesis of atomic-scale ultra-small Au NPs have been reported in recent years. 156,179,180 Green synthesis of ultra-small Au NPs mediated by natural products is one of them. 97,99 Zhang et al prepared highly stable ultra-small Au NPs by a simple one-pot method using polyphenols from green tea as reducing and stabilizing agents. 181 Ghosh et al successfully synthesized ultra-small Au NPs on different bacteria, with the bacteria acting as templates and their internal proteins interacting with gold to provide the stabilizing force. This highly safe, low-cost, and rapid preparation method offers new ideas for future nanomaterial synthesis strategies. 182
Other Ways to Improve the Stability of Au NPs
Some physical methods can also effectively improve the long-term stability of Au NPs after synthesis. Centrifugation is one of them: Au NPs centrifuged under certain conditions (7000 g, 20 minutes) and characterized by DLS were found to remain stable when stored at 4 °C for 20 days, providing new ideas for improving the stability of Au NPs. 183 In addition, high-molecular-weight PEG can be used as a depletant to stabilize Au NPs, achieving excellent stability under long-term storage through the depletion force without destroying their surface properties (Figure 6); this approach can further enrich the applications of Au NPs. 184 Depletion stabilization can thus serve as a technique to improve the spatial stability of Au NPs, allowing many colloidal properties and reactions to be explored over long periods.
In conclusion, conventional chemical methods may not be sufficient to protect Au NPs in some cases, leading to aggregation. Currently, a number of polymer-modified Au NPs exhibit excellent stability under different physiological environments. For example, the strong binding of NHC ligands to Au NPs keeps the particles stable long term in various biological media (different pH, GSH, salt solutions). 69,71 PEG significantly improves the steric stability of the colloids, allowing the Au NPs to remain well dispersed under different pH and salt-ion conditions. 185 Similarly, PVP-protected Au NPs exhibit excellent stability in some physiological environments, especially at high citrate and citric acid concentrations. 87,185 Au NPs obtained by natural product-mediated green synthesis remain stable under long-term storage, and particles modified with biomolecules (proteins and DNA) remain stable for long periods in biological matrices and at extreme temperatures. The various stabilization methods for Au NPs are summarized in Figure 7, and the stabilizers used in the different synthesis methods are listed in Table 1. Because Au NPs can now be made with long-term stability and satisfactory stability in ionic solutions and biological matrices, their application prospects in biomedicine are receiving more and more attention.
Application of Au NPs in Biomedicine
Continuous in-depth research on Au NPs has given them a vital position in biomedicine. Owing to their small size, Au NPs can accumulate in tumor tissues through the enhanced permeability and retention (EPR) effect, which helps achieve better therapeutic outcomes. 186 Together with their unique physical and chemical properties, this gives Au NPs bright prospects in nanomedicine. 179,187-190 Herein, we focus on the most recent studies in biomedicine, including drug delivery vehicles, bioimaging, photothermal therapy (PTT), clinical diagnosis, nanozymes, radiotherapy (RT), and other applications.
Drug Delivery
Chemotherapy is a primary clinical treatment method, but it has obvious disadvantages. First, the poor solubility and stability of some drugs limit their therapeutic effect. More importantly, direct administration cannot enrich the medicine at the tumor site, which weakens the drug's efficacy and causes many side effects. There is therefore an urgent need for carriers that load the drug, extend its blood half-life, protect its activity, and achieve enrichment and controlled release at the tumor site. With their easily controlled size, active surface chemistry, and good biocompatibility, Au NPs are widely used as ideal carriers for drug delivery. 191 Drugs can be combined with Au NPs by physical embedding or chemical bonding. Tan et al conjugated specific DNA aptamers to Au NPs through self-assembly and loaded doxorubicin (DOX) on the surface of the nanocomposite to achieve controlled drug release under NIR irradiation. 192 Chen et al directly coupled Au NPs with methotrexate (MTX) to form a nanocomposite that is released in lung tumor tissues, achieving enhanced therapeutic effects. 193 Sulaiman et al loaded biologically active hesperidin into Au NPs by simple stirring; this drug delivery system with good biocompatibility can significantly inhibit the growth of human breast cancer cells and effectively relieve inflammation. 194 For Au NP-based drug delivery systems, specific substances (such as folic acid, red blood cell membranes, or neutrophil membranes) can be used to modify the particle surface to achieve targeted therapy and better curative effects. 195 For example, Au NPs co-protected by PEG and 4-mercaptobenzoic acid (MBA) can serve as targeting carriers to deliver DOX, significantly improving the therapeutic effect on breast cancer. 196 Moreover, because of their high photothermal conversion efficiency, Au NPs can provide a synergistic photothermal effect while delivering drugs. Studies have shown that when DOX is loaded into an Au nanocage wrapped with a cancer cell membrane, the composite nanomaterial achieves highly efficient DOX delivery and causes breast cancer cell apoptosis under auxiliary NIR irradiation. 197 With continuing research, it has become clear that, in addition to serving as drug carriers alone, Au NPs can be combined with other substances to form composites that further exploit their advantages, mainly responsive polymers, biomolecules, and inorganic nanomaterials. As a widely used heat-sensitive polymer, poly(N-isopropylacrylamide) can be combined with rod-shaped Au NPs as a drug delivery vehicle; this responsive polymer effectively reduces the toxicity of the loaded drug, and the drug is released at a controllable rate when the carrier is irradiated with NIR light. 198 For nucleic acids, miRNA can be combined with Au NPs and released in tumor cells with high glutathione concentrations, enabling efficient gene therapy. 199 Au NPs can also be hybridized with iron to prepare composite nanoparticles with a metal-organic framework (MOF) structure; Au-MOF NPs loaded with camptothecin are destroyed in the special physiological environment of the tumor to release the drug, and the ·OH produced through the Fenton reaction enables synergistic therapy. 200 Besides, a novel drug delivery system was developed by Zhu et al.
They loaded vancomycin onto ultra-small Au NPs to achieve controlled release of the drug and enabled real-time monitoring of the release process through the generated fluorescence signals. This research provides new ideas for multifunctional, drug-delivery-based platforms built on Au NPs. 180
Bioimaging
The main biomedical imaging methods are magnetic resonance imaging, CT imaging, and photoacoustic imaging. 201,202 These methods require a contrast agent to enter body tissues or organs to improve image contrast; because nanomaterials have a long half-life in the blood, they can also increase the accuracy and specificity of imaging. Accordingly, more and more nanomaterials are being applied in bioimaging. 201,203,204
Among them, Au NPs have become one of the ideal contrast agent candidates for bioimaging. In CT imaging, compared with traditional contrast agents, Au NPs offer high biocompatibility, low toxicity, and easy functionalization; moreover, the high X-ray absorption coefficient and high contrast of gold make Au NPs an ideal contrast agent material. 205 Under certain conditions, the X-ray signal of Au NPs decays five times more slowly than that of iodine at the same concentration. 206 More importantly, suitable modifiers can be designed to functionalize the Au NP surface for targeted delivery to the organs and tissues to be imaged, thereby improving the imaging effect. 207 Applications of Au NPs as contrast agents are therefore growing. Because of the complex physiological environment in the blood, Au NPs intended as contrast agents often need additional surface modification to enhance their stability. Studies have shown that PEG-modified, relatively small Au NPs (38 nm) have excellent stability and an increased blood half-life, making them ideal blood-pool contrast agents with clear advantages over the traditional contrast agent iodine. 208 New research confirms that Au NPs functionalized with glutamic acid can also serve as contrast agents owing to their large X-ray attenuation coefficient and excellent stability under physiological conditions. 209 Beyond conventional contrast applications, Au NPs can be used for targeted imaging: Sun et al synthesized glycol chitosan-coated Au NPs that enable specific CT imaging of liver tumors. Au NPs synthesized from natural products can also serve as X-ray contrast agents. For example, Au NPs stabilized and reduced by gum arabic show excellent biocompatibility and remain stable in electrolyte solution (2 M NaCl) and serum solution (1 mg/mL HSA or 1 mg/mL BSA); furthermore, the contrast effect of GA-Au NPs is about three times that of iodixanol at a similar concentration. 210 Photoacoustic imaging combines optical and ultrasound imaging and is an emerging non-invasive imaging technology with high resolution and strong tissue penetration depth. 211 The LSPR effect, controllable size, and high photothermal conversion capability of Au NPs give them broad prospects in photoacoustic imaging. Tan et al constructed a highly specific gold-coated@Fe 3 O 4 multifunctional nanoplatform that enables both magnetic resonance imaging and photothermal therapy. 212 Many groups have confirmed that Au NPs perform well as photoacoustic contrast agents. Chen et al synthesized small rod-shaped Au NPs (50 nm) by a seed-mediated method; these particles show very high tumor penetration efficiency and generate a photoacoustic signal 3.5 times stronger than conventionally sized Au nanorods (130 nm). 8 Zhang et al developed PEG-modified Au NPs (20-50 nm) that are effectively enriched in tumor tissues and achieve excellent photoacoustic imaging. 213 On the other hand, luminescent ultra-small Au NPs have easily tunable size, facile surface functionalization, and superior safety, making them strong candidates for bioimaging; among them, some biomolecule-modified ultra-small Au NPs are of particular interest because of their specific targeting and efficient renal clearance. 158
For example, ultra-small Au NPs synthesized with mercapto-cyclodextrin have excellent luminescence properties, with maximum excitation intensity at 1050 nm; surprisingly, imaging was still possible even at a concentration of 1 μM, and follow-up studies showed that ultra-small Au NPs synthesized by this method are also promising for protein labeling in tumor-targeted imaging. 214 Zhang et al prepared highly biocompatible ultra-small Au NPs doped with other atoms using glutathione; the ultra-small size gives the particles a greater penetration depth (0.61 cm), while the dopant atoms (Cu, Zn) improve the imaging performance, enabling multifunctional real-time imaging in vivo. 215 Chen et al built a nanoplatform for integrated treatment that enables dual-mode NIR fluorescence and CT imaging as a bioprobe, while its excellent photothermal conversion efficiency allows it to be used for photothermal therapy. 216
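The CT-contrast argument made earlier in this section (that gold attenuates X-rays more strongly than iodine at the same concentration) can be made concrete with a back-of-the-envelope comparison. The sketch below uses approximate mass attenuation coefficients near 100 keV; the coefficient values and the chosen concentration are assumptions for illustration only, and the in-vivo advantage reported for Au NPs also reflects their longer circulation time and higher achievable local loading.

```python
# Minimal sketch (illustrative, not from the cited studies): why gold gives
# more CT contrast than iodine at the same mass concentration. Contrast scales
# with the extra linear attenuation delta_mu = (mu/rho)_element * concentration.
# The mass attenuation coefficients below are approximate values near 100 keV
# and are used here only as assumptions for illustration.

MU_RHO = {"Au": 5.16, "I": 1.94}   # cm^2/g, approximate, ~100 keV

def delta_mu(element: str, conc_mg_per_ml: float) -> float:
    """Extra linear attenuation (1/cm) contributed by the agent alone."""
    conc_g_per_cm3 = conc_mg_per_ml / 1000.0
    return MU_RHO[element] * conc_g_per_cm3

conc = 10.0  # mg/mL of Au or I, same mass concentration for both
ratio = delta_mu("Au", conc) / delta_mu("I", conc)
print(f"Au/I attenuation ratio at equal mass concentration: {ratio:.1f}x")
# -> about 2.7x with these coefficients; reported in-vivo advantages are larger
#    because they also include circulation and loading effects.
```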
Photothermal Therapy
While traditional hyperthermia destroys tumor tissue, it also damages normal tissue. As a non-invasive alternative, photothermal therapy uses nanoparticles as photothermal agents and irradiates the tumor with NIR light (808 nm), accurately destroying tumor tissue without damaging normal tissue and thereby reducing side effects. 217 As one of the most important inorganic nanomaterials in biomedicine, Au NPs play an essential role in photothermal therapy because of their high light-to-heat conversion efficiency, strong NIR absorption, LSPR effect, and easily controlled size and shape. Generally, Au NPs are used in photothermal therapy in two ways: either pure Au NPs serve as the photothermal agent, or they are combined with other substances or loaded with drugs for synergistic treatment. For the former, irradiated Au NPs have been reported to significantly inhibit colon cancer cells, with cell viability of only about 50% after 5 minutes of 808 nm laser irradiation. 218 Rod-shaped Au NPs show excellent photothermal treatment effects owing to their extremely high extinction coefficient; studies have shown that PEG-modified Au nanorods exert therapeutic effects within 72 hours and eliminate breast tumors in mice within 10 days. 219 Research on photothermal therapy has now turned to Au-based composite nanomaterials, whose surfaces can be modified to achieve specific functions; these hybrids can be combined with drugs or doped with other substances to further enhance the photothermal effect. For example, Au NPs encapsulated by polypyrrole (PPy) exhibit ultrahigh light-to-heat conversion efficiency (70%) owing to the polymer's unique chain structure and self-assembly behavior, and subsequent experiments confirmed an excellent tumor photothermal ablation effect under 808 nm NIR irradiation (Figure 8). 220 The latest research shows that Au@Pt composite dendritic NPs synthesized by ultrasound combine the characteristics of Au and Pt; the high photothermal conversion efficiency of Au and the photothermal stability of Pt make this composite an ideal material for photothermal therapy. 221 Moreover, adding a photosensitizer can further enhance the photothermal treatment effect: porphyrin derivatives coupled to Au NPs generate singlet oxygen during heating to kill cancer cells effectively, achieving highly efficient photothermal treatment. 222
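The light-to-heat conversion efficiencies quoted in this section (for example, the ~70% reported for PPy-coated particles) are usually extracted from a simple energy-balance (Roper-type) analysis of laser heating and cooling curves. The sketch below shows one common form of that calculation; all numerical inputs are illustrative assumptions, not data from the cited studies.

```python
# Minimal sketch of how photothermal conversion efficiency is commonly
# estimated (Roper-type energy balance); the numbers below are illustrative
# assumptions only.

def photothermal_efficiency(delta_T_max, tau_s, m_water_g, laser_power_W,
                            A_808, Q_dis_W=0.0, cp_water=4.18):
    """eta = (h*S*dT_max - Q_dis) / (P * (1 - 10^-A_808)).
    h*S (W/K) is obtained from the cooling time constant: h*S = m*cp / tau_s."""
    hS = m_water_g * cp_water / tau_s               # W/K
    absorbed = laser_power_W * (1.0 - 10 ** (-A_808))
    return (hS * delta_T_max - Q_dis_W) / absorbed

# Illustrative numbers: 1 mL dispersion, 22 K temperature rise, 180 s cooling
# time constant, 1 W 808 nm laser, absorbance 1.0 at 808 nm, solvent-only
# heating neglected (Q_dis = 0).
eta = photothermal_efficiency(delta_T_max=22.0, tau_s=180.0, m_water_g=1.0,
                              laser_power_W=1.0, A_808=1.0)
print(f"eta = {eta:.0%}")   # -> roughly 57% with these inputs
```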
Clinical Diagnosis
Compared with traditional clinical methods, nanosystems based on noble metals can be used quickly and accurately for biomedical diagnosis and have received extensive attention in recent years. Owing to their superior biocompatibility and unique physical and chemical properties, Au NPs are increasingly used as diagnostic tools (such as biosensors or nanoprobes) to test clinical indicators. On the one hand, specific oligonucleotides can be integrated on Au NPs to recognize sequence-specific DNA or RNA in the sample, which can then be identified and analyzed by methods such as colorimetry and fluorescence detection. 223-225 On the other hand, owing to the LSPR effect and Raman scattering properties of Au NPs, which can enhance or amplify the SPR signal, they are often used to detect the levels of disease biomarkers for rapid diagnosis. 226,227 In recent years, many researchers have worked to build Au NP-based platforms for fast and accurate measurement of clinical indicators. Zhu et al developed a multifunctional nanosystem that monitors breast cancer changes in vivo in real time: they hybridized specific aptamers with fluorescent DNA strands, attached them to Au NPs through Au-S bonds, and finally loaded drugs into the nanosystem to combine fluorescence monitoring of tumor cell expression, drug delivery, and photothermal therapy (Figure 9). 228 Nietzold et al prepared Au NPs with diameters of 20-60 nm, fixed anti-α-fetoprotein antibodies on their surface, and constructed a nanoprobe for rapid detection of the tumor marker α-fetoprotein, capable of detecting serum concentrations of 0.1-0.4 μg·mL−1. 229 In addition, specific DNA aptamers conjugated to Au NPs have been used as probes to detect the cancer cell marker proteins PDGF and VEGF at the nM level by colorimetric and fluorescence methods. 230 Compared with conventional clinical diagnostic methods, test tools based on Au NPs can provide better results. For example, Au NPs can detect hepatoma up-regulated protein RNA in human urine, enabling early diagnosis of bladder cancer; remarkably, this low-cost diagnostic method has strong specificity (88.5%) and sensitivity (94%), a low detection limit, and a detection performance that even exceeds conventional PCR testing. 231 Gordon et al prepared polystyrene-modified rod-shaped Au NPs that rapidly read out Raman signal intensity in urine and quantitatively analyze the representative tumor marker acetyl amantadine (AcAm), with a detection limit of 16 ng/mL. 232 Using Au NPs to construct microchips that detect blood biomarker levels is also a hot spot in current research. The latest work describes a diagnostic technology that uses electrically activated nanoflow chips to capture extracellular vesicles (EVs) released into the blood by melanoma cells; at the same time, the chip is combined with a particular type of antibody-conjugated Au NPs that adsorb molecules unique to the surface of melanoma cell EVs.
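Detection limits such as the 16 ng/mL value quoted above are typically derived from a calibration curve together with the spread of blank measurements. The sketch below applies the common LOD = 3.3 × sigma(blank)/slope rule to synthetic data; the signal values, units, and replicate counts are assumptions chosen only to illustrate the procedure.

```python
# Minimal sketch (illustrative data, not from the cited assays): estimating a
# detection limit for a colorimetric or Raman Au NP probe from a calibration
# curve, using the common LOD = 3.3 * sigma_blank / slope rule.

import numpy as np

conc_ng_per_ml = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # standards
signal = np.array([0.021, 0.055, 0.090, 0.162, 0.305, 0.598])   # e.g. A650/A520
blank_replicates = np.array([0.020, 0.023, 0.019, 0.022, 0.021])

slope, intercept = np.polyfit(conc_ng_per_ml, signal, 1)  # linear calibration
sigma_blank = blank_replicates.std(ddof=1)
lod = 3.3 * sigma_blank / slope
print(f"slope = {slope:.4f} per (ng/mL), LOD = {lod:.1f} ng/mL")
```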
Nanozyme
The unique enzyme-like activity of ultra-small Au NPs, the catalytic sites on their surface, and their good stability and biocompatibility give them potential as nanozymes in biomedicine. 235-237 For instance, dendritic polymer PAMAM-modified ultra-small Au NPs can catalyze the decomposition of hydrogen peroxide to oxygen in an acidic environment, achieving enhanced photodynamic therapeutic effects in combination with photosensitizers. 238 Atomically engineered ultra-small Au NPs can provide enzyme-like activity while maintaining high stability, resulting in efficient antioxidant and catalytic activity. 239 Precise synthesis of highly selective, atomic-level artificial enzymes has become a hot research topic in recent years. Zhang et al developed gold-based nanozymes possessing CAT and SOD enzyme activities, which significantly reduce reactive oxygen species and alleviate neuroinflammation. 240 Recent studies show that the atomic-level Au24Ag1 cluster enzyme has ultra-high physiological stability, and its unique CAT- and GPx-like activities can effectively inhibit inflammatory molecules in the brain, suggesting an important role in nanomedicine. 241 On the other hand, antibacterial action is an essential property of nanozymes. Ultra-small Au NPs (<2 nm) have been found to interact with bacteria and destroy their cell membranes, exhibiting significant antibacterial activity that is not observed for conventionally sized Au NPs. 242-244 Xie et al synthesized ultra-small Au NPs with 6-mercaptohexanoic acid as ligand and systematically investigated their antibacterial activity, finding that the particles (<2 nm) killed more than 90% of Staphylococcus aureus, Staphylococcus epidermidis, Bacillus subtilis, Escherichia coli, and Pseudomonas aeruginosa; further studies attributed this to the ability of ultra-small Au NPs to induce ROS production. 245 Gu et al synthesized ultra-small Au NPs by a simple one-step method that promote ROS release within Clostridium difficile and disrupt its cell membrane, offering a new avenue for the treatment of Clostridium difficile infection. 246 Au NPs prepared from Gloriosa superba leaf extracts can interact with biological membranes and cause cell death, exhibiting significant antibacterial activity and holding promise as antimicrobial agents. 247 Chopade et al used an extract of Plumbago zeylanica to facilely synthesize Au NPs that exhibited remarkable antibacterial effects against many bacteria. 110
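Nanozyme activity of the kind described above is usually quantified with Michaelis-Menten kinetics, reporting an apparent Km and Vmax obtained from initial-rate measurements at varying substrate concentration. The sketch below fits synthetic rate data to that model; the substrate, units, and rate values are illustrative assumptions rather than results from the cited studies.

```python
# Minimal sketch (synthetic data, not from the cited studies): characterizing
# nanozyme activity with Michaelis-Menten kinetics. Km reflects apparent
# substrate affinity (e.g. toward H2O2 or TMB) and Vmax the maximal rate.

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Illustrative initial-rate data: substrate concentration (mM) vs rate (uM/s).
S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v = np.array([0.9, 1.6, 2.6, 4.2, 5.4, 6.3, 7.0])

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=[7.0, 0.5])
print(f"Vmax = {Vmax:.1f} uM/s, Km = {Km:.2f} mM")
```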
Cancer Radiotherapy
Like conventional Au NPs, ultra-small Au NPs also play an essential role in tumor treatment. 179 Their excellent safety profile, long blood half-life, and the enhanced EPR effect afforded by their small size create the conditions for their use in tumor radiotherapy. Moreover, because of its high atomic number, gold absorbs radiation strongly and is therefore an ideal radiosensitizer. 248,249 Xie et al designed a novel glutathione-protected ultra-small Au NP radiotherapy agent in which glutathione significantly enhances drug accumulation at the tumor site, while the strong radiation absorption of gold effectively improves the radiotherapy effect. 250 Basilion et al synthesized PSMA peptide-modified ultra-small Au NPs in situ and confirmed that the targeted particles, combined with radiotherapy, significantly inhibited tumor growth compared with controls. 251 Xing et al prepared ultra-small Au NPs templated on a cyclic RGD peptide that maintained excellent stability in different physiological environments (DMEM medium, FBS serum, etc.); animal experiments confirmed their enhanced radiosensitizing effect and specific targeting ability, with tumor growth significantly inhibited after treatment. 252 Kim et al used Au NPs as radiosensitizers for radiotherapy of melanoma and found that the nanoparticles effectively killed cancer cells and inhibited their growth in the presence of X-rays, further enriching the applications of Au NPs in cancer radiotherapy. 253
Other Biomedical Application
Au NPs also have a wide range of applications in gene therapy, photodynamic therapy, and related areas. 254,255 For example, Xu et al synthesized chitosan-coated Au NPs that can carry the p53 gene and treat breast cancer cells efficiently; more importantly, this nanoplatform enables photothermal/gene therapy as well as real-time imaging. 256 Russell's group prepared lactose-modified targeted Au NPs that help overcome the hydrophobicity of the photosensitizer zinc phthalocyanine; this approach enhances photodynamic therapy, reaching 90% cytotoxicity against the human breast cancer cell line SK-BR-3. 257 These examples further demonstrate the enormous potential of Au NPs in biomedicine.
Conclusion and Perspective
In this review, we have surveyed various strategies for preparing highly stable Au NPs, including polymer protection, green synthesis, and size-controlled methods. Thanks to their excellent biocompatibility and stability in various physiological environments, such particles have promising applications in drug delivery, bioimaging, photothermal therapy, clinical diagnosis, nanozymes, radiotherapy, and other biomedical fields.
However, many challenges remain in the preparation and biomedical application of highly stable Au NPs. Conventional chemical methods require reagents as reducing or protective agents to help synthesize Au NPs, but these reagents are difficult to remove after the reaction. In in vivo applications, high doses of Au NPs are often required to achieve therapeutic effects in drug delivery, bioimaging, nanozyme catalysis, radiotherapy, and photothermal therapy of cancer; regrettably, the toxicity of such high doses to the organism is still unclear, and further clinical studies are needed. On the other hand, for bioimaging and early clinical diagnosis, it is essential to keep improving the sensitivity and specificity of Au NP probes so that accurate and rapid imaging and diagnosis can be achieved in complex body fluid environments.
In conclusion, we believe that synthesis strategies for highly stable Au NPs will be further developed and functionalized to meet biomedical applications, thereby making remarkable contributions to human health.
|
2021-09-09T20:38:06.620Z
|
2021-08-01T00:00:00.000
|
{
"year": 2021,
"sha1": "70f431a52521d2cd9ab87a95c88a9012c413b960",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=73190",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49cecc3609b79f894dada3597128f2cf3df7e9ea",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
211524861
|
pes2o/s2orc
|
v3-fos-license
|
Management of Neuroinflammatory Responses to AAV-Mediated Gene Therapies for Neurodegenerative Diseases
Recently, adeno-associated virus (AAV)-mediated gene therapies have attracted clinical interest for treating neurodegenerative diseases including spinal muscular atrophy (SMA), Canavan disease (CD), Parkinson’s disease (PD), and Friedreich’s ataxia (FA). The influx of clinical findings led to the first approved gene therapy for neurodegenerative disorders in 2019 and highlighted new safety concerns for patients. Large doses of systemically administered AAV stimulate host immune responses, resulting in anti-capsid and anti-transgene immunity with implications for transgene expression, treatment longevity, and patient safety. Delivering lower doses directly to the central nervous system (CNS) is a promising alternative, resulting in higher transgene expression with decreased immune responses. However, neuroinflammatory responses after CNS-targeted delivery of AAV are a critical concern. Reported signs of AAV-associated neuroinflammation in preclinical studies include dorsal root ganglion (DRG) and spinal cord pathology with mononuclear cell infiltration. In this review, we discuss ways to manage neuroinflammation, including choice of AAV capsid serotypes, CNS-targeting routes of delivery, genetic modifications to the vector and/or transgene, and adding immunosuppressive strategies to clinical protocols. As additional gene therapies for neurodegenerative diseases enter clinics, tracking biomarkers of neuroinflammation will be important for understanding the impact immune reactions can have on treatment safety and efficacy.
The influx of new findings from multiple gene therapies undergoing preclinical and clinical testing has highlighted new hurdles for treatment efficacy and safety concerns for patients. An incomplete understanding of disease pathophysiology, limited access to target tissues within the central nervous system (CNS), and complex disease presentations make therapeutics development and outcome measurements difficult for disorders of the CNS. Additionally, early works in the field demonstrated
Neuroinflammation in Adeno-Associated Virus (AAV)-Mediated Gene Therapies
Although the overall immunogenicity of AAV-based gene therapies is well characterized [6,7,9], immunogenicity from CNS-directed AAV delivery has not been widely investigated. This is partially due to the broadly held belief that the brain and spinal cord reside in an immune privileged compartment, protected by the blood-brain barrier (BBB) [22]. However, recent gene therapy studies for CNS disorders reveal a significant involvement of the immune system in the brain [23,24] and unexpected immune reactions have been reported in several studies of AAV delivery directly into the CNS compartment [25,26].
Neuroinflammation is already a feature of many neurodegenerative diseases such as AD, Parkinson's disease (PD), and multiple sclerosis (MS) [27]. Neuroinflammation involves CNS-resident cells (e.g., microglia, astrocytes, oligodendrocytes), peripheral immune cell infiltrates (T cells), breakdown of the blood-brain barrier, pro-inflammatory cytokines, and other mechanisms (for detailed reviews on the cell types and mechanisms associated with neuroinflammation, see Ransohoff et al. 2010, 2012, 2015, and 2016 [24,27-29]). Inflammation can be initiated within the CNS compartment, likely microglia mediated, or from outside of the CNS and mediated by infiltrating myeloid cells. T cells can become activated in the periphery and traffic into the CNS in response to peripheral antigens. B cell-mediated humoral responses can be initiated from the periphery or from within the CNS, and in neurodegenerative disease the relevant antibodies are often present in both serum and cerebrospinal fluid (CSF) [24,28,29]. The extent of involvement of each component differs between diseases, and the mechanisms involved in neuroinflammation in response to AAV exposure likely differ as well. The effect of exacerbating the immune system by exposure to AAV is unknown but an important consideration, especially in this class of diseases.
Because AAV-related neuroinflammation is only just emerging as an important subject in gene therapy, there has not been an emphasis on understanding the cell types and mechanisms involved or on collecting extensive biomarker data as part of clinical trial design. However, with increasing reports of neuroinflammation more recently [30,31], additional biomarkers will need to be incorporated into future trial designs as a way to track and better understand these issues. For example, blood, CSF, and neuroimaging are the most commonly used sources of biomarkers in neurodegeneration and neuroinflammation due to their non-invasive nature and versatility [32]. Current AAV-based gene therapy trials typically evaluate neuroinflammation by antibodies against the vector capsid and/or transgene in CSF and blood, T cell responses against capsid and/or transgene by enzyme-linked immune absorbent spot (ELISPOT) assays, and the presence of pleocytosis in the CSF after dosing [1,5,9,31]. As of now, neuroimaging is typically included as a marker of disease status to evaluate treatment efficacy but is not leveraged to measure neuroinflammation. For example, magnetic resonance imaging (MRI) could be utilized to evaluate inflammation-related neuropathology such as white matter changes, ventricular enlargement [33], or cell death, and BBB breakdown by measuring leakage of gadolinium to the periphery [34]. Additionally, functional imaging can be used to monitor the activation of resident immune cells and begin to understand the mechanisms behind gene therapy-related neuroinflammation [35,36]. Solutes in the CSF including cytokines, glial fibrillary acidic protein (GFAP), and neurofilament proteins could also be employed for tracking neuroinflammation [37]. Since this is an emerging topic, comprehensive biomarker analysis beginning immediately after AAV dosing would provide important insight into the most predictive biomarkers and the mechanisms of AAV-related neuroinflammation. Using these data to understand the specific mechanisms involved will help the field develop methods for preventing or decreasing vector immunogenicity within the CNS.
Systemic Delivery
Systemic delivery of AAV is the most common route of administration for gene therapy and an effective delivery method for multi-systemic diseases with target tissues within and outside of the CNS (multiple completed and ongoing clinical trials using systemic delivery of AAV include NCT02122952, NCT03368742, NCT03315182, NCT03362502). However, to achieve clinically relevant levels of transgene expression across target tissues, especially within the CNS, this route of administration requires high vector doses. Exposing the host immune system to a greater number of viral particles and possible manufacturing impurities could result in exaggerated immune responses. Additionally, the lessons learned from peripheral immune responses elicited by exposure to high systemic doses of AAV should not be disregarded in the context of neuroinflammation since they could also inform AAV immunogenicity in the CNS. One consideration of the immune reaction to systemic vector delivery is the impact it can have on future exposure to the virus. Systemic delivery (as well as natural environmental exposure) of high vector doses will result in circulating anti-capsid and anti-transgene binding antibodies (bAb) and capsid-neutralizing antibodies (nAb), priming the immune system to detect and neutralize future exposures to the virus, presenting additional immune-related safety concerns [38]. A study of prior-immunization in rats demonstrates that a prior exposure to AAV causes high levels of circulating anti-capsid nAbs that completely block viral transduction from a follow-up CNS-specific AAV administration [39]. Additionally, patients with null mutations where the immune system is naïve to endogenous protein, also known as cross-reactive immunological material-negative (CRIM-) patients, perceive the AAV-derived transgene as immunologically foreign and develop higher antibody levels and more severe T-cell responses against the transgene product [19,40]. Clinical trials are successfully implementing additional immunosuppression strategies to limit adaptive anti-transgene immune responses in CRIM-patients [16,30,31], thus increasing the safety and efficacy of AAV-based therapies within this patient population.
Despite its limitations, systemic dosing has been safely and effectively implemented in preclinical and clinical studies for several neurodegenerative diseases requiring viral transduction in the CNS. In a preclinical study of gene replacement therapy for mucopolysaccharidosis (MPS) IIIB using a non-human primate (NHP) model, Murray et al. achieved transgene expression within the CNS after systemic injections of 1 × 10^13 or 2 × 10^13 vg/kg [41]. The authors detected anti-AAV9 and anti-transgene antibodies in the circulation of treated animals, but antibody levels did not correlate with decreased transgene expression in the CNS [41]. Systemic administration of AAV9 at doses of 2-5 × 10^13 vg/kg is being employed in ongoing clinical trials of MPS IIIB (NCT03315182) and MPS IIIA (NCT04088734), with safety and efficacy data not yet released at the time this review was written. Similarly, several preclinical studies and clinical trials [5,42] evaluated safety and efficacy data with no significant findings, supporting the U.S. Food and Drug Administration (FDA) approval of Zolgensma, a systemically delivered gene therapy for children with SMA, which was determined to be safe and effective at a single systemic dose of 1.1 × 10^14 vg/kg [43].
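To put these dosing regimens in perspective, weight-based systemic doses translate into very large absolute vector loads compared with the fixed doses typically used for CNS-directed delivery discussed in the next section. The sketch below illustrates the arithmetic; the body weight and the CNS-directed dose are assumptions chosen only to illustrate scale, not protocol values.

```python
# Minimal sketch (illustrative numbers only): comparing the total vector load
# from weight-based systemic dosing with the fixed doses often used for
# CNS-directed delivery. The patient weight and CNS-directed dose below are
# assumptions for illustration, not values from any specific protocol.

def total_vg_systemic(dose_vg_per_kg: float, body_weight_kg: float) -> float:
    """Total vector genomes delivered by a weight-based systemic dose."""
    return dose_vg_per_kg * body_weight_kg

systemic_total = total_vg_systemic(1.1e14, 8.0)   # e.g. an 8 kg infant
cns_directed = 1.0e13                             # illustrative fixed IT/ICM dose

print(f"systemic total: {systemic_total:.1e} vg")
print(f"CNS-directed:   {cns_directed:.1e} vg "
      f"(~{systemic_total / cns_directed:.0f}x fewer particles)")
```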
Ongoing clinical trials for AAV products manage the risk of inflammation by excluding individuals with pre-existing nAbs against the viral capsid, whether from natural environmental exposure or a previous sub-therapeutic dose of gene therapy (e.g., NCT03368742, NCT03315182, NCT03362502, NCT02240407). The currently accepted exclusion criteria leave out 20-70% of otherwise eligible patients [3,44]. Additionally, multiple subjects who screen negative for pre-existing anti-AAV antibodies still experience complications, suggesting that the current criteria are an incomplete measure of immune status [45-47]. A major limitation of this approach is that standardized laboratory tests have not been established for measuring pre-existing anti-AAV immunity across trials or for defining appropriate markers and thresholds for a truly immunologically naïve status. Although not routine for all trials, some are also considering subjects' CRIM status and pre-existing T cell-mediated adaptive responses in their inclusion criteria and when selecting the most appropriate immune interventions [31].
Central Nervous System (CNS)-Directed Delivery
CNS-directed deliveries are a compelling alternative to reduce the overall immune response because they require lower vector doses to reach clinically relevant transgene expression in CNS tissue. The most heavily researched strategies for CNS-directed AAV delivery are intracerebroventricular (ICV), intra-cisterna magna (ICM), intrathecal (IT), and intraparenchymal injections. ICV, ICM, and IT injections deliver vector into the CSF circulation via the lateral ventricle, cisterna magna, or lumbar subarachnoid space, respectively. The intraparenchymal route delivers the vector directly into the target brain tissue using stereotactic surgery. These CNS-directed delivery strategies, however, are not enough to completely prevent the effect of circulating anti-AAV antibodies, and the delivery procedures can physically disrupt the BBB, allowing even greater access for circulating antibodies to enter the brain and neutralize the vector [39,48].
Numerous preclinical studies of CNS-directed AAV administration have reported circulating nAbs against the vector capsid and elevated markers of cytotoxicity [20,25,49,50]. One study using intra-cisterna magna (ICM) delivery of AAV9 in rhesus macaques detected transgene-binding and AAV9-neutralizing antibodies in the serum and CSF of animals, as well as anti-transgene T-cell responses, regardless of the dose administered (1 × 10^12 or 1 × 10^13 vector genomes (vg)) [20]. Treated animals were asymptomatic but demonstrated bilateral histopathology in the DRGs, axons emanating from dorsal spinal cord white matter, and trigeminal nerve ganglia, including mononuclear cell infiltration (mostly CD20+ and CD3+ lymphocytes with few CD68+ macrophages). Mononuclear pleocytosis in the CSF and a transient increase in CSF protein were also reported in AAV-treated but not vehicle-treated animals [20]. Another preclinical study in an NHP model using a combination of intracerebroventricular (ICV) and bilateral intraparenchymal injections of AAVrh8 into the thalamus reported neurotoxicity and associated behavioral changes [25]. Animals treated at three different doses (3.2 × 10^12, 3.2 × 10^11, and 1.1 × 10^11 vg) developed dose-dependent white and gray matter necrosis along the injection track along with dyskinesia and ataxia. Supra-physiological levels of transgene expression were detected in the thalamus and spinal cord of all dosed animals, suggesting the neurotoxicity could be associated with overexpression of the transgene. Antibody levels in the CSF were not reported. Finally, studies using dog models of Sanfilippo and Hurler syndromes used intraparenchymal delivery of AAV vector and reported neuroinflammation including lymphocyte, plasma cell, and macrophage infiltration into perivascular and subarachnoid spaces with diffuse hyperplasia and clusters of microglia [49]. Animals that were also treated with the immunosuppression agents cyclosporine (CsA) and mycophenolate mofetil (MMF) had lower incidences of neuroinflammatory findings and increased vector biodistribution [50].
In agreement with the studies described, toxicology studies performed by our group in NHP models using combined systemic and IT administration of a human frataxin (FXN)-encoding AAV9 vector showed similar histopathological abnormalities. Animals given an IT dose of 1-3 × 10^13 vg showed spinal cord and DRG abnormalities including minimal neuronal degeneration/necrosis, minimal to moderate mononuclear cell infiltration, and minimal to mild nerve fiber degeneration of the nerve roots. Two NHPs treated with a similar dose of a cynomolgus-specific FXN-encoding vector had no findings, suggesting that an anti-transgene immune response could have contributed to the inflammation observed in this study.
Despite encouraging findings in early preclinical studies that supported commercialization of Zolgensma [51], similar histopathological findings were recently reported in another NHP preclinical study that used IT administration of the SMA gene therapy AVXS-101 [26]. AVXS-101 is already approved in the US as Zolgensma for systemic use in the treatment of SMA. Zolgensma had not been affected by these findings at the time this review was written, but the FDA placed a clinical hold on the IT administration trial for subjects with SMA Type 2 (NCT03381729). Low- (6 × 10^13 vg) and mid-dose (1.2 × 10^14 vg) cohorts have been completed with no reported clinical findings, but the high-dose (2.4 × 10^14 vg) cohort will not be recruited until further investigation clarifies the cases of mononuclear cell infiltration and neuronal degeneration in DRGs of IT-treated NHPs [26]. Furthermore, a clinical trial of IT administration of AAV9 in subjects with giant axonal neuropathy (GAN) (NCT03770572) presented findings of elevated markers of neuroinflammation, including elevated anti-capsid antibodies, T-cell responses, and pleocytosis in the CSF [31].
On the other hand, many studies have shown no evidence of neuroinflammation [1,52,53], highlighting the need to compare experimental designs, including the appropriateness of large animal models in preclinical studies and the selection of biomarkers across preclinical and clinical trials. These observations represent a gap in knowledge regarding the mechanisms of AAV immunogenicity in the CNS and the currently employed strategies to modulate it.
Managing AAV-Mediated Neuroinflammation
Approaches to decrease both innate and adaptive immune responses against the AAV capsid and/or transgene have been under investigation for many years. Most of the research has focused on improving efficacy of the treatment, enabling pre-exposed individuals to receive AAV-based treatment, and allowing for repeated administrations of the vector throughout an individual's lifetime. To date, the effects on neuroinflammation specifically have not been tested, but the immune modulating approaches reported warrant further characterization for their specific effect on CNS immune reactions.
Choice of AAV Capsid Serotype and Promoter
First-generation capsids AAV1, AAV2, and AAV5 display low levels of expression and variable cell-type specificity in the CNS. For example, AAV1 and AAV5 can transduce neurons and glial cells, whereas AAV2 transduces neurons only, but all have limited spread within the CNS [54]. In 2002, AAV7, 8, 9, and rh10 were discovered in primates [55]. When the different serotypes are cross-packaged with identical AAV2 genomes, they show variations in cell-type transduction efficacy and affinity to CNS substructures after systemic delivery, making each uniquely suitable for particular disease indications [56][57][58][59][60]. However, most naturally isolated and commonly used AAV vectors only minimally cross the BBB after a systemic injection. The limited amount of vector that does reach the CNS shows a strong neuronal tropism and negligible transduction of glial cells.
Of the common capsids currently available, AAV9 has emerged as the most widely used for CNS gene therapy applications due to its enhanced spread across CNS structures and its ability to penetrate the BBB even after peripheral administration in neonatal animals [61,62]. While both AAV9 and rh10 show high transgene expression throughout the brain including spinal cord regions, AAV9 shows the greatest rostral-caudal distribution and spreads to the contralateral (un-injected) hemisphere by undergoing axonal transport [57]. Studies in animal models of SMA [42], amyotrophic lateral sclerosis (ALS) [63], MPS IIIB [64], and others show that a systemic injection of AAV9 in neonatal mice results in transgene expression across key CNS substructures and neuronal subtypes such as motoneurons along with improvements in disease-related phenotypes. However, systemic injections in adult mice still result in limited expression and a shift in target cell subtypes, with preferential expression in astrocytes over neurons [61,65,66]. Possible explanations for different viral tropism and spread during stages of development include structural development of the CNS restricting distribution of the virus, changes in expression of capsid internalization receptors, or neurogenesis resulting in enriched vector distribution to the newly emergent cell types.
Additional discrepancies in cell type tropism and biodistribution between AAV serotypes in the CNS are noted across species and animal models of disease. For example, the PHP.B serotype is an engineered capsid derived from AAV9 that showed remarkable CNS tropism in initial experiments in C57BL/6J mice [67]. Follow-up experiments determined that the exceptional neurotropic properties are exclusive to the C57BL/6J mouse strain and not recapitulated in other mouse strains or in NHPs [68]. Similarly, differences in blood-brain barrier permeability between healthy animals and models of neurodegenerative diseases could result in different AAV biodistribution in the CNS, possibly contributing to discrepancies in translating findings from mouse models of disease into toxicology studies using larger, healthy animals [69].
In addition to choosing the capsid that provides the safest and most disease-relevant spatial and temporal pattern of transgene expression, alterations to the transgene-coding region and regulatory elements are also common. Most clinical studies currently use high-expressing ubiquitous promoters such as variants of the human cytomegalovirus (CMV) or chicken beta-actin (CBA) promoters with CMV enhancer (CAG) [70]. Variations of this promoter have been characterized extensively and show robust expression throughout neuronal cell types in the CNS [71]. The incorporation of transgene-specific regulatory elements such as the endogenous transgene promoter and/or the 3' untranslated region (UTR) [72,73] is a possible strategy for reducing neurotoxicity from overexpression or expression in non-target cell populations [74]. The transgene-coding sequence can also be optimized in a variety of ways, including codon optimization for enhanced expression in a particular species. When validating constructs for human transgene expression in preclinical models such as rodents or NHPs, additional variables should be considered, such as expression level differences in the test model and possible cross-species anti-transgene reactivity.
Thoughtful selection of capsid and transgene expression elements in early experimental design stages might support smoother clinical translation by preventing the need to use more invasive delivery methods and immunosuppression strategies. Ongoing preclinical studies in large animals including comparison experiments using co-delivery of vectors with identifying barcodes and future clinical trials will continue to inform on these important capsid tropism differences.
Route of Administration
While systemic administration safely and effectively delivers a large amount of virus across multiple tissues, it does not effectively penetrate the BBB in adults. An additional disadvantage of this administration route for CNS disorders is that delivering large amounts of virus into the circulation results in unnecessary exposure to the virus, risking a greater host immune response. For example, experiments comparing equivalent doses of AAV9 (total vg exposure) administered either systemically or directly into the CSF showed dramatically enhanced transduction efficacy in CNS and sensory neurons with direct CSF delivery compared to systemically administered vector [58]. In this study, a 50-fold decrease in the CSF-administered dose was sufficient to achieve neuronal transduction in DRGs similar to systemic administration [58]. Intracerebroventricular (ICV), intra-cisterna magna (ICM), and intrathecal (IT) injections are three widely recognized strategies to deliver drugs into the CSF circulation (Figure 1). Direct intraparenchymal injections can also be used for more selective targeting of specific brain regions and to limit spread within the CNS.
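To make the reported dose advantage concrete, the short sketch below applies the 50-fold reduction from the study cited above to an assumed systemic reference dose; the reference value itself is hypothetical.

```python
# Illustrative sketch of the dose savings reported for direct CSF delivery [58]:
# a 50-fold lower CSF dose achieved comparable DRG transduction to systemic dosing.
# The systemic reference dose below is an assumed example, not a value from the study.

systemic_dose_vg = 1e14      # assumed total systemic dose (vector genomes)
fold_reduction = 50          # reported fold-reduction for equivalent transduction

csf_dose_vg = systemic_dose_vg / fold_reduction
print(f"Systemic dose:       {systemic_dose_vg:.1e} vg")
print(f"Equivalent CSF dose: {csf_dose_vg:.1e} vg "
      f"({fold_reduction}-fold fewer particles presented to the immune system)")
```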
An ICV injection consists of delivering the drug directly into the CSF through the lateral ventricles providing the broadest CNS distribution. Although this technique is relatively safe and effective, and is routinely undertaken by neurosurgeons [75], it is not without risks and complications including infections, intracerebral hemorrhage, subcutaneous CSF leaks and increased intracranial pressure [75][76][77][78]. However, these rare complications are most often associated with chronic delivery of biologics, and single-delivery AAV treatments will likely be safer.
An ICM injection delivers the virus to CSF via the cisterna magna, located below the fourth ventricle and between the cerebellum and medulla, resulting in more directed viral exposure to the cerebellum, brainstem, and spinal cord compared to ICV. After a single ICM injection in a feline model of MPS I, Hinderer et al. reported significant transgene expression at comparable levels across cortex, hippocampus, medulla, cerebellum, and spinal cord regions [79]. However, this approach is rarely utilized in clinical practice and would need significant procedural development to safely enter clinical trials due to the route's increased risk for medullary injury and related complications [80].
Lumbar IT injections are routine clinical procedures with few complications that are used to safely access CSF for biomarker measures and drug delivery. A mouse experiment comparing equivalent amounts of AAV9 dosed either systemically or by an IT injection shows that the IT route results in robust transduction across the CNS including spinal neurons, sensory neurons, and DRGs at every level of the spinal cord [58]. IT delivery of AAV is being tested across several clinical trials for SMA (NCT03381729), GAN (NCT03770572), MPS II (NCT0356604), and others. CSF is generated by the choroid plexus in the lateral ventricles, and flows downward through the third and fourth ventricles, down to the lumbar cistern in the spinal cord [81]. Thus, delivering virus through a lumbar IT injection provides the least amount of spread, requiring thoughtful consideration of fluid dynamics within the CSF to achieve improved CNS biodistribution. For example, Meyer et al. report that maintaining NHPs in a Trendelenburg position for 5-10 min after the IT dosing improves viral transduction in the brain and brainstem [51]. This approach has been incorporated into the IT delivery method in a clinical trial of GAN subjects (NCT02362438).
Finally, although intraparenchymal injections are more invasive [9,82] and result in a limited coverage of the CNS, this approach is likely immunologically safest, requiring the least amount of virus to reach clinically relevant transgene expression if target tissues are few and easy to isolate. This approach is best suited for indications with a well-recognized site of CNS pathology such as Huntington's disease (HD), Parkinson's disease (PD), and Canavan disease (CD). For example, in a mouse model of HD, bilateral injections of AAV5 carrying a microRNA targeting huntingtin (HTT) into the striatum resulted in reduction of toxic HTT protein and improved motor function in treated animals [83]. In the clinic, intraparenchymal dosing of AAV for the treatment of neurodegenerative diseases has shown acceptable safety profiles across several trials. A Phase I trial using bilateral injections of AAV2-AADC into the putamen of PD patients was well tolerated and resulted in improved AADC expression and motor function one-year post-dosing [84]. Another Phase I trial using six cranial burr holes to deliver a gene therapy treatment to subjects with Canavan disease also showed minimal systemic immune reactions with no overt neuroinflammation [9].
Genetic Manipulations to Decrease TLR9-Mediated Immune Responses
Toll-like receptors (TLRs) are pattern recognition receptors found on the endosomes of immune cells that play a role in the detection of pathogens and the initiation of innate immune and inflammatory responses, including type I interferon and pro-inflammatory cytokines [85]. TLR9 has been implicated in immune recognition of AAVs by binding to unmethylated CpG motifs in the AAV genome and activating the signaling adaptor protein myeloid differentiation primary response gene 88 (MyD88) [86]. Activation of the TLR9-MyD88 pathway subsequently promotes the development of CD8+ cytotoxic T cell responses against the AAV capsid and transgene, which can result in loss of transgene expression.
Significant research has focused on characterizing the TLR9-mediated immune response to AAV vector DNA and finding ways to ameliorate it. First, experiments comparing AAV-treated TLR9 knockout (TLR9-/-) and wild-type mice support the direct involvement of the TLR9 pathway in immune activation and transgene loss [10]. When TLR9-/- and wild-type mice received intramuscular (IM) injections of an immunogenic AAVrh32.33 vector, the authors found that wild-type animals exhibited extensive immune cell infiltration into muscle tissue and that transgene expression was eventually lost. In contrast, TLR9-/- mice showed diminished immune cell infiltration and retained persistent transgene expression. Similar outcomes were seen in a follow-up experiment performed in wild-type mice dosed with the same vector or a CpG-depleted version. However, it is unclear how translatable this strategy is to the clinic, since CpG motifs are present in both the transgene-encoding region and necessary regulatory elements such as the viral ITRs, promoter, and introns. Second, Martino et al. reported another contributing factor to the extent of the TLR9-dependent response to the AAV genome: single-stranded DNA viral genomes are less immunoreactive than double-stranded (self-complementary) viral genomes when tested with intravenous injections in mice [87]. Finally, an approach presented at the 2019 American Society of Gene and Cell Therapy (ASGCT) Annual Meeting directly incorporated TLR9 inhibitory oligonucleotide sequences into an untranslated region of the vector genome to "cloak" vector DNA from stimulating TLR9 [12]. The group reported that following IM injection of AAVrh32.33 vectors, mice receiving the modified vector showed lower levels of CD8+ T cell infiltration into muscle tissue compared to the unmodified vector [12].
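Because TLR9 senses unmethylated CpG dinucleotides, a rough first comparison of construct designs is simply the CpG content of the sequence. The snippet below is a minimal illustration on a made-up fragment, not an analysis of any actual vector genome.

```python
# Minimal sketch: counting CpG dinucleotides in a DNA sequence, as a crude proxy for
# the TLR9-stimulatory potential discussed above. The sequence is a made-up example.

def count_cpg(seq: str) -> int:
    """Count occurrences of the CpG dinucleotide (5'-CG-3') in a DNA string."""
    seq = seq.upper()
    return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")

def cpg_density_per_100_bases(seq: str) -> float:
    """CpG count normalized to sequence length, for comparing constructs."""
    return 100.0 * count_cpg(seq) / max(len(seq), 1)

fragment = "ATGCGTACGTTAGCGGCTACGATCGCGTA"   # hypothetical fragment, not a real vector
print(count_cpg(fragment), "CpG sites;",
      round(cpg_density_per_100_bases(fragment), 1), "per 100 bases")
```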
Since the robust TLR9-mediated responses to AAV were discovered from systemic or IM administration of the virus, most of the work done to understand and prevent immune recognition has also been performed outside of the CNS. As such, the role of TLR9 signaling in AAV-related neuroinflammation for CNS applications is still not well characterized. Analogous to direct CNS administration, the strategy of directly incorporating TLR9-inhibitory oligonucleotides into the AAV vector genome was also tested in large animals via intraocular administration. The authors observed that subretinal injections of pigs with the unmodified AAV8 vector stimulated photoreceptor pathology, microglia infiltration into the photoreceptor layer, and CD8+ T cell infiltration into the retina, while the modified vector evaded such pathology and immune cell infiltration [13]. These findings suggest that TLR9 may play a key role in AAV-mediated neuroinflammation in the CNS. TLRs are expressed in neurons, microglia, and astrocytes, with TLR9 expression predominantly in microglia [88][89][90][91]. TLR9 has been implicated in mediating the innate immune response to herpes simplex virus infection in the brain [92] as well as in the pathobiology of several neurodegenerative diseases [91,93].
In an experiment to understand the neuroinflammatory effect of TLR9 activation, mice received an ICV dose of a CpG-containing oligodeoxynucleotide (CpG-ODN), a TLR9 agonist. A single low dose of the TLR9 agonist induced signs of neuroinflammation including severe meningitis, an increase in proinflammatory cytokines and chemokines, a breakdown of the BBB, and infiltration of immune cells from the periphery [94]. Although the TLR9 agonist used in these experiments exposed animals to much larger amounts of total nucleic acid than a typical AAV dose, this work highlights risks associated with TLR9 activation in the CNS and encourages therapeutic development to consider the immune-evasive strategies presented here, as well as the identification of other strategies to modulate the TLR9 response to AAV within the CNS.
Immunomodulation Strategies and Their Effect on CNS
Even very low titers of anti-capsid antibodies can completely block the therapeutic effect of AAV administration in the CNS. AAV treatment has been associated with circulating anti-capsid and anti-transgene antibodies in blood and CSF, as well as infiltrated mononuclear cells in the CSF and neural tissue, highlighting a need for managing both antibody-based and cell-based responses to AAV treatment. Most clinical trials now incorporate peri-procedural corticosteroids, and others include additional immunosuppressive agents such as the B cell-depleting antibody rituximab and the mTOR inhibitor rapamycin [1,15]. A single-subject case report by our group showed that immune modulation with rituximab and rapamycin prior to AAV administration blocked antibody-based immune responses to both capsid and transgene [15].
Plasmapheresis has been proposed as a strategy for complete removal of circulating antibodies because it would increase the safety of dosing and possibly allow for the participation of pre-immune individuals in AAV-based gene therapy treatments. In a study of 10 subjects undergoing plasmapheresis, anti-capsid nAbs against AAV serotypes 1, 2, 6, and 8 were measured before and after each treatment [95]. Decreases in nAb titer of 1- to 20-fold were noted after each round of plasmapheresis for all serotypes analyzed. However, a "rebound" effect was observed in which nAbs returned to previous levels after the treatment, and even after five treatments, nAb titers fell below the cutoff criteria in only two of the 10 subjects. Importantly, those two subjects already had the lowest titers at study baseline, suggesting this approach is feasible for managing only low or moderate levels of pre-existing immunity to AAV [95]. Another study used naturally exposed AAV-preimmune NHPs to evaluate the effect of two rounds of plasmapheresis on nAb titer. In contrast to the clinical findings summarized above, this preclinical study reported that nAb titers were reduced to levels similar to those of naïve animals after only two treatments in all seven NHPs treated [96]. There are still few studies on combining plasmapheresis with AAV-based gene therapy, and additional work is required to understand the best application of the technique.
In addition to managing systemically circulating antibodies, difficult-to-eliminate long-lived plasma cells may be reactivated by AAV treatment, secreting additional antibodies [97,98]. Plasma cells are highly resistant to most currently available immunosuppressive strategies, with stem cell transplants combined with anti-thymocyte globulin treatment being the most effective but presenting significant safety risks to patients [96,99,100]. Pre-treatment with plasma cell-targeting agents such as bortezomib might be useful in depleting the plasma cell population and decreasing reactivation upon subsequent AAV exposure [3,101]. Recent work suggests that microglia may also form an immunological memory similar to plasma cells [83], raising the possibility that long-lived immunological memory is a concern within the CNS compartment as well. Although the effectiveness of bortezomib on neuroinflammation has not been evaluated, other agents are beginning to be tested for CNS application. For example, the mTOR modulator rapamycin has been shown to have specific effects on neuroinflammation. A study using in vitro and mouse models of spinal cord injury showed that treatment with rapamycin resulted in neuronal survival, reduced inflammation, and astrocyte proliferation after spinal cord injury [102].
While limited in scope, these data support a role for broad immunosuppressive strategies in attenuating neuroinflammation in CNS-targeting gene therapies. Additional work is warranted to identify which agents have the best safety profiles and are effective within the CNS compartment.
Conclusion
The neuroinflammatory reaction to AAV-based gene therapies for CNS diseases is still not well characterized. However, preclinical and clinical findings in recent years indicate significant vector-related immune reactions and neuroinflammation in subjects. Several strategies to modulate immune-related vector toxicities were discussed and are summarized in Figure 2. Based on the specific disease indication, transgene, and target tissue, some or all of these strategies should be considered to enhance patient safety. The inclusion of biomarkers to evaluate neuroinflammation at key time points will be critical to meet this aim.
Yoga for Persons with Severe Visual Impairment: a Feasibility Study
This exploratory study aims to establish the feasibility of an Ashtanga-based Yoga Therapy (AYT) program for improving sleep disturbances, balance, and negative psychosocial states, which are prevalent issues for visually impaired (VI) individuals. Ten legally blind adult participants were randomized to an 8-week AYT program. Four subjects in the 1st cohort and three in the 2nd cohort successfully completed the AYT program. They convened for one session per week with an instructor and performed two home-based sessions per week using an audio CD. The Pittsburgh Sleep Quality Index (PSQI), Perceived Stress Scale (PSS), Beck Anxiety Inventory (BAI), and Beck Depression Inventory (BDI) were administered at baseline and post-intervention. A Timed One-Leg balance measure, respiratory rate (RR), and the Philadelphia Mindfulness Scale (PHLMS) were assessed in the 2nd cohort. Both groups completed a qualitative exit survey. Positive exit survey responses (all subjects were extremely or mostly satisfied, and wanted to continue AYT) and good participation rates (7 subjects attended at least 7 of the 8 weekly sessions) support the feasibility of the AYT. PSQI, PSS, BAI and BDI scores changed in the direction of reduced negative symptoms after AYT for the 1st cohort. Changes in PSQI and PSS for the 2nd cohort were varied. Balance, RR and PHLMS awareness trended toward improvement for each individual. This preliminary study provides proof of concept for potential benefits of AYT that may be observed in VI subjects. Larger studies and an active control group are needed to determine efficacy.
Introduction
Visual impairment (VI) is considered among the 10 most prevalent causes of disability in the United States, and approximately 1 in 28 Americans older than 40 years are legally blind.1 Most current research for VI is focused on the development of treatments such as pharmacologic agents, gene therapy, stem cells and prosthetic devices.2,3 However, not all patients benefit from these proposed treatments, and research is also needed to understand and improve patients' disease-related symptoms that often result in reduced quality of life.4 Alternative therapies such as yoga have gained popularity in the U.S. in the past decade5,6 and have been reported to increase well-being and quality of life for the normally sighted population.7,8 In an on-line survey, physical and emotional well-being were reported as motivating factors for individuals with retinitis pigmentosa (RP) (a slowly progressive retinal degenerative disease) seeking alternative therapies such as yoga and meditation. Of the patients surveyed, 31% tried yoga and of those, 93% reported improved stress, fatigue and anxiety levels.9 Despite growing interest, yoga as a treatment for these symptoms has not been systematically studied in the VI population.
Reductions in sleep quality in persons with retinal degenerative disease and advanced vision loss may be related to decreased light processing due to reduced functioning of retinal photosensitive cells and narrowing of the field of view.10,11 Sleep disturbances in RP, characterized by reduced alertness, disturbed nighttime sleep, and greater daytime sleepiness, were found to be greater than in age-matched normally sighted individuals, and may be due in part to photoreceptor loss.12 Photosensitive retinal ganglion cells containing the photopigment melanopsin have been identified as essential to circadian photoentrainment.13 Recently, Altimus et al. further demonstrated that rod and cone photoreceptors, in addition to ganglion cells containing melanopsin, help modulate the light-dark cycle in mice.14 Other studies have reported a relationship between RP and abnormal melatonin production affecting the circadian cycle in some patients.10,11 Taken together, this raises the possibility that as vision loss progresses, reductions in retinally mediated light signals affect the release of melatonin, leading to the observed sleep disturbances. Studies with normally sighted individuals practicing yoga have revealed positive results on subjective sleep measures15,16,17 and increased physiological measures of melatonin,18,19,20 although these studies are potentially confounded by the lack of control for nighttime light exposure. Melatonin contributes to sleep onset, and sleep onset latency (SOL), a measure21 that can be reliably detected by sleep questionnaires,16 is considered an indicator of the duration it takes to fall asleep. Persons with ocular disease who have difficulty initiating sleep may be subject to sleep disturbances. A subjective global measure of sleep disturbance and SOL was used in this study to obtain preliminary information regarding sleep problems in this population. Obtaining measures of melatonin was beyond the scope of this project.
The impact of vision loss on negative mood states (e.g. anxiety, depression, stress) may be an independent factor contributing to disturbed sleep. Transient episodes of reduced vision have been correlated with increases in perceived stress and negative mood in RP.22,23 Functional loss of vision is a risk factor for depression and anxiety in the aging VI population.24 In general, negative psychosocial states that accompany loss of vision are often unrecognized and remain untreated.4 Positive results after yoga have been reported in the general population for stress,25 depression26 and anxiety.27 It has also been suggested that simply modifying patterns of improper breathing can lead to significant improvements in psychological symptoms,28 a technique yielding benefits for psychological distress such as stress, anxiety and depression, as well as sleep.29 Respiratory rate (RR) has been associated with cardiovascular health and stress,30 and RR may be an indicator of improved stress and related factors.31 Thus, implementing a yoga regimen with a breath component may provide a long-lasting foundation for breathing that could have an added beneficial effect to reduce stress and anxiety.
Furthermore, vision plays a dominant role in a sighted person's ability to navigate through the environment.33,34,35,36,37 As vision declines, adopting training strategies that enhance the use of other sensory information may aid in the development of awareness of the body as it moves in space. In an eyes-closed condition, a group trained in yoga was able to retain balance on a vertical force platform better than a control group trained in physical exercise, indicating they were better able to use proprioceptive cues.38 The ability to stand on a single leg is an important predictor of falls in the elderly.39 Due to the anticipated increase in the legally blind aging population in the next 20 years, elderly individuals in late stages of acquired vision loss (e.g. age-related macular degeneration; RP) represent an important demographic to consider.1 Balance is a critical prerequisite to movement40 and has been shown to be sensitive to change over time.41 Balance improved substantially in a separate study using a timed, one-legged balance test after yoga training in healthy adults42 and in patients with osteoporosis.43 Therefore, cultivating the integration of mind and body through yoga may yield favorable outcomes as a means to improve balance. Ashtanga is a system of yoga taught by Sri K. Pattabhi Jois in Mysore, India. It is an integrated system of asanas (postures), vinyasa (movement), and ujjayi (victorious breath) and is easily implemented regardless of age or level of experience. The main tenet of Ashtanga is movement with the breath. Ujjayi breathing is a specialized technique that emphasizes use of the diaphragm. Observing the quality of the breath can be a diagnostic tool for the quality of the practice. In this sense, practicing asana becomes the vehicle for teaching proper breathing techniques. Most importantly, the student is taught to find a balance between what is achievable in an asana and room for growth to move deeper into a posture, simply by listening to their ujjayi breathing. This makes Ashtanga a suitable, gentle practice since it does not emphasize flexibility or strength as a goal. Instead, the goal is to synchronize postures with the breath while keeping a steady, rhythmic breathing pattern. Strength and flexibility are by-products of persistent practice. The Ashtanga-based Yoga Therapy (AYT) is a highly modified Ashtanga yoga sequence developed specifically for the visually impaired population by the author (PEJ). This style of yoga may be beneficial for this population because it combines all elements of a mindfulness practice through breath, postures and movement. With continued practice, the sequence becomes a moving meditation. Mindfulness in this setting promotes self-awareness in a noncompetitive, non-judgmental environment and at the same time promotes awareness of muscular movements, alignment, mental states and breath.44 Increased levels of mindfulness were found after yoga in a healthy population45 and a chronic illness population.17 A mindfulness assessment was included in this study to evaluate the potential to develop increased mindfulness in the AYT.46
By pursuing an integrated approach, this preliminary research study aimed to determine the safety and efficacy of an AYT geared towards the VI population and whether it could have an immediate and comprehensive impact on secondary symptoms and quality of life for individuals with severe vision loss. The primary goals of this proof of concept study were to evaluate safety and feasibility of the AYT for VI to: i) reduce sleep disturbances, ii) improve psychosocial indicators, and iii) improve balance and RR.
These data are especially important since, to our knowledge, there have been no previous publications involving yoga interventions for attendant symptoms experienced by the VI population. Participants were limited to a small sample of 10, set a priori for the purposes of establishing feasibility and safety before a larger trial is set in motion. Participants were included in the study if: (a) they were older than 18 years of age, (b) they were diagnosed with legal blindness on the basis of visual field diameter <20° as determined by Goldmann and/or Humphrey visual field,47 or on the basis of best-corrected visual acuity (VA) <20/200 as determined with the Early Treatment of Diabetic Retinopathy Study (ETDRS) chart,48 (c) the ocular disease stage was expected to remain relatively stable throughout a 3-6 month period, and (d) the participant was healthy to the extent that participation in a yoga program would not exacerbate any existing disease conditions. Participants were excluded on the basis of (a) a clinically diagnosed or significant sleep disorder (e.g., sleep apnea) or a medical condition (e.g., chronic pain) responsible for sleep complaints, (b) use of prescription sleep medication more than once a week for the duration of the study, (c) use of other psychotropic medication, or (d) consumption of >2-3 alcoholic beverages per day or smoking >10 cigarettes per day. The protocol for the study was approved by the Institutional Review Board of the Johns Hopkins University School of Medicine and followed the tenets of the Declaration of Helsinki. The consent form was read to participants due to their visual impairment. All participants provided informed consent.
Study design
This proof of concept study determined the feasibility of the AYT in two separate cohorts of VI subjects. Participants were randomized to the first or second 8-week AYT. The 2nd cohort was invited to participate in the AYT after the first group's completion of the program. Using two cohorts allowed us to manage teacher-student interactions in smaller class sizes. It also minimized the burden on participants regarding questionnaires and transportation, and allowed us to pilot additional measures with the second group based on experience with the first group.
Comprehensive vision tests that included VA, Pelli-Robson contrast sensitivity (P-R CS)49 and Goldmann visual fields (GVF)47 were conducted at baseline for both groups. Psychosocial and sleep questionnaires were administered at two time points for both groups: once at baseline before the first 8-week yoga intervention and once immediately after the intervention period.
When possible, the questionnaires were administered online via SurveyMonkey.com, LLC (Palo Alto, CA) or by email as a Word document, and participants used their remaining vision and/or accessibility software to self-administer the questionnaires. In situations where the participant did not have access to a computer or support from a caregiver (n=2), the questionnaires were read aloud and responses recorded by the researchers via phone. After initial feasibility was noted with the first AYT group, we piloted three additional measures during the 2nd cohort's participation in the AYT: a mindfulness questionnaire administered at baseline, week four and post-intervention, and a balance measure and respiratory rate (RR) measured before and after the AYT. A brief exit survey was collected immediately following both groups' participation in the yoga program. Participants received a free yoga mat and $25 for their participation in the study. All procedures took place at one of three locations selected to provide accessibility for our visually impaired population: i) Lions Vision Center (LVC, Wilmer Eye Institute, Baltimore, MD), ii) National Federation of the Blind (NFB, downtown Baltimore, MD) and iii) St. George's Episcopal Church (Arlington, VA).
Intervention
Participants took part in an orientation session before the yoga program began, in order to familiarize them with the style of yoga, the ujjayi breathing technique, alignment techniques using the mat, class etiquette, and modifications. Participants in each group convened once per week for eight weekly one-hour sessions with the author (PEJ). Participants were provided with an audio CD developed by the author to practice at home and were encouraged to practice at least twice a week (i.e. equivalent to approximately 16 home practice sessions during the intervention period). The Ashtanga-based Yoga Therapy (AYT) is a highly modified Ashtanga yoga sequence developed specifically for the visually impaired population by the author (PEJ), a trained Ashtanga teacher of four years and practitioner of 7 years, on the basis of her experience teaching blind students at the Braille Institute in Los Angeles. Teaching yoga to those with VI requires simple modifications to postures, clear descriptions, and hands-on adjustments. Each class began with simple seated breathing and a warm-up, followed by standing postures, seated postures, breathing, and a final resting pose. Table 2 lists the full sequence of poses taught during the AYT; each pose was held for five breaths or for as long as the subject was able, and each class included a question and answer period at the beginning and end of class. An additional instructor participated during the 2nd cohort's AYT. The second instructor had experience teaching yoga to students with physical disabilities (multiple sclerosis) and was trained to deliver the AYT by the author (PEJ), who familiarized him with the protocol and needs of the VI population. The author (PEJ) observed 5 of the 8 classes taught by the second instructor to help ensure fidelity. The AYT is amenable to study because it is composed of a standardized sequence of postures held for a fixed duration. While the sequence of asanas remains the same each session, each asana is modified to suit the individual's needs. Study patients are more likely to comply with an intervention if it is safe, engaging and easy to follow.50
Study outcomes
The 19-item Pittsburgh Sleep Quality Index (PSQI) questionnaire was used to gauge sleep quality over the past month (Cronbach alpha estimate = 0.83).51 It includes both qualitative and quantitative aspects of sleep, and evaluates seven subscale dimensions of sleep quality. Of interest in this study were the global score and the sleep onset latency (SOL) subscale score. We administered three brief online questionnaires to assess various psychosocial states: the Beck Anxiety Inventory (BAI),52 the Beck Depression Inventory (BDI)53 and the Perceived Stress Scale (PSS).54 The 21-item BAI was used to measure the severity of an individual's anxiety over the past month (Cronbach alpha estimate ranges from 0.92 to 0.94).52 The 21-item BDI provided a measure of severity of depression (Cronbach alpha estimate = 0.81).55 The 14-item PSS subjectively assessed the degree to which respondents appraised situations in their life to be stressful on a given day (Cronbach alpha estimate = 0.85).54,57,58,59 The questionnaires were completed online through an internet website or over the phone, depending on the subject's ability to access the computer.
The 2nd cohort completed only the PSQI and PSS plus 3 additional new measures. The PSS for the 2nd cohort was administered on a weekly basis to assess potential fluctuations in appraised stress. We did not include the BDI and BAI in order to reduce the burden on the 2nd cohort participants since additional measures were added (PHLMS, respiratory rate, balance).
Changes in balance were assessed for individuals in the 2nd cohort during their participation in the AYT. The Timed One-Leg Stance protocol requires the subject to raise one foot off the ground with their arms down at their sides, and the time up to 30 seconds is recorded until any touch down of a foot or hand to a surface occurs.61,62 Respiratory rate (RR) was measured three times for each participant to reduce within-session variability, at baseline and immediately following the 8-week AYT. It was determined by counting the number of inhalations with a stethoscope for 30 seconds at rest and multiplying by two.63,66 The Philadelphia Mindfulness Scale (PHLMS) is a 20-item scale to assess two independent components of mindfulness: acceptance and awareness.67 Total scores on both subscales range from 20 to 100, with higher scores reflecting greater mindfulness. The PHLMS was collected at baseline, during week 4, and post-waitlist participation. The PHLMS shows good internal reliability as reported in clinical and non-clinical samples (Awareness Cronbach alpha estimate = 0.85 and Acceptance Cronbach alpha estimate = 0.87).67 The PHLMS was used as a means to appraise the AYT as a practice that develops awareness and mindfulness in general. It has the advantage of two independent subscales that allow us to detect critical differences in two components of mindfulness.67,68 We measured adherence with practice logs and participant experience through exit surveys to further help evaluate the yoga intervention. Participants completed a weekly log online to report home practice, which was used to evaluate adherence. All participants completed a brief exit survey regarding their experience with the AYT. The survey was composed of open-ended and multiple-choice response items.
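To make the two physiological measures concrete, the snippet below encodes the stated rules (breaths counted over 30 seconds are doubled to give breaths per minute; the one-leg stance time is capped at 30 seconds). The input values are hypothetical, and averaging the three RR counts is an assumption made here for illustration only.

```python
# Illustrative sketch of the physiological measures described above.
# Input values are hypothetical; averaging the three RR counts is assumed for illustration.

def respiratory_rate(breaths_in_30_s: int) -> int:
    """Breaths per minute: inhalations counted over 30 seconds, multiplied by two."""
    return breaths_in_30_s * 2

def one_leg_stance_score(seconds_until_touch_down: float, cap_s: float = 30.0) -> float:
    """Timed One-Leg Stance: seconds until a foot or hand touches down, capped at 30 s."""
    return min(seconds_until_touch_down, cap_s)

counts = [8, 7, 8]                                   # hypothetical 30-second breath counts
rates = [respiratory_rate(c) for c in counts]
print("Mean RR:", sum(rates) / len(rates), "breaths per minute")
print("Balance score:", one_leg_stance_score(34.2), "seconds (capped at 30)")
```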
This proof of concept study's small sample size was not powered for statistical comparisons of changes in the questionnaire scores,69 and therefore, ANCOVA analyses of our results were not statistically significant. We were limited to evaluating the data based on descriptive statistics, and no rigorous statistical analysis of the outcomes is presented here. Instead, we focus on the direction of change or trends on an individual basis for each measure (Tables 3-5).
Study participation and baseline characteristics
Ten legally blind participants (Table 1) who met our study criteria were enrolled in the study. Four of the five participants in the 1st cohort attended at least 7 of the 8 weekly classes, which was the attendance compliance criterion defined a priori. One participant in the first group missed two classes during the 8-week intervention, reported that she failed to follow the home practice instructions, and began taking sleeping pills, which violated the study exclusion criteria. Therefore, only sleep and psychosocial data for four subjects in the 1st cohort are presented in Table 3. We were unable to remove the non-compliant subject's responses from the exit survey since it was anonymous. Only 3 participants in the 2nd cohort completed the 8-week yoga program by attending 7 of the 8 weekly scheduled classes. Two participants did not participate due to scheduling conflicts. The 2nd cohort did not begin their participation until after the initial cohort completed the AYT (~1 month later), which may have introduced new, unpredictable scheduling conflicts not present during the initial recruitment period. At least one subject opted not to participate in the AYT since her schedule became too busy. Another 2nd cohort participant dropped out after 4 sessions of the AYT because he became unable to attend regularly scheduled classes due to conflicting responsibilities related to family caregiving. This participant also failed to return the home practice log.
Overall weekly class attendance was 7.5 (range 7-8; SD 0.6) and 7 (SD 0), while individual home practice compliance was 14 (range 10-16; SD 2.8) and 14 (range 13-16; SD 1.7) sessions as reported in weekly practice logs, across the 4 participants in the first AYT group and the 3 participants in the 2nd cohort who completed the intervention, respectively. The baseline characteristics were evenly balanced and not statistically significantly different between the two cohorts. No serious or non-serious adverse events were reported.
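For reference, the summary statistics above follow directly from the individual attendance counts. The per-subject values below are assumed purely for illustration; only the summary figures (mean 7.5, range 7-8, SD ~0.6) are reported in the text.

```python
# Illustrative sketch: mean, range, and sample SD for weekly class attendance.
# The per-subject counts are assumed for illustration; only the summary statistics
# (mean 7.5, range 7-8, SD ~0.6) are reported in the text.
from statistics import mean, stdev

attendance_first_cohort = [7, 7, 8, 8]   # hypothetical per-subject class counts

print("mean:", mean(attendance_first_cohort))                                      # 7.5
print("range:", min(attendance_first_cohort), "-", max(attendance_first_cohort))   # 7 - 8
print("sample SD:", round(stdev(attendance_first_cohort), 2))                      # 0.58
```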
Outcome measures
Individual results for the questionnaires completed pre- and post-AYT are presented in Table 3, and improvements in the mean scores were noted across all four questionnaires. No associations were found between severity of vision loss and the outcome measures. We suspect this may be due to the limited range of vision in our small sample. The PSQI global, PSQI-SOL, PSS, BDI and BAI scores for the first cohort (SBJ01-04) all changed in the direction of reduced negative symptoms after the yoga intervention, with the exception of two instances that showed no change (Table 3). The PSQI SOL scores for the first AYT group mostly changed in the direction of reduced negative symptoms (n=3 of 4 changed by approximately 45% or 31.7 min) and one showed no change after the yoga intervention. In addition, three subjects (SBJ03, SBJ04 and SBJ06) met the criteria for sleep disturbances according to their PSQI-Global score (>5) at baseline and showed improvements in their scores post-AYT. Three of four participants in the 1st cohort showed reduced BDI scores and one showed no change. The greatest amount of change in the BDI was reported by the subject (SBJ03) who had greater than minimal depressive symptoms at baseline (BDI ≥10). BAI scores improved across all participants in the first cohort, and the largest changes on average were noted for the BAI.
Sleep and PSS data are presented in Tables 3 (bottom) and 4 for the 2nd cohort (SBJ06, SBJ07, SBJ10). Changes in the PSQI-global scores were marginal and mixed. It is interesting to note that the direction of change in the global score was exactly opposite to that of the SOL score, something we have no explanation for. The PSS was administered weekly for the 2nd cohort in order to capture the degree of variability that might be observed due to acute stress (vs. chronic). Indeed, the scores in Table 4 are somewhat volatile during the course of the program. Merely looking at pre- and post-scores shows one subject getting worse, another getting better, and the last one showing no change. In all cases, there is evidence of better and worse scores throughout the intervention for each subject, suggesting that perhaps the PSS is not the best measure to capture treatment effects. This also highlights the inherent variability within individual subjects. However, without a much larger sample size, it is difficult to draw any substantive conclusions from these measures in this study. During the 2nd cohort's participation in the AYT, three additional measures were piloted for feasibility (Table 5). Both RR and balance measures improved after the intervention for all three individuals (Table 5). The PHLMS showed a positive trend for improved awareness subscale scores per individual but not for the acceptance subscale.
Discussion
In this preliminary study, the feasibility and initial safety of the AYT were demonstrated in a small group of individuals with VI. The positive exit survey responses and relatively good participation rates support the feasibility of the AYT. All participants who completed the study reported the desire to continue with the AYT and found it enjoyable. In general, the AYT appeared to provide overall benefits for sleep and psychosocial factors for the first group of participants. Improvements in sleep and PSS were not consistent across the members of the 2nd cohort. RR and balance were only assessed in the 2nd cohort participants, but all three subjects showed improvements in both of these measures after their AYT participation. This preliminary study provides proof of concept for potential benefits that may be expected from the AYT in a population with VI.
The mean reduction in PSS scores observed in the first cohort of 4 participants in our study (36.5%) was greater than that in another study, which found a 29.5% decrease in scores after a stress reduction program.70 The PSS may vary significantly from week to week in some people, potentially leading to chance reductions or regression toward the mean in the control group when only comparing a single pre- and post-intervention measure. Furthermore, the PSS timeframe assessed stress in the past week, which may have identified acute episodes rather than capturing a global measure of chronic stress in the 1st cohort. As evidenced in the 2nd cohort, fluctuations were observed from week to week, rendering the results difficult to interpret.
Planning assessments with a longer timeframe (e.g. 2 weeks) may provide a more accurate assessment of perceived stress.54 The improvements in the BAI across the four subjects in the 1st cohort were quite large compared to a yoga study by Harner et al. with a much larger sample size58 (75.9% versus 38.9%). Compared to two separate studies measuring PSQI-SOL in much larger samples,71,72 sleep onset latency in our study changed to a greater degree for the four participants in the 1st cohort (~32 min vs. 15-18 min). Only one of the four subjects (SBJ03) in the first AYT group had greater than minimal depressive symptoms (BDI score ≥10) at baseline, and therefore, floor effects may have prevented us from observing larger changes. SBJ03's score was reduced by 6 points (55% reduction) post-intervention, comparable to Woolery et al.,73 who observed significant reductions in BDI (~69.5% on average) after a 5-week yoga intervention for mildly depressed young adults.
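The percent reductions quoted in this section are simple pre-to-post relative changes. The helper below shows the arithmetic using the BDI example above; the baseline of about 11 points is an inference from the reported 6-point (~55%) drop, used only for illustration, not a value taken from Table 3.

```python
# Illustrative sketch of the percent-change arithmetic used in this section.
# The baseline of 11 points is inferred from the reported 6-point (~55%) reduction
# and is used here only for illustration.

def percent_reduction(pre: float, post: float) -> float:
    """Relative reduction from pre- to post-intervention, as a percentage."""
    return 100.0 * (pre - post) / pre

bdi_pre, bdi_post = 11, 5        # inferred baseline; post = pre - 6
print(f"BDI reduction: {percent_reduction(bdi_pre, bdi_post):.0f}%")   # ~55%
```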
The improvements in the RR and timed one-leg stance measures are important because they suggest that the AYT may slow breathing and improve balance in those with VI. Balance is more impaired as vision loss progresses, and this increases the risk of falls.32,74 It has been reported that individuals double their chances of sustaining an injury due to a fall if they are unable to perform a One-Legged Stance Test for five seconds.39 Two of the 3 participants were below five seconds, and one improved to greater than 5 seconds after the intervention. The third subject's performance was high to begin with and improved by 6.48 seconds to the maximum attempted test time of 30 seconds. Improvements in balance in our study suggest that participants may be developing strategies that access other sensory cues and increase awareness of where their body is in space. Indeed, the average improvement observed in our study is comparable to that in a yoga study for women with postmenopausal osteoporosis, who are also at risk for falls.43 Future studies of the AYT should assess changes in balance in a sample larger than three VI participants, as well as determine whether changes in balance of this magnitude are clinically significant and may translate into a reduced risk for falls or injuries.
We chose to measure changes in RR in the 2nd cohort since the main tenet of the AYT is synchronized movement with the breath. Modest improvements were observed in our group; however, a larger sample size is needed in the future to attempt to demonstrate any statistically significant changes.
Focusing on the breath draws the attention inward and at the same time promotes self-awareness of muscular movements and alignment.45 Since the AYT contains a component of self-awareness, we believed the PHLMS would be a useful measure. We found trends for improved awareness scores in the 2nd cohort after their participation. It is interesting to note that this difference only emerged towards the end of the program (i.e., scores were not qualitatively different from baseline at week 4 testing), which indicates that cultivating awareness may be a process that improves with practice. The scores for acceptance did not show any substantial changes, and it is possible that acceptance is not a component of the AYT itself or that it takes longer than 8 weeks to manifest. A larger sample size and/or a longer duration program will be needed in the future to critically tease these apart. To our knowledge, this is the first study that uses yoga to reduce secondary symptoms in the VI population. While we view our results as promising, there are some limitations that should be acknowledged. This proof of concept study's limited sample size was not designed or powered to detect statistically significant effects. Also due to the limited sample size, we were unable to detect interactions between measures such as stress and sleep. Self-selection or other unknown factors may have confounded our results. Scheduling and non-adherence were issues for 3 of the 10 participants. The potential influence of the investigator as yoga instructor may have been an issue; however, at this preliminary stage, the investigator's participation as teacher provided an intimate view of the feasibility of the sequence and the participants' perspective that would not have been otherwise available. Computer-assisted self-administered questionnaires have the advantage of eliciting more openness to sensitive information and are more accessible; however, to accommodate the needs of two participants in this special population, surveys were administered by phone, which may have resulted in social desirability bias.75 Using mixed survey administration has been demonstrated to be effective in the visually impaired population, and indeed necessary in some cases, resulting in good data quality.76 Future research should: i) use a run-in period with more than one visit prior to randomization to promote enrollment of participants who are likely to return for multiple study visits, ii) develop strategies to improve motivation, and iii) accommodate busy schedules with various class options to help increase and facilitate participation in the AYT. A future randomized controlled trial could determine whether the AYT is capable of producing a specific beneficial result (i.e., efficacy).
In the absence of a real placebo or sham alternative for yoga, one possibility is an active control group such as an education control. An education control could include didactic presentations corresponding to information about stress and anxiety, sleep disturbances, mind-body relationships, and orientation and mobility, but without providing specific guidance on strategies to improve these areas. The purpose of the active control would be to provide participants with equivalent attention from an instructor, matched for duration and frequency, social interactions with others, and enjoyment of the protocol. As described in the above discussion, the next iteration of this study would benefit from mixed methods that would include more objective measures as well as standard validated questionnaires.
Our recruitment efforts yielded a heterogeneous sample of participants. It may be important to distinguish between those with congenital blindness and those with acquired VI (e.g., RP) to determine whether there are differences either at baseline or in the responses to the AYT based on the duration of VI. Participants with congenital vision loss may be better adapted and therefore not as affected by balance problems (although see Stones and Kozma, 1987).77 The positive responses during the exit surveys and the high participation rates among the majority of the sample help demonstrate the feasibility of the AYT. Participants also reported perceived benefits and improved quality of life. The 1st cohort of four subjects had improved scores on the stress, anxiety, and depression questionnaires, while the 2nd cohort of three subjects had improved scores for the awareness, respiration, and balance measures. Sleep disturbances trended toward improvement in the 1st cohort, but to a lesser degree in the 2nd cohort. These promising results warrant further investigation with a larger sample size and an active control.
healthy to the extent that participation in a yoga program would not exacerbate any existing disease conditions. Participants were excluded on the basis of (a) a clinically diagnosed or significant sleep disorder (e.g., sleep apnea) or a medical condition (e.g., chronic pain) responsible for sleep complaints, (b) use of prescription sleep medication more than once a week for the duration of the study, (c) use of other psychotropic medication, or (d) consumption of more than 2-3 alcoholic beverages per day or smoking more than 10 cigarettes per day. The protocol for the study was approved by the Institutional Review Board of the Johns Hopkins University School of Medicine and followed the tenets of the Declaration of Helsinki. The consent form was read to participants due to their visual impairment. All participants provided informed consent.
the full sequence of poses taught during AYT. Each pose was held for five breaths or for as long as the subject was able. Each class included a question and answer period at the beginning and end of class.

Table 2. Details of Ashtanga-based yoga therapy.

15 min
1 Padmasana (Lotus Posture) or comfortable seated position; commence ujjayi breathing (25 breath count, 5 minutes)
2 From seated position, inhale the arms overhead drawing the palms toward each other until they touch; exhale releasing the arms down, moving with the breath; 5x
3 Transition to Table Pose
4 Marjaryasana (Cat) to Bitilasana (Cow)
5 From Table to Balasana (Child's) Pose
6 From Table to Downward-Facing Dog; move from Downward-Facing Dog to standing

30 min
7 Tadasana or Samasthiti (Mountain Pose)
8 Padangusthasana (Foot Big Toe Posture)
9 Utthita Trikonasana (Extended Triangle Posture)
10 Parivritta Trikonasana (Revolved Triangle Posture)
11 Utthita Parsvakonasana (Extended Side Angle Posture)
12 Parivritta Parsvakonasana (Revolved Side Angle Posture)
13 Prasarita Padottanasana A-D (Feet Spread Intense Stretch Posture)
14 Vrksasana (Tree Pose)
15 Utkatasana (Chair Pose)
Move to the floor, modified Sun Salute
16 Dandasana followed by Paschimottanasana (Western Intense Stretch Posture)
17 Purvottanasana (Eastern Intense Stretch Posture)
18 Janu Sirsasana A (Head to Knee Posture)
19 Marichyasana C (Dedicated to Marichi)
20 Navasana (Boat Posture)
21 Baddha Konasana A & B (Bound Angle Posture)
22 Setu Bandha Sarvangasana (Bridge Pose)

15 min
23 Paschimottanasana (Western Intense Stretch Posture)
24 Padmasana (Lotus Posture) or Sukhasana (Easy Pose), breathe 10x
25 Savasana (Corpse Pose) (10 minutes)
or negative psychosocial states. With the exception of the balance and RR measures, the other outcomes for the psychosocial states were self-reported, and the subjects' responses may have been biased since it was not possible to mask the subjects to the intervention.
|
2016-04-30T06:45:01.844Z
|
2012-04-19T00:00:00.000
|
{
"year": 2012,
"sha1": "df2632e494e73c0882f9916d8caac24b01de5a9a",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepress.org/journals/index.php/ams/article/download/ams.2012.e5/pdf",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "0a8b39f6d000843846d40b17565864ff56e88f58",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
254113044
|
pes2o/s2orc
|
v3-fos-license
|
On the dynamics of the general Bianchi IX spacetime near the singularity
We show that the complex dynamics of the general Bianchi IX universe in the vicinity of the spacelike singularity can be approximated by a simplified system of equations. Our analysis is mainly based on numerical simulations. The properties of the solution space can be studied by using this simplified dynamics. Our results will be useful for the quantization of the general Bianchi IX model.
Introduction
The problem of spacetime singularities is a central one in classical and quantum theories of gravity. Given some general conditions, it was proven that general relativity leads to singularities, among which special significance is attributed to big bang and black hole singularities [1].
The occurrence of a singularity in a physical theory usually signals the breakdown of that theory. In the case of general relativity, the expectation is that its singularities will disappear after quantization. Although a theory of quantum gravity is not yet available in finite form, various approaches exist within which the question of singularity avoidance can be addressed [2]. Quantum cosmological examples for such an avoidance can be found, for example, in [3][4][5][6] and the references therein.
Independent of the quantum fate of singularities, the question of their exact nature in the classical theory, and in particular for cosmology, is of considerable interest and has a long history; see, for example [7,8], for recent reviews. This is also the topic of the present paper.
Already in the 1940s, Evgeny Lifshitz investigated the gravitational stability of non-stationary isotropic models of universes. He found that the isotropy of space cannot be retained in the evolution towards singularities [10] (see [11] for an extended physical interpretation). This motivated the activity of the Landau Institute in Moscow in examining the dynamics of homogeneous spacetimes [12]. A group of relativists inspired by Lev Landau, including Belinski, Khalatnikov and Lifshitz (BKL), started investigating the dynamics of the Bianchi VIII and IX models near the initial spacelike cosmological singularity [13]. After several years, they found that the dynamical behaviour can be generalized to a generic solution of general relativity [14]. They did not present a mathematically rigorous proof, but rather a conjecture based on deep analytical insight. It is called the BKL conjecture (or the BKL scenario if specialized to the Bianchi-type IX model). The BKL conjecture is a locality conjecture stating that terms with temporal derivatives dominate over terms with spatial derivatives when approaching the singularity (with the exception of possible 'spikes' [8,9]). Consequently, points in space decouple and the dynamics then turn out to be effectively the same as those of the (non-diagonal) Bianchi IX universe. (In canonical gravity, this is referred to as the strong coupling limit, see e.g. [2, p. 127].) The dynamics of the Bianchi IX model towards the singularity are characterized by an infinite number of oscillations, which give rise to a chaotic character of the solutions (see e.g. [15]). Progress towards improving the mathematical rigour of the BKL conjecture has been made by several authors (see e.g. [16]), while numerical studies giving support to the conjecture have been performed (see e.g. [17]).
The dynamics of the diagonal Bianchi IX model, in the Hamiltonian formulation, were studied independently from BKL by Misner [18,19]. Misner's intention was to search for a possible solution to the horizon problem by a process that he called "mixing". Ryan generalized Misner's formalism to the non-diagonal case in [20,21]. A qualitative treatment of the dynamics for all the Bianchi models may be found in the review article by Jantzen [22], and we make reference to it whenever we obtain similar results.
Part of the BKL conjecture is that non-gravitational ('matter') terms can be neglected when approaching the singularity. An important exception is the case of a massless scalar field, which has analogies with a stiff fluid (equation of state p = ρ) and, in Friedmann models, has the same dependence of the density on the scale factor as anisotropies (ρ ∝ a⁻⁶). As was rigorously shown in [23], such a scalar field will suppress the BKL oscillations and thus is relevant during the evolution towards the singularity. Arguments for the importance of stiff matter in the early universe were already given by Barrow [24].
In our present work, we shall mainly address the general (non-diagonal) Bianchi IX model near its singularity. Our main motivation is to provide support for a rather simple asymptotic form of the dynamics that can suitably model its exact complex dynamics. We expect this to be of relevance in the quantization of the general Bianchi IX model, which we plan to investigate in later papers; see, for example [25]. Apart from a few particular solutions that form a set of measure zero in the solution space, no general analytic solutions to the classical equations of motion are known. Therefore, we will restrict ourselves to qualitative considerations which will be supported by numerical simulations. The examination of the non-diagonal dynamics presented in [26], though it is mathematically satisfactory, is based on the qualitative theory of differential equations, which is of little use for our purpose.
Our paper is organized as follows. Section 2 contains the formalism and presents our main results for a general Bianchi IX model. We first specify the kinematics and dynamics. We then consider a matter field in the form of (tilted) dust. This is followed by investigating the asymptotic regime of the dynamics near the singularity. Our conclusions are presented in Sect. 3. The numerical methods used in our numerical simulations are described in the Appendix.
Kinematics
The general non-diagonal case describes a universe with rotating principal axes. The metric in a synchronous frame can be given as follows (see for the following e.g. [27,28]):

ds² = −N²(t) dt² + h_ij(t) σ^i σ^j ,

where N is the lapse function. Spatial hypersurfaces in the spacetime are regarded topologically as S³ (describing closed universes), which can be parametrized by using three angles θ, φ, ψ ∈ [0, π] × [0, 2π] × [0, π]. The basis one-forms read

σ¹ = −sin(ψ) dθ + cos(ψ) sin(θ) dφ ,
σ² = cos(ψ) dθ + sin(ψ) sin(θ) dφ ,
σ³ = dψ + cos(θ) dφ .

The σ^i, i = 1, 2, 3, are dual to the vector fields X_i, which together with X_0 = ∂_t form an invariant basis of the Bianchi IX spacetime. The X_i are constructed from the Killing vectors that generate the isometry group SO(3, R) (see [27] for more details). The basis one-forms satisfy the relation dσ^i = (1/2) C^i_jk σ^j ∧ σ^k, with C^i_jk = ε^i_jk being the structure constants of the Lie algebra so(3, R). The X_i obey the algebra [X_i, X_j] = −C^k_ij X_k. We parametrize the metric coefficients in this frame as h_ij = (R^T h̄ R)_ij, where R = R(θ, φ, ψ) is the rotation matrix built from the Euler angles and

h̄ = diag(Γ₁, Γ₂, Γ₃) , Γ₁ = e^{2(α + β₊ + √3 β₋)} , Γ₂ = e^{2(α + β₊ − √3 β₋)} , Γ₃ = e^{2(α − 2β₊)} .

The variables α, β₊, and β₋ are known as the Misner variables. The scale factor exp(α) is related to the volume, while the anisotropy factors β₊ and β₋ describe the shape of this model universe. The variables Γ₁, Γ₂ and Γ₃ were used by BKL in their original analysis [31].
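As a small numerical aid for the later sections, the Python sketch below converts between the Misner variables (α, β₊, β₋) and the variables Γ_i and checks the identity Γ₁Γ₂Γ₃ = e^{6α}; it assumes the Misner convention written above, and the function names are illustrative only.

import numpy as np

SQRT3 = np.sqrt(3.0)

def misner_to_gamma(alpha, beta_plus, beta_minus):
    # Gamma_i = exp(2 alpha + 2 beta_i) with beta_1 = beta_+ + sqrt(3) beta_-,
    # beta_2 = beta_+ - sqrt(3) beta_-, beta_3 = -2 beta_+ (assumed convention).
    g1 = np.exp(2.0 * (alpha + beta_plus + SQRT3 * beta_minus))
    g2 = np.exp(2.0 * (alpha + beta_plus - SQRT3 * beta_minus))
    g3 = np.exp(2.0 * (alpha - 2.0 * beta_plus))
    return g1, g2, g3

def gamma_to_misner(g1, g2, g3):
    # Work with log Gamma_i, which is also the numerically better-behaved
    # choice near the singularity.
    l1, l2, l3 = np.log(g1), np.log(g2), np.log(g3)
    alpha = (l1 + l2 + l3) / 6.0
    beta_plus = (l1 + l2 - 2.0 * l3) / 12.0
    beta_minus = (l1 - l2) / (4.0 * SQRT3)
    return alpha, beta_plus, beta_minus

# Consistency checks: Gamma_1 Gamma_2 Gamma_3 = exp(6 alpha) and round trip.
a, bp, bm = -1.3, 0.4, -0.2
g = misner_to_gamma(a, bp, bm)
assert np.isclose(g[0] * g[1] * g[2], np.exp(6.0 * a))
assert np.allclose(gamma_to_misner(*g), (a, bp, bm))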
The Euler angles θ , φ, and ψ are now dynamical quantities and describe nutation, precession, and pure rotation of the principal axes, respectively. In the case of Bianchi IX spacetime, the group SO(3, R) is the canonical choice for the diagonalization of the metric coefficients. For a treatment of other Bianchi models, see [22].
Dynamics
In the following, we shall discuss the Hamiltonian formulation of this model. In order to keep track of the diffeomorphism (momentum) constraints, we replace the metric (1) by the ansatz (8), where N^i are the shift functions. The Hamiltonian formulation was first derived in a series of papers by Ryan: the symmetric (non-tumbling) case, obtained by constraining ψ and φ to be constant and keeping θ dynamical, is discussed in [20], and the general case can be found in [21]. We write the Einstein-Hilbert action in the well-known ADM form, with K_ij the extrinsic curvature and D_i the spatial covariant derivative in the non-coordinate basis {X_i}. We will set (3/4πG) ∫ σ¹ ∧ σ² ∧ σ³ = 1 for simplicity. The three-dimensional curvature (3)R on spatial hypersurfaces of constant coordinate time can be evaluated explicitly in this parametrization. We now turn to the calculation of the kinetic term and the diffeomorphism constraints. For this purpose, we define an antisymmetric angular velocity tensor ω_ij through a matrix equation whose right-hand side can be computed explicitly using (7). The Lagrangian in the gauge N^i = 0 then involves 'moments of inertia' I_i; note, in particular, that the term (1/2)(I₁ ω₂₃² + I₂ ω₃₁² + I₃ ω₁₂²) would formally correspond to the rotational energy of a rigid body if the moments of inertia were constant. The canonical momenta conjugate to the Euler angles follow in the usual way, and it is convenient to introduce (non-canonical) angular-momentum-like variables l_i, whose relation to the canonical momenta can be given explicitly. It is readily shown that the variables l_i obey the Poisson bracket algebra {l_i, l_j} = −C^k_ij l_k. After the usual Legendre transform, we obtain the Hamiltonian constraint. From (9), we find that the diffeomorphism constraints (∂L/∂N^i = 0) can be expressed through the ADM momentum and, finally, in terms of the angular-momentum-like variables; that is, we can identify the diffeomorphism constraints with a basis of the generators of SO(3, R). This yields the full gravitational Hamiltonian. From the diffeomorphism constraints (22) we conclude that in the vacuum case l_i = 0 and that therefore no rotation is possible; that is, we recover the diagonal case. If we want to obtain a Bianchi IX universe with rotating principal axes, we are thus forced to add matter to the system. A formalism for obtaining equations of motion for general Bianchi class A models filled with fluid matter was developed by Ryan [28]. For simplicity, we will only consider the case of dust as discussed by Kuchař and Brown in [29]. If we were, for example, interested in the study of the quantum version of this model, it would be desirable to introduce a fundamental matter field instead of an ideal fluid. Standard scalar fields alone cannot lead to a rotation for Bianchi IX models. The easiest way to achieve this is, to our knowledge, the introduction of a Dirac field [30].
Adding dust to the system
The energy-momentum tensor for dust reads T^μν = ρ u^μ u^ν. The local energy conservation ∇_μ T^μν = 0 leads to a geodesic equation for the positions of the dust particles. Let us start therefore by considering the geodesic equation for a single dust particle, whose four-velocity we can express in the non-coordinate frame σ^i used above by the corresponding Pfaffian form. We partially fix the gauge by setting N^i = 0. The normalization condition u_μ u^μ = −1 then determines u_0 up to a sign; we have chosen the minus sign because this guarantees that the proper time in the frame of the dust particle has the same orientation as the coordinate time t. The geodesic equation for the spatial components of the four-velocity implies the existence of a constant of motion. To see this explicitly, we compute the time derivative of the expression Σ_{i=1,2,3} u_i u_i and convince ourselves that it vanishes identically. Thus the Euclidean sum Σ_i u_i u_i is a constant of motion. Defining u ≡ (u₁, u₂, u₃)^T, the geodesic equation (26) can be rewritten in vector notation, where "×" denotes the usual cross product in three-dimensional Euclidean space. Defining for convenience the unit vector v ≡ u/√(u·u), the geodesic equation simplifies accordingly.

Note that we can also write ω in terms of the angular-momentum-like variables; it will thus be possible to eliminate ω from the geodesic equation by using the diffeomorphism constraints. We now add homogeneous dust to the system. The formalism developed in [29] leads to a Hamiltonian and diffeomorphism constraints for dust in a Bianchi IX universe in which p_T denotes the momentum canonically conjugate to T, where T is the global 'dust time'. Since the Hamiltonian does not explicitly depend on T, the momentum p_T is a constant of motion. The fact that l₁² + l₂² + l₃² commutes with H implies that l₁² + l₂² + l₃² = (C p_T)² is a conserved quantity. This is consistent with (27). We note that a similar form of the constraints (30) was already presented in [20,21]. The formalism is not entirely canonical and must be complemented by the geodesic Eq. (26).
For our numerical purposes, it will be convenient to rewrite the equations of motion in the variables Γ_i introduced in (6). We find that the choice of the variables log Γ_i allows for better control over the error in the Hamiltonian constraint.
Moreover, we pick the quasi-Gaussian gauge N = e^{3α} = √(Γ₁Γ₂Γ₃), N^i = 0. Recall that the singularity is reached in a finite amount of comoving time (corresponding to the gauge N = 1). The choice N = e^{3α} allows us to resolve the oscillations in the approach towards the singularity. With these choices the Hamiltonian constraint takes a simple form in which the moments of inertia are expressed through the Γ_i. The diffeomorphism constraints read l_i = p_T C v_i and can be used to eliminate the angular momentum variables from the equations of motion. These equations can then be written out explicitly, where we have absorbed a factor of 12 into p_T for convenience. Note that these equations are exact. (In [13], the matter terms were neglected.) We use the diffeomorphism constraints to eliminate ω from the geodesic equation (29). If expressed in the gauge N = e^{3α} and using the Γ_i, the geodesic Eq. (29) can be rewritten accordingly. Together with the constraint v₁² + v₂² + v₃² = 1, this is all we need for numerical integration. Note that all dependence on the Euler angles and their momenta has dropped out from the equations of motion (33, 34). The numerical method we use is described in the Appendix.
The tilted dust case
A qualitative picture for the dynamics of the universe can be obtained by considering the Hamiltonian (30) in the quasi-Gaussian gauge N = e^{3α}, N^i = 0. The Hamiltonian can then be interpreted as the "relativistic energy" of a point particle (called the universe point) with "spacetime coordinates" (α, β₊, β₋). The universe point is subject to the forces generated by the dynamical potential, which is depicted in Fig. 1. The contour lines of the curvature potential −(e^{6α}/12) (3)R are represented by solid black lines.
The curvature potential is exponentially steep and takes its minimum at the origin β_± = 0. When the universe evolves towards the singularity (α → −∞), the curvature potential walls move away from the origin while becoming effectively hard walls in the vicinity of the singularity. The remaining term in the potential can be interpreted as three singular centrifugal potential walls. They are represented by the dashed red lines. Asymptotically close to the singularity, these walls are expected to become static. In general, however, the centrifugal potential walls are dynamical and change in a complicated manner dictated by the geodesic Eq. (34). The centrifugal walls will prevent the universe point from penetrating certain regions of the configuration space. Misner [19] and Ryan [20,21] employed these facts to obtain approximate solutions in a diagrammatic form. The other Bianchi models can be treated in a similar way [22].
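To visualize the triangular wall structure just described, one can plot the textbook closed form of the Bianchi IX anisotropy potential in the (β₊, β₋) plane; the expression used below is the standard Misner form, whose overall normalization may differ from the conventions of this paper, so only the shape of the contours should be compared with Fig. 1.

import numpy as np
import matplotlib.pyplot as plt

def misner_potential(bp, bm):
    # Standard Bianchi IX anisotropy potential V(beta_+, beta_-) with V(0,0) = 0
    # and V ~ 8(beta_+^2 + beta_-^2) near the origin (normalizations vary in the literature).
    s3 = np.sqrt(3.0)
    return (np.exp(-8.0 * bp) / 3.0
            - (4.0 / 3.0) * np.exp(-2.0 * bp) * np.cosh(2.0 * s3 * bm)
            + 1.0
            + (2.0 / 3.0) * np.exp(4.0 * bp) * (np.cosh(4.0 * s3 * bm) - 1.0))

bm, bp = np.meshgrid(np.linspace(-1.2, 1.2, 400), np.linspace(-1.5, 1.0, 400))
levels = np.logspace(-2, 3, 15)
plt.contour(bm, bp, misner_potential(bp, bm), levels=levels)
plt.xlabel("beta_-")
plt.ylabel("beta_+")
plt.title("Exponentially steep walls of the Bianchi IX curvature potential")
plt.show()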
Special classes of solutions
Before doing the numerics, we comment on particular classes of solutions. One class of solutions is obtained if we choose, for example, the initial conditions v₁ = 0 = v₂ and v₃ = 1. The geodesic Eq. (34) then implies that the velocities stay constant in time. This implies that at all times l₁ = 0 = l₂ and l₃ = p_ψ = p_T C. This class of solutions is known as the non-tumbling case. Furthermore, there are classes of solutions which are rotating versions of the Taub solution. These solutions should be divided into two subclasses: one class that oscillates between the centrifugal walls and the curvature potential, and one class that runs through the valley straight into the singularity. We set β₋ = 0; for the Γ_i variables this means that Γ₁ = e^{2α} e^{2β₊} = Γ₂ and Γ₃ = e^{2α} e^{−4β₊}. With this choice we obtain I₃ = 0 and 3I₁ = 3I₂ = sinh²(3β₊). Most importantly, the geodesic Eq. (34) is trivially satisfied, that is, v₁ = v₂ = 1/√2 and v₃ = 0 for all times. When setting C = 0 we obtain the diagonal case, which contains the isotropic case of a closed Friedmann universe. The simulation plotted in Fig. 2 was performed for the tumbling case, that is, the v_i are chosen to be non-zero.
The asymptotic regime close to the singularity
In order to simplify the dynamics of the general case, BKL made two assumptions based on qualitative considerations of the equations of motion. The first assumption states that the anisotropy of space grows without bound. This means that the solution enters the regime Γ₁ ≫ Γ₂ ≫ Γ₃.
The ordering of indices is irrelevant. In fact, there are six possible orderings of indices, which each correspond to the universe point being constrained to one of the six regions bounded by the rotation and centrifugal walls sketched in Fig. 1. The region Γ₁ > Γ₂ > Γ₃ corresponds to the right region above the line β₋ = 0 in Fig. 1. More precisely, the inequality (37) means that Γ₂/Γ₁ → 0 and Γ₃/Γ₂ → 0.
Our numerical simulations support the validity of this assumption (see the plot of the ratios Γ₂/Γ₁ and Γ₃/Γ₂ in Fig. 3). We provide plots of the two ratios Γ₂/Γ₁ and Γ₃/Γ₂ and of the velocities v in order to provide a sanity check of the approximation we will perform later on.

Fig. 3 Plots of the ratios Γ₂/Γ₁, Γ₃/Γ₂ and the velocity v. The plots correspond to the numerical solution presented in Fig. 2. The peaks in the ratios appear during bounces of the universe point with the centrifugal walls. As we can see, the height of these peaks decreases in the evolution towards the singularity.

The second assumption made by BKL states that the Euler angles assume constant values: that is, the rotation of the principal axes stops for all practical purposes and the metric becomes effectively diagonal. The analysis of BKL [31] supports the consistency of making both assumptions at the same time. Similar heuristic considerations can possibly be applied to other Bianchi models as well [22]. In the dust model under consideration, this assumption is equivalent to the statement that the dust velocities v assume constant values v → v^(0). Our numerical results indicate that this is in fact the case (see Fig. 3). BKL then arrive at a simplified effective set of equations.
Let us now carry out the approximation and apply it to our equations of motion. The kinetic term stays untouched during the approximation. The first step in the approximation is to ignore the rotational potential. In view of the strong inequality (37), we approximate the curvature potential by keeping only its dominant terms, and we approximate the centrifugal potential in a similar way. Note that one centrifugal wall is thereby ignored completely. Having Fig. 1 in mind, this approximation is well motivated, since only two of the centrifugal walls are expected to have a significant influence on the dynamics of the universe point. After defining suitable new variables, we arrive at a simplified Hamiltonian constraint and equations of motion, which coincide with the asymptotic form of the equations obtained in [31]. Equation (43) can now be treated by the numerical methods which we have used in the previous sections. One must ensure that the initial conditions are chosen such that the simulation starts close to the asymptotic regime (37).
Conclusions
The numerical simulations indicate that the non-diagonal Bianchi IX solutions, with tilted dust, evolve into the regime where Γ₁ ≫ Γ₂ ≫ Γ₃ and v_i ≈ const. These results motivate us to formulate the conjecture: Given a tumbling solution to the general Bianchi IX model filled with pressureless tilted matter, there exists t₀ ∈ R such that the solution is well approximated by a solution to the asymptotic equations of motion for all times t > t₀ describing the vicinity of the singularity.
To make the notion of "approximation" mathematically more precise, a suitable measure of the "distance" on the set of solutions is needed. For this purpose, we propose to use a simple measure (44) that compares the numerical solution {Γ₁, Γ₂, Γ₃} of the exact equations of motion (33)-(34) with the numerical solution of the asymptotic equations of motion (43). We have evolved the exact system of equations from t = 0 forward in time until t = 3 × 10⁶, using the same initial conditions as the ones we used to obtain the solution shown in Fig. 2. We then took the final state at t = 3 × 10⁶ as an initial condition for the asymptotic system of equations and evolved it backwards in time towards the re-bounce until t = −980. Figure 4 presents the measure (44) as a function of time. We can see a fast decrease of the measure with increasing time (evolution towards the singularity) and a fast increase of the measure with decreasing time (evolution away from the singularity).

Fig. 4 The difference between the exact and the asymptotic solutions: a evolution towards the singularity, b evolution away from the singularity.
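Since the explicit formula (44) is not reproduced above, the snippet below evaluates one plausible choice of such a distance — the mean relative deviation of the logarithmic scale factors on a common time grid — purely to illustrate how the comparison between the exact and asymptotic runs can be set up; both the formula and the variable names are assumptions made for this example.

import numpy as np

def solution_distance(gamma_exact, gamma_asym, eps=1e-12):
    # gamma_exact, gamma_asym: arrays of shape (3, n_times) holding Gamma_1..Gamma_3
    # of the exact and asymptotic runs, already interpolated onto the same times.
    le = np.log(np.asarray(gamma_exact))
    la = np.log(np.asarray(gamma_asym))
    # Mean relative deviation of log Gamma_i at each time (an assumed stand-in
    # for the measure (44) of the paper).
    return np.mean(np.abs(le - la) / np.maximum(np.abs(le), eps), axis=0)

# Usage sketch: interpolate the asymptotic run onto the time grid of the exact run
# (e.g. with np.interp applied to each log Gamma_i), then plot the returned array
# against time to see the decay of the deviation towards the singularity.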
Our numerical simulations give strong support to the conjecture concerning the asymptotic dynamics of the general Bianchi IX spacetime put forward long ago by Belinski et al. [31]. We remark that approximating the diagonal or non-tumbling case by the asymptotic dynamics (43) is invalid (see [32] for more details).
It is sometimes stated that "matter does not matter" in the asymptotic regime, but this does not mean that one is allowed to use the dynamics of the purely diagonal case. One only encounters an effectively diagonal case, which is expressed in terms of the directional scale factors {a, b, c}. So there exist serious differences between the purely diagonal and effectively diagonal cases (see [32] for more details).
Employing the asymptotic form of the equations of motion may enable one to study the chaotic behaviour and other properties of the solution space for the general model. This is also important for quantizing the general Bianchi IX model, where the quantization of the exact dynamics seems to be quite difficult, whereas the quantization of the asymptotic case seems to be feasible [25].

Fig. 5 This plot can be viewed as another check of the numerics. Between two successive bounces from the potential walls this quantity should be close to one. If this ceased to be true, it would indicate that the error in the Hamiltonian constraint becomes relevant and cannot be neglected in the approach towards the singularity (see [7] for a more detailed discussion). Both plots correspond to the numerical solution shown in Fig. 2; the error there is of the order of 10⁻¹⁴.

We can integrate the equations of motion together with the geodesic equation to obtain a numerical solution to the system. We set up initial conditions at t = 0 and evolve the system forward in time towards the final singularity and away from the rebounce. A major problem in numerical relativity is that the Hamiltonian constraint is not preserved exactly by the numerical procedure. Similarly to [33,34], we find that the error in the Hamiltonian constraint varies most strongly right after the start of the simulation. Furthermore, it varies strongly when the evolution of the universe approaches the point of maximal expansion. While approaching the singularity, the error approaches an approximately constant value. This can be seen in Fig. 5. Therefore we can minimize the error by choosing the initial conditions far away from the point of maximal expansion. Moreover, it turned out that the error can be further reduced by constraining the solver's maximally allowed time step size. This time step size should, however, not be chosen too small, since small time step sizes can drive the propagation of round-off errors. Small step sizes are, of course, also numerically more expensive. By manually fine-tuning the initial conditions and the maximally allowed time step size it was possible to keep the order of the error lower than 10⁻¹⁵.
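The workflow just described — integrating in the log Γ_i variables, capping the solver's maximal step size, and monitoring the Hamiltonian constraint along the run — can be sketched as below; the right-hand side and the constraint function are placeholders standing in for Eqs. (33)-(34) and the Hamiltonian constraint, which are not reproduced in this text, so the snippet only illustrates the bookkeeping around the solver.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Placeholder for the exact equations of motion (33)-(34); y collects
    # (log Gamma_i, their conjugate momenta, v_i) in some chosen ordering.
    raise NotImplementedError("insert the explicit right-hand side here")

def hamiltonian_constraint(y):
    # Placeholder: should return a quantity that vanishes on the constraint surface.
    raise NotImplementedError("insert the Hamiltonian constraint here")

def evolve(y0, t_span, max_step=1.0):
    sol = solve_ivp(rhs, t_span, y0, method="LSODA",
                    max_step=max_step, rtol=1e-12, atol=1e-14)
    # Track the constraint violation and the normalization of v along the run;
    # here v is assumed to occupy the last three slots of the state vector.
    violation = np.array([abs(hamiltonian_constraint(y)) for y in sol.y.T])
    v_norm = np.linalg.norm(sol.y[-3:, :], axis=0)
    return sol, violation, v_norm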
Recall that the dynamics of Bianchi IX are chaotic, that is, slightly changing initial conditions have a large effect on the long time behaviour of solutions. Since the propagation of random numerical errors cannot be avoided, we will be dealing with a "butterfly effect" and it should in general not be expected that our numerical solution is an actual approximation of some exact solution of the equations of motion when considering large time intervals.
|
2022-12-01T15:48:12.863Z
|
2018-08-30T00:00:00.000
|
{
"year": 2018,
"sha1": "f1ede9651dac711dd9c3e7119cc0e79cd6d94afd",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-018-6155-8.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "f1ede9651dac711dd9c3e7119cc0e79cd6d94afd",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": []
}
|
234211845
|
pes2o/s2orc
|
v3-fos-license
|
Application of fertilization and microbiological preparations in ecological technologies of agricultural enterprises
One of the methods of ecologization of agricultural enterprises is the use of fertilization and microbiological preparations. The paper presents the results of field studies into the effectiveness of preparing soybean, amaranth and buckwheat seeds for sowing by treatment with Baikal EM-1 and potassium humate. The use of Baikal EM-1 was found to improve the biological properties of the soil, reduce the weediness of crops, and enhance plant growth and development. As a result, the use of Baikal EM-1 led to an increase in the green mass yield of soybeans by 8.5% and in its seed yield by 7.9%. For amaranth, the values are 24.6 and 24.2%, respectively, and for buckwheat they are 4.9 and 32.5%. The effectiveness of potassium humate was lower: the green mass yield of amaranth increased by 5.9% and the seed yield by 5.5%; the values for buckwheat were 3.9 and 30.7%, respectively. The results indicate the prospects for the use of these preparations in ecological crop technologies in agricultural enterprises.
Introduction
An integral part of the sustainable development of rural areas is ecological agriculture [1,2]. Now ecological agriculture is becoming a real trend in our country. One of its directions is organic agriculture, which is steadily developing in Russia. It is based on the use of alternative means of production, since the use of synthetic agrochemicals is deemed unacceptable [3]. This is due to the negative consequences of the use of pesticides: the emergence of resistant forms of phytophages and phytopathogens and, as a result, increased pesticidal pressure; disruption of biological equilibrium in agrocenoses. The use of pesticides is associated with an increase in toxicological and ecotoxicological risks to the environment and human health [4]. They significantly limit the biological activity of soils, in particular, the activity of soil enzymes and microorganisms producing them.
However, it is allowed to use a variety of biological fungicides and insecticides in organic farming, which, subject to the application regulations, are effective for the control of harmful organisms [5,6].
The fertilizer system in organic farming also requires a science-based approach. For example, one of the mechanisms for increasing the availability of nutrients from soil-improving agents approved for use in organic production is to increase the biological activity of the soil, including through microbiological and organic fertilizers [7]. Unlike chemical preparations, biological preparations are recognized as harmless to humans, animals, bees, birds, and fish. They decompose quickly and do not induce tolerance in insects. The first preparations were created more than 30 years ago and have been widely recognized throughout the world. More than 110 countries use them to increase yields and improve the quality of farmed products [8]. This necessitates monitoring and research support.
In this regard, the aim of the research was to establish the effectiveness of seed preparation with the Baikal EM-1 and potassium humate preparations, in terms of the effect of this method on the biological properties of the soil, the resistance of cultivated crops to stress factors, and their development and productivity on the sod-podzolic soils of the Yaroslavl region.
Materials and methods
The studies were carried out in 2019 in the field experiment of the department of Agronomy of the FSBEI HE Yaroslavl State Agricultural Academy in the crop rotation: legumes - row crops - spring grains. The experiment was established using the split-plot method with randomized placement of variants and three replications. Scheme of the three-factor experiment: factor A - group of crops (legumes, in 2019 - soybeans (Glycine max L.), Svetlaya variety; row crops, in 2019 - amaranth (Amaranthus L.), Kinelsky 254 variety; spring crops, in 2019 - buckwheat (Fagopyrum esculentum Moench), Kalininskaya variety); factor B - the system of tillage (plowing, surface); factor C - preparations (without preparations; preparation 1, in 2019 - Baikal EM-1; preparation 2, in 2019 - Potassium Humate). The soil is sod-podzolic gley, medium loamy. The area of the elementary plots is 12 m², and the total area of the experiment is 648 m². The article describes data on the variant of surface tillage, which meets the requirements of ecological farming as a resource-saving system to a higher degree.
The objects of research were preparations of various operating principles: Baikal EM-1 (manufacturer LLC NPO EM-CENTER) containing microorganisms, and Potassium Humate Sakhalin BP 2.5% (manufacturer LLC Biofit), which is a fertilizer based on humic acid salts with a set of nutrient elements.
Experimental variables were determined by research methods generally accepted in experimental work: the activity of cellulose decomposition - by the application method; the number of earthworms (family Lumbricidae) - by the method of trial plots; ground beetles (family Carabidae) - by Barber traps; the number of weeds - according to the method of B.A. Smirnov; crop diseases - according to the methodology of VNIIZR; germination was determined as the ratio of germinated plants to the number of seeds sown (in %); the height of the plants was determined by measuring at constant sites; yields - by the plot method with conversion to standard moisture. Statistical processing used the analysis of variance (ANOVA).
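As an illustration of the statistical processing step, the sketch below runs a three-factor analysis of variance with statsmodels on hypothetical plot-level yield data; the column names, the synthetic values, and the simple fixed-effects formula (which ignores the split-plot error structure of the real design) are assumptions made only for the example.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
# Hypothetical yields (t/ha): 3 crops x 2 tillage systems x 3 preparations x 3 replicates.
for crop, base in [("soybean", 2.0), ("amaranth", 1.5), ("buckwheat", 1.2)]:
    for tillage in ["plowing", "surface"]:
        for prep, bonus in [("none", 0.0), ("baikal", 0.2), ("humate", 0.1)]:
            for rep in range(3):
                rows.append({"crop": crop, "tillage": tillage, "preparation": prep,
                             "yield_t_ha": base + bonus + rng.normal(0.0, 0.05)})
df = pd.DataFrame(rows)

model = ols("yield_t_ha ~ C(crop) * C(tillage) * C(preparation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))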
The weather conditions of the growing season of 2019 in the study region were characterized by increased temperature at the beginning of the growing season (May-June) and lower temperatures at the end (July-August), while the amount of precipitation was significantly different from the average annual observations in July -the excess was 77%. In general, meteorological conditions can be characterized as atypical.
Results and Discussion
The activity of cellulose decomposition, one of the indicators characterizing the energy of mobilization of soil processes in general, is of interest for study [9], because the stated characteristics of the Baikal EM-1 preparation include the activation of beneficial soil microflora.
The results of determining this indicator showed that the use of Baikal EM-1 increased the activity of soil microorganisms under amaranth by 6.5% and under buckwheat by 2.4%, but under soybeans the indicator decreased (by 6.5%), which is likely due to the ability of soybeans, as a legume, to independently create a focus of soil microflora activity in their rhizosphere (Table 1). An important indicator of soil fertility, especially of sod-podzolic soils characterized by low natural fertility, is the number of earthworms. As is known, they have a very positive effect on agrophysical properties, decreasing soil density and hardness, improving soil structure and water permeability, as well as influencing the processing of organic matter and the accumulation of humus [10,11]. A positive effect of Baikal EM-1 in comparison with the variant without preparations was noted in the amaranth field, where the number of earthworms increased by 9.1%, and in the buckwheat field, where it increased by 30.0%; Potassium Humate was effective in soybean (the indicator significantly exceeded the control by 49.9%) and buckwheat. This suggests the creation of favorable microbiological and nutritional conditions for earthworms when using the preparations.
Other representatives of the beneficial soil fauna are predatory ground beetles, which are considered bioindicators of the ecological well-being of phytocenoses [12].
On average during the growing season, it was found that the use of the Baikal EM-1 and Potassium Humate preparations increased the number of ground beetles under amaranth by 19.6 and 15.5% and under buckwheat by 29.8 and 0.8%, respectively, in comparison with the control. Under soybean, the indicator declined by 9.5% for Baikal EM-1 and by 26.6% for Potassium Humate.
In ecological farming systems, where the use of pesticides is unacceptable, an important link is the maintenance of a favorable phytosanitary situation through mechanical and biological methods [13].
One such method can be the use of preparations that increase the resistance of cultivated plants to phytopathogens and their competitiveness against weeds by improving the soil microbiota and optimizing the nutritional regime.
The prevalence and intensity of diseases were assessed on soybeans and buckwheat. Amaranth plants did not show signs of any diseases (Table 2).
Soybean plants were exposed to phytopathogens to a small extent - ascochytosis (pathogen Ascochyta sojaecola Abramov) and Septoria blight (pathogen Septoria glycines Hemmi) were detected. It is noteworthy that the beneficial effect of using Baikal EM-1, in comparison with the variant without it, was seen in the decreased rate of disease development: it decreased by 4.0 and 0.3%, respectively, for ascochytosis and Septoria blight, while the prevalence was the same (5.0-7.0%). The use of Potassium Humate, on the contrary, increased the prevalence of diseases (by 66.0-74.6%) while decreasing their intensity (1.1-2.0 times) in comparison with the control.
On buckwheat, ascochytosis (pathogen Ascochyta fagopyri Bres.), bacteriosis (pathogen Pseudomonas syringae van Hall) and downy mildew (peronosporosis) (pathogen Peronospora fagopyri Elenev) were detected. In general, it can be noted that the prevalence of buckwheat diseases did not exceed 12.0%, and the intensity was 6.0%. At the same time, the use of the biological product Baikal EM-1 reduced the rates of disease development in comparison with the variant without its use. Thus, the prevalence of ascochytosis decreased by 1.7% and its intensity by 2.8%; the rates for bacteriosis decreased, respectively, by 1.7 and 0.2%, and for peronosporosis by 3.3 and 2.0%. In the variants using Potassium Humate, the phytosanitary situation was slightly worse: the prevalence of ascochytosis was higher not only compared to the Baikal EM-1 application (41.0%), but also compared to the control (17.0%). The intensity of bacteriosis was also higher when using the Potassium Humate preparation; in the other cases, the rates of buckwheat disease development were lower than the control.

The weed component of agrophytocenoses contributes significantly to the decrease in crop yields and the deterioration of their quality due to competition for life factors [14]. The management strategies for this component differ significantly between intensive and ecological technologies. In the first case, management is based on the chemical suppression of undesirable vegetation, which is very effective but unsafe from an environmental point of view [15]. In the second case, weed control is carried out on the basis of mechanical techniques and biological methods that do not adversely affect product quality [16-18].
In our experiment, the total number of weed plants was much higher in the amaranth planting due to its wide-row sowing, while buckwheat showed the smallest weed count, which is due to its rapid growth in the initial phases of development and its suppression of weeds. An interesting regularity is the decrease in the total number of weeds in the variants using preparations in comparison with the variants without them: when using both Baikal EM-1 and Potassium Humate, the indicator decreased significantly for all the studied crops. Thus, in the soybean planting the decrease was 51.6% in the variants with Baikal EM-1 and 24.6% in the variants with Potassium Humate; in the amaranth crop, respectively, 69.3 and 37.0%; and in buckwheat, 43.5 and 23.9%. This confirms the increased competitiveness of cultivated plants against weeds due to the creation of more favorable conditions; however, the effect of improving the microbiological regime when applying Baikal EM-1 was greater than that of optimizing nutritional conditions with Potassium Humate.
Thus, due to the creation of optimal conditions, the indicators of growth and development of cultivated plants during the growing season were mainly higher when using preparations (Table 3).
Soybean development rates were higher when using the Baikal EM-1 biological product: germination increased by 15.6% and plant height at the beginning of the growing season by 3.9%; as a result, the green mass yield increased by 8.5% and the seed yield by 7.9%. The use of Potassium Humate did not have a positive effect on the development or productivity of soybean plants in comparison with the control, which is probably due to the lower need to improve the nutritional conditions for soybeans as a crop capable of symbiotic nitrogen fixation.
The studied preparations had a positive effect on the growth and development indicators of amaranth and buckwheat plants, providing an average increase in germination of 3.8% for amaranth and 5.0% for buckwheat, while plant height increased by 18.1 and 1.4%, respectively. As a result, the preparations produced a tendency towards an increased yield of green mass: for amaranth by 24.6% with Baikal EM-1 and by 5.9% with Potassium Humate, and for buckwheat by 4.9 and 3.9%, respectively. The seed yield of the studied crops also increased when using Potassium Humate (amaranth by 5.5%, buckwheat by 30.7%), and especially Baikal EM-1 (amaranth by 24.2%, buckwheat by 32.5%).
Conclusion
The use of preparations that improve the microbiological and nutritional regimes has a positive effect. A greater effect was noted when treating the seeds with a solution of the Baikal EM-1 biological product, which improves the biological properties of the soil, reduces the harm from phytopathogens, and optimizes the conditions for the growth and development of cultivated plants, ultimately leading to an increase in the seed yield of the grown crops of 21.5% on average. This confirms the prospects of this method in ecological crop cultivation technologies in agricultural enterprises.
|
2021-05-11T00:04:32.922Z
|
2021-01-08T00:00:00.000
|
{
"year": 2021,
"sha1": "d4bd953a2308b7fea6794b6e6b7d2f0a40b1dce2",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/624/1/012234/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "aa0f250533b02f7bdbb9b3e81612264a256a5e58",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
}
|
55020373
|
pes2o/s2orc
|
v3-fos-license
|
Zoeal stages of Pseudomicippe varians Miers, 1879 (Decapoda: Brachyura: Majoidea: Majidae) and a comparison with other Majidae larvae
Pseudomicippe varians Miers, 1879 is a majid crab recorded from Western Australia (Shark Bay) and northern Queensland. The zoeal stages are described from laboratory reared material. The zoeal stages of P. varians can be easily distinguished by the absence of carapace spines and extremely large mandibles. These characters are likely diagnostic among majoideans in general. Additionally, recent phylogenetic studies of majoids using larval characters showed the Majidae as one of the few families for which there is larval support for its monophyly. Furthermore, based on the monophyly of Majidae and the morphology of P. varians, a set of characters is established that could be used as a diagnostic for majids in general.
Within this family, the genus Pseudomicippe Heller, 1861 is represented by 11 species distributed throughout the Indo-west Pacific region. Pseudomicippe varians Miers, 1879 occurs in Western Australia (Shark Bay), and northern Queensland (Griffin & Tranter 1986).
The purpose of this study is to describe the zoeal stages of P. varians, which comprise the first larval description for the genus. In addition, the zoeal information available in the literature is used to compare the findings of this study with those of other species previously described for this family.
Material and methods
A single ovigerous specimen of Pseudomicippe varians was collected in August 2001 on seaweed beds at Heron Island, Queensland, Australia (23°27′S, 151°55′E). The specimen was held in an aquarium in a temperature-controlled room (24 ± 1°C) until hatching, which occurred at night. After hatching, 50 of the most active, positively phototactic larvae were placed individually into 100 ml acrylic jars containing 50 ml of filtered seawater. The remaining larvae were kept in mass culture as extra specimens to be used for morphological description.
Newly hatched larvae were fed ad libitum with Artemia nauplii. Sea water was changed, and larvae were inspected and fed daily. All acrylic jars were washed in fresh water and air-dried before re-use with fresh seawater on the following day. Average salinity was 32 psu. A natural photoperiod was maintained (~14L:10D).
Whenever possible, a minimum of five specimens of each stage were dissected for morphological description. For slide preparations polyvinyl lactophenol was used as the mounting medium with acid fuchsin and/or chlorazol black stains. The description of setae follows Pohle and Telford (1981), but here includes only analysis by light microscopy (LM), using an Olympus BX-51 microscope with Differential Interference Contrast and drawing tube. Some of the setae designated as plumose herein may be plumodenticulate setae because of the lower resolution limits of LM as compared to scanning electron microscopy (SEM). Description guidelines of Clark et al. (1998) were generally followed.
Specimens of larval stages and a spent female crab have been deposited at the Museu de Zoologia da Universidade de São Paulo, São Paulo, Brazil, accession number 17281.
Larval development and description
Zoeal development of Pseudomicippe varians consists of two zoeal stages. The duration of the first zoeal stage was 4-11 days (3.9 ± 0.3), and that of the second stage was 5-12 days (5.0 ± 0.6). It was not possible to obtain megalopa stages because a fungus infection in our culture arrested the development of the second zoeal stage, leading to death. Only morphological changes are described for the second zoea. Carapace (Figure 1A). Dorsal, rostral, and lateral spines absent. Ventral margin posterior to scaphognathite notch with small plumodenticulate seta preceding densely plumose ''anterior seta'', followed by five plumose setae. Eyes sessile. Small indistinct median ridge frontally and a small median tubercle on posterodorsal margin, each bearing cuticular dorsal organ (sensu Martin and Laverack 1992). Pair of simple setae present on posterodorsal margin.
Antenna (Figure 1C). Biramous; spinous process of the protopod pointed, bearing two rows of sharp spinules; endopod bud present, half the length of the exopod; one-segmented exopod shorter than protopod, with two subterminal setae of similar size.
Mandible ( Figure 1D). With medial toothed molar process and much enlarged lateral incisor processes, as long as antenna. Palp absent.
Telson ( Figure 1J). Bifurcated, distinct median notch, three pairs of plumodenticulate setae on inner margin; each furcal shaft proximally bearing lateral spine, furcal shafts and spines covered in rows of spinules to just below tips. Grouped denticulettes present.
Taxonomic grouping
The zoeae of Pseudomicippe varians conform to previous characterizations of this phase for Majoidea (Rice 1980, 1983) in having nine or more setae on the scaphognathite of the maxilla, and well-developed pleopods in the second zoeal stage. However, P. varians does not share the characters proposed by Ingle (1979) for the family Majidae, which he divided into two groups according to morphological resemblance. Ingle's (1979) larval classification of the Majidae (i.e. his Majinae) included species of the genera Maja and Schizophrys.
The zoeae of the Group I taxa are diagnosed by the presence of lateral carapace spines; a dorsal spine that is often well developed and usually of moderate length; a prominent rostral spine; dorsolateral processes on abdominal somites two and three; and posterolateral processes on abdominal somites 3-5 that are prominent and sometimes long. Group II, which comprises the genera Leptomithrax and Acanthophrys, is characterized in the zoeal stages by the absence of lateral carapace spines; a dorsal spine that is sometimes reduced or absent; a rostral spine that is sometimes reduced; dorsolateral processes only on abdominal somite two; and posterolateral processes on abdominal somites 3-5 that are not prominent and sometimes short. The zoeal stages of P. varians agree with Group II in having no dorsal or lateral carapace spines; a dorsolateral process that is restricted to abdominal somite 2; posterolateral processes on abdominal somites 3-5 that are short; the basal segment of maxilliped 2 with no more than three marginal setae; and the antennal exopod with terminal setae. However, P. varians differs substantially in lacking a rostral spine and in having only one spine on each telson fork (Tables I and II). Among the Majidae, the zoeal stages of Pseudomicippe varians can be easily distinguished by the lack of rostral, dorsal, and lateral spines, the relative size of the mandible, which is as long as the antenna, and the number of furcal spines (Tables I and II). The absence of carapace spines and the very large mandible are especially distinctive and likely diagnostic among majoideans in general.
Larval comparison and taxonomic affinities among species of Majidae
Within Majidae, a considerable number of workers have addressed the larval descriptions for species of Maja, Schizophrys, Jacquinotia, Notomithrax, and Leptomithrax. However, some of these accounts are not suitable for comparisons because they lack descriptions of appendages or stages, as is the case for the descriptions of Schizophrys aspera (Milne-Edwards, 1834) (cf. Kurata 1969; Tirmizi & Kazmi 1987), Leptomithrax edwardsi (de Haan, 1839) and L. bifidus Ortmann, 1893 (cf. Kurata 1969), and L. longimanus Miers, 1876 (cf. Webber & Wear 1981). Descriptions of the three species of majids by Kurata (1969) lack zoeal information such as on the maxillule, maxilla and other possibly distinctive appendages. The description of Tirmizi and Kazmi (1987) for S. aspera was included in Tables I and II, but the data should be viewed with caution because of considerable disparities between illustrations and descriptions. Due to the inaccurate or incomplete nature of these older descriptions, the comparisons we make are based on the most recent larval descriptions or those we consider to provide adequate morphological information.
The genus Jacquinotia, represented by J. edwardsii (Jacquinot, 1853), was considered a member of the family Mithracidae by Webber and Wear (1981). However, Griffin and Tranter (1986) placed this monotypic genus within the family Majidae based on several adult characters, and we follow this taxonomic placement here since the larval characters for this genus match the pattern observed for Majidae. Additionally, the genus Eurynome Leach, 1814, traditionally viewed as a member of the family Pisidae (Sakai 1965, 1976; Griffin 1966; Ingle 1980, 1991; Salman 1982; Aiyun & Siliang 1991; Santana et al. 2004), was placed in the family Majidae by Hale (1927-1929) and by Griffin and Tranter (1986). In this case we do not follow Hale's or Griffin and Tranter's approach, leaving the genus as a member of Pisidae based on the differences with the majid larval characters. This is largely based on the setation of the maxilla, first maxilliped, and other appendages (Tables I and II). The genus Acanthophrys A. Milne-Edwards, 1865, traditionally considered a majid (Griffin 1966; Kurata 1969; Ingle 1979; Clark 1986), was placed in the family Pisidae by Griffin and Tranter (1986). However, due to the lack of larval information we could not make any assertion on the position of the genus, and we do not include Acanthophrys in the comparisons.
Based on morphological comparisons (Tables I and II), several characters appear to be diagnostic for some majid genera. For species of Maja, the presence of rostral, dorsal and lateral spines (also found in Leptomithrax longipes (Thomson, 1902)) and of dorsolateral processes on the second and third abdominal somites in both zoeal stages is diagnostic for this genus. For Schizophrys, the setation of the basial endite of the maxillule in zoea I, and the setation of the basis of the first maxilliped and of the basial endite of the maxilla in both zoeal stages, are diagnostic.
Phylogenetic relationships
In recent phylogenetic analyses of majoids using larval characters (Marques & Pohle 1998; Pohle & Marques 2000), the family Majidae appears to be one of the few families that have larval support for their monophyly. The authors base the monophyly of Majidae on the exopod of the antenna bearing a well-developed terminal spine, half or more the length of the apical setae but not extending beyond the tip of the setae in the zoea; the proximal coxal lobe of the maxilla in zoea II bearing three setae; the scaphognathite of the maxilla bearing 21-28 setae in zoea II; and the fork of the telson bearing three lateral spines (Tables I and II). Except for the presence of one rather than three telson furcal spines in P. varians and S. aspera, and the number of setae on the scaphognathite of the maxilla of zoea II in N. minor (Filhol, 1885), the other majid species are in agreement with the synapomorphies determined by Marques and Pohle (1998, 2003) and Pohle and Marques (2000) for Majidae. We therefore believe that the characters cited above, based on a cladistic analysis, could better define this family and should be used instead of Ingle's (1979) groups.
|
2018-12-11T07:15:20.508Z
|
2006-12-29T00:00:00.000
|
{
"year": 2006,
"sha1": "f7bd4f19d92354cbfd53aff6a53dc23d538a8e97",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/5230619/files/source.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "293e62d96290e95d482dfecafd1c031f811fb692",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
258812787
|
pes2o/s2orc
|
v3-fos-license
|
Nanocomposites Based on Spin-Crossover Nanoparticles and Silica-Coated Gold Nanorods: A Nonlinear Optical Study
A nanocomposite based on silica-coated AuNRs with the aminated silica-covered spin-crossover nanoparticles (SCO NPs) of the 1D iron(II) coordination polymer with the formula [Fe(Htrz)2(trz)](BF4) is presented. For the synthesis of the SCO NPs, the reverse micelle method was used, while the gold nanorods (AuNRs) were prepared with the aspect ratio AR = 6.0 using the seeded-growth method and a binary surfactant mixture composed of cetyltrimethylammonium bromide (CTAB) and sodium oleate (NaOL). The final nanocomposite was prepared using the heteroaggregation method of combining different amounts of SCO NPs with the AuNRs. The nonlinear optical (NLO) properties of the hybrid AuNRs coated with different amounts of SCO NPs were studied in detail by means of the Z-scan technique, revealing that the third-order NLO properties of the AuNRs@SCO are dependent on the amount of SCO NPs grafted onto them. However, due to the resonant nature of the excitation, SCO-induced NLO switching was not observed.
Recently, our group presented a novel nanosynthetic protocol [27] based on a reverse micelle method for the synthesis of water-soluble aminated silica hybrid SCO nanoparticles (NPs) of the 1D coordination polymer [Fe II (Htrz) 2 (trz)](BF 4 ), where Htrz = 1H-1,2,4-triazole, displaying stable aqueous and ethanol colloidal dispersions and retaining their SCO characteristics, accompanied by relevant thermochromic features (from colorless for the HS state to purple for the LS state). The novelty of this method is based on a two-step hydrolysis/condensation mechanism: tetraethyl orthosilicate (TEOS) in the first step and an appropriate mixture of 3-aminopropyltriethoxysilane (APTES) with TEOS in the second step, leading to the final aminated silica hybrid SCO NPs.
Introducing functional groups onto the silica surfaces of SCO NPs and grafting luminescence molecules [30,31] and/or gold NPs [32][33][34][35][36][37] is a promising strategy for obtaining multifunctional nanoplatforms. Especially in the case of hybrid SCO@Au NPs, the photothermal heating properties of the AuNPs in both the visible and the near IR, due to the nonradiative decay of surface plasmons, significantly reduce the optical power required to trigger the SCO phenomenon [36,37]. Recently, it was found that nanocomposites of SCO@Au NPs, where the gold nanoparticles have the morphology of nanorods (AuNRs), can improve the diffusion of heat between the AuNRs and SCO NPs. The broad longitudinal surface plasmon resonance (LSPR) band in the near IR of AuNRs was found to be temperature-dependent upon the HS-LS switching of the SCO NPs [34]. A less pronounced shift of the SPR peak has been observed on hybrid thin films of Au@SCO and/or related heterostructures [35,37].
In all the above studies, the scientific interest has been mainly focused on the monitoring of the SPR dependence on the SCO phenomenon, while the effects of the latter on the nonlinear optical (NLO) response of SCO@Au NPs have been left relatively unexplored so far. Bearing in mind that the experimental evidence of SCO-induced NLO switching in iron(II) complexes is still in its infancy [38], the need for more experimental studies and evidence aiming to correlate the SCO phenomenon with the NLO response is of the utmost importance for applications related to NLO-switching phenomena. A promising scenario for developing NLO devices/switches is the incorporation of suitable SCO materials with thermal hysteresis loops centered at room temperature, accompanied by NLO properties that are temperature-sensitive.
From this point of view, in the present work, the influence of the SCO phenomenon on the NLO response of a nanocomposite consisting of silica-coated AuNRs covered with aminated silica hybrid SCO NPs of the 1D iron(II) coordination polymer is investigated. For this purpose, a heteroaggregation method for the combination of the SCO NPs with the AuNRs was employed. AuNRs carrying different loads of SCO NPs were prepared and their NLO response was studied using the Z-scan technique [39]. For the Z-scan measurements, the visible (i.e., 532 nm) and infrared (i.e., 1064 nm) outputs of a 4 ns Q-switched Nd:YAG laser were selected for use because they allow for the resonant excitation of the transverse and longitudinal surface plasmon resonances (SPRs) of the AuNRs located at ~530 and 1060 nm, respectively, allowing for their efficient photothermal heating.
General Synthetic Aspects
The synthesis and characterization of the aminated silica hybrid SCO NPs have been presented elsewhere [27], while the general synthetic protocol is also shown in Figure 1. In general, the reverse micellar method was used, according to which two water phases, (a) the metal salt Fe(BF 4 ) 2 ·6H 2 O and (b) the ligand HTrz in a 1:3 molar ratio, were mixed with the organic phase consisting of the surfactant (Triton X-100) and the co-surfactants n-hexanol and cyclohexane, the latter also playing the role of the organic solvent. To further functionalize the SCO NPs with aminated silica shells, a two-step hydrolysis/condensation protocol was followed, according to which (a) 100 µL TEOS was added to the two aqueous phases and (b) 100 µL APTES (3-aminopropyltriethoxysilane) and 100 µL TEOS were added after mixing the two water-in-oil microemulsions. According to the size distributions obtained from TEM images [27], the size of the SCO NPs in EtOH is close to 45 nm, with a spherical morphology (Figure 1a).

A seed-mediated growth method was used for the synthesis of the rod-like gold NPs using a mixture of surfactants (CTAB/NaOL) [40]. Absorption peaks at ca. 518 nm and ca. 1027 nm were revealed in the absorption spectrum of the AuNRs, denoted as the transverse SPR (t-SPR) and the longitudinal SPR (l-SPR), respectively. The aspect ratio (AR) of the AuNRs is directly related to the position of the l-SPR peak, according to Equation (1) [41], where R is the aspect ratio; the AR calculated in this way is close to 6.2. According to the TEM images, the mean length and width of the rod-shaped gold NPs are 90.0 ± 8.0 nm and 15.3 ± 3.4 nm, respectively, giving an aspect ratio of AR ~6.1, close to the value obtained from the UV-VIS-NIR measurements (Figure 2 and Figure S1 in the Supporting Information file).

The silica coating of the AuNRs was achieved through the hydrolysis/condensation of TEOS in a basic medium (pH = 10.5–11.0). The mesostructured growth of the silica was obtained using the CTAB/NaOL as a template, and under these highly basic conditions both hydrolysis and condensation occur at the same time, leading to an almost vertical orientation of the mesoporosity of the silica shell with respect to the gold core (Figure 2b,c) [42]. The effective full coating of the AuNRs with SiO2 was further supported by the fact that the colloidal stability of the core–shell nanoparticles was maintained in DMF solution, with no aggregation phenomena. The absorption spectra of the AuNRs@SiO2 dispersions present an l-SPR band at ca. 1057 nm, revealing a redshift of 30 nm due to the higher refractive index of the mesoporous SiO2, as shown in Figure 2e [43]. Furthermore, the effective coverage of the AuNRs with SiO2 is reflected in the ζ-potential measurements, where the ζ-potential decreased considerably from the high positive value of 36.7 mV (CTAB-covered AuNRs) to a negative value close to −7.2 mV for AuNRs@SiO2 (Figure 2d).

A heteroaggregation procedure [44] was followed to coat the silica-covered AuNRs@SiO2 dispersed in DMF with the aminated silica-covered SCO NPs dispersed in EtOH by simply mixing the two solutions. In this nonaqueous mixing procedure, the SCO NPs are destabilized in the DMF solution, assembling a dense coating on the surfaces of the AuNRs@SiO2. Three amounts of an ethanol solution of the SCO NPs were used (100 µL, 150 µL, and 200 µL) for mixing with the DMF solution of AuNRs@SiO2, and the final colloidal dispersions in DMF were stable for months. It was found that for amounts greater than 200 µL, the resulting colloidal dispersion in DMF was unstable and quickly decolored. In all three cases, the AuNRs are surrounded by SCO NPs, creating aggregated patterns (Figure 3). This is probably because the SCO NPs are aggregated in DMF solution, with no well-defined morphologies (Figure 1c).
UV-Vis-NIR Absorption Measurements of AuNRs@SCO
The UV-Vis-NIR absorption spectra of the DMF solutions of AuNRs@SiO2 and AuNRs@SCO are presented in Figure 4. The transverse and longitudinal surface plasmon resonances (SPRs) of the AuNRs@SCO hybrids were located at 520 and 1060 nm, respectively. The absorption spectra of the AuNRs@SCO dispersions present a systematic small redshift of the l-SPR peak by 9 nm, compared to the respective AuNRs@SiO 2 dispersions, attributed to the presence of the SCO NPs on their surfaces. This redshift is commonly observed when iron NPs are deposited onto AuNRs [45,46], and it is attributed to the higher refractive index of iron NPs than that of mesoporous SiO 2 (1.28-1.45) [47]. In addition, as the amount of SCO NPs was increased, a decrease in the absorbance was observed. Product loss during centrifugation could be responsible for this decrease [44].
(Caption of Figure 4: the arrows indicate the laser excitation wavelengths, i.e., 532 and 1064 nm; the right panel shows a normalized view of the l-SPR peaks.)
Nonlinear Optical Properties of AuNRs@SCO
The NLO properties of the AuNRs@SCOs were systematically studied under resonant excitation conditions (i.e., by exciting them at the transverse (i.e., 532 nm) and longitudinal (i.e., 1064 nm) SPR peaks). In Figure 5, some representative OA and CA Z-scans of AuNRs@SCO dispersed in DMF are presented, obtained under 4 ns laser excitation at 532 and 1064 nm; general information about the Z-scan experimental setup and theory is presented in the Supporting Information file (Section S1). It should be noted that the solvent (i.e., DMF) did not exhibit any NLO response for the range of laser intensities employed in the present experiments. In addition, the NLO response of the SCO NPs in ethanol/DMF dispersions was also investigated under identical experimental conditions to those used for the AuNRs@SCO dispersions. However, no significant NLO response was observed. Therefore, the Z-scan recordings of the AuNRs@SCO dispersions shown in Figure 5 directly reflect the NLO response of the AuNRs@SCO hybrid material. The symbols shown in Figure 6 correspond to the experimental data points, while the continuous lines correspond to the theoretical fitting of the OA and CA recordings by Equations (S1) and (S3), respectively. For the accurate determination of the NLO parameters of the AuNRs@SCOs, the Z-scan measurements were performed for a wide range of incident laser intensities, ranging from 10 to 65 MW cm−2. As can be seen from Figure 5a, the AuNRs@SCO dispersions were found to exhibit saturable absorption (SA) behavior under infrared (i.e., 1064 nm) laser excitation, corresponding to a negative nonlinear absorption coefficient (β). The observed SA behavior is attributed to the resonant excitation conditions met. In fact, the electric field of the resonant laser radiation induces the effective collective oscillation of the conduction-band electrons and the holes in the valence bands of the AuNRs. In general, excitation in the vicinity of the l-SPR results in the efficient excitation of electrons from the valence to the conduction band, resulting in empty states of positive charge (i.e., holes) in the valence band accompanied by photoexcited electron-hole pairs, which are converted to hot carriers. Next, on a time scale of 100 fs to 1 ps [48], the hot carriers redistribute their energy among carriers possessing lower energies through electron-electron scattering [49]. As a result, the hot carriers cool down via a relaxation process, building an equilibrium Fermi-Dirac-like distribution. As the sample approaches the focal plane (i.e., experiences higher laser intensity), the interband transitions become more efficient, causing the bleaching of the ground-state plasmon band, expressed as the saturation of the absorption (i.e., SA behavior) [50]. Under visible excitation (i.e., 532 nm), all the AuNRs@SCO samples exhibited reverse saturable absorption (RSA) (see, e.g., Figure 5c), corresponding to a positive nonlinear absorption coefficient (β). The observed RSA behavior can be described in terms of two-photon absorption (TPA) and/or excited-state absorption (ESA) [51].
In general, TPA is a relatively weak nonlinear process, enabled by virtual states, resulting in the relatively weak (or negligible) depopulation of the ground state, thus giving rise to an intensity-independent nonlinear absorption coefficient (β). On the contrary, ESA is a stronger process, as it involves real intermediate states, leading to the more efficient depletion of the ground state, giving rise to an intensity-dependent nonlinear absorption coefficient (β) [52]. In addition, as has been discussed elsewhere [51], ESA is more likely to occur under ns excitation. To shed more light on the mechanism responsible for the RSA behavior, Z-scan measurements were performed under different laser intensities, and the nonlinear absorption coefficient (β) of the AuNRs@SCOs was determined. The variation in the β values with the incident laser intensity is presented in Figure 6. As can be seen, the nonlinear absorption coefficient (β) is clearly intensity-dependent, suggesting that ESA is most probably the main mechanism responsible for the RSA behavior under 532 nm laser excitation.
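Equations (S1) and (S3), used above to fit the OA and CA recordings, are given in the Supporting Information and are not reproduced here. As an illustration only (not the authors' analysis code, and with the intensity and sample-thickness values chosen arbitrarily), the following Python sketch shows how a nonlinear absorption coefficient β can be extracted from an open-aperture trace using the standard thin-sample Sheik-Bahae expression, in which β > 0 produces an RSA dip and β < 0 an SA peak:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative experimental constants (assumptions, not values from the paper)
I0 = 30e6 * 1e4      # peak on-axis intensity: 30 MW/cm^2 expressed in W/m^2
L_EFF = 1e-3         # effective sample thickness (m), ~1 mm cuvette with low linear absorption

def oa_transmittance(z, beta, z0):
    """Lowest-order open-aperture transmittance of a thin sample:
    T(z) = 1 - q0(z) / (2*sqrt(2)),  with  q0(z) = beta * I0 * L_eff / (1 + (z/z0)^2)."""
    q0 = beta * I0 * L_EFF / (1.0 + (z / z0) ** 2)
    return 1.0 - q0 / (2.0 * np.sqrt(2.0))

# z positions (m) and a synthetic, slightly noisy RSA-like trace standing in for a measurement
z = np.linspace(-0.04, 0.04, 81)
data = oa_transmittance(z, beta=2e-10, z0=8e-3) + np.random.normal(0.0, 2e-3, z.size)

(beta_fit, z0_fit), _ = curve_fit(oa_transmittance, z, data, p0=(1e-10, 5e-3))
print(f"beta = {beta_fit:.2e} m/W, Rayleigh range z0 = {z0_fit:.2e} m")
```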
As far as the NLO refractive response of the AuNRs@SCOs is concerned, all the dispersions were found to exhibit self-defocusing behavior, as depicted in Figure 6b,d, corresponding to a negative nonlinear refractive parameter (γ′), as evidenced by the characteristic peak-valley configuration of the CA Z-scans. Some representative CA Z-scans, obtained under both excitation conditions (i.e., 532 and 1064 nm), are shown in Figure 5b,d. From these measurements, γ′ was determined. Then, the nonlinear refractive index (n2) could be calculated from Equation (2), where c is the speed of light (in m s−1) and n0 is the linear refractive index at the laser excitation wavelength.
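Equation (2) itself is given in the original article; a conversion commonly used in ns Z-scan studies, and consistent with the quantities named here (c in m s−1, n0 dimensionless, γ′ in SI units of m2 W−1), is n2 (esu) = (c n0 / 40π) γ′ (SI). This is stated as an assumption about the intended form of Equation (2), not as a quotation of it; with γ′ measured in m2 W−1 it yields n2 in electrostatic units.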
The determined values of the nonlinear absorption coefficient (β), nonlinear refractive index parameter (γ′), and nonlinear refractive index (n2) are listed in Table 1. The imaginary and real parts of the third-order susceptibility (i.e., Imχ(3) and Reχ(3)) were calculated using Equations (S2) and (S4), respectively. Finally, by inserting the values obtained from Equations (S2) and (S4) into Equation (S5), the magnitude of the nonlinear third-order susceptibility (χ(3)) was determined and is also listed in Table 1. To make it easier to compare the values of the NLO parameters of the AuNRs@SCOs with different loads of SCOs, the shown values have been normalized by the corresponding linear absorption coefficient (α0) at the respective laser excitation wavelength. Thus, they all refer to an absorption coefficient α0 = 1 cm−1. As can be seen from this table, the magnitudes of all the NLO parameters were found to increase with the load of SCO NPs. A graphical representation of this trend is presented in Figure 7. As can be seen from the plots in Figure 7, the variation in the NLO response of the AuNRs@SCOs (i.e., third-order susceptibility (χ(3))) versus the SCO load is slightly larger in the case of infrared (i.e., at 1064 nm) excitation.
Investigation of SCO Phenomenon in AuNRs@SCO and Dependence of NLO Response on Spin Transition
The SPR dependence on the SCO phenomenon, for the amounts of 150 µL and 200 µL of SCO NPs in the final hybrid AuNRs@SCO, was investigated and is shown in Figure 8. To assess the influence of temperature on the samples, a specific methodology was employed. Initially, 300 µL of the sample was transferred into a 1 mm thick cuvette, which was then placed within a homemade copper sample holder wrapped with a constantan wire, allowing for its controlled heating via an ITC502S Oxford Instruments temperature controller. The temperature was stabilized within ±0.1 °C. Once the temperature reached ~80 °C, the cuvette was removed from the sample holder and the UV-VIS-NIR spectra were recorded. For both samples, a blue shift of the l-SPR appeared when the temperature of the DMF solution of AuNRs@SCO was at 80 °C. The value of the shift was determined to be about 20 and 30 nm for the 150 µL and 200 µL samples, respectively. This shift depends on the amount of SCO NPs and is related to the spin transition from LS to HS, as the refractive index in the LS state is higher than in the HS state. Immediately after recording the absorption spectra of the samples, to avoid cooling of the solutions, measurements of the NLO response of the AuNRs@SCO samples were performed. However, the measurements revealed negligible changes in their NLO properties compared to those measured under room-temperature conditions. In fact, the determined NLO parameters (nonlinear absorption coefficient and nonlinear refractive index (n2)) were found to be almost identical (within the experimental accuracy) to those determined in the case of the room-temperature samples. It should be noted at this point that because the laser excitation of the samples was performed at 1064 nm, full resonant excitation conditions were met. Although the applied heating of the samples resulted in a significant shift of the corresponding l-SPR peaks, the resonant excitation conditions are still valid, as can be seen from the enlarged view of the absorption spectra shown in Figure 8b,d. Thus, most probably, any change in the NLO properties of the samples arising from the SCO phenomenon cannot be observed due to the much stronger resonant excitation contributions, effectively preventing the observation of the expected much weaker NLO contribution due to the SCO phenomenon. Hence, it can be concluded that although heating can trigger the SCO phenomenon, its possible contribution to the NLO response of the hybrid samples remains unobserved due to the resonant character of the excitation.
The TEM study was performed utilizing an FEI CM20 TEM operating at 200 kV. TEM specimens were prepared by drop casting a 3 µL droplet of AuNRs@SCO nanoparticle suspension in DMF on a carbon-coated Cu TEM grid. The size of the particles was determined with "manual counting" using ImageJ software v 1.54d (https://imagej.net accessed on 10 April 2023). The UV-VIS-NIR measurements were conducted utilizing a double-beam Jasco V-670 spectrophotometer.
Synthesis of Gold Nanorods (AuNRs) and Silica-Covered AuNRs@SiO 2
The synthetic protocol of the AuNRs is presented in detail elsewhere [53]. An amount of 75 µL of 0.1 M aqueous CTAB solution and 250 µL of 0.1 M aqueous NaOH solution were injected into 10 mL of the previously prepared AuNR solution (O.D. = 2.9, C[Au] = 260.6 µg/mL), which was kept at 29 °C to avoid CTAB crystallization and stirred for 5 min. A total of 120 µL TEOS was then added at a rate of 30 µL/30 min. The resulting solution was stirred for about 48 h. The silica-coated AuNRs were washed three times with H 2 O with centrifugation at 6000 rpm for 20 min, and the resulting pellet was redispersed in DMF.

An aqueous solution of Fe(BF 4 ) 2 ·6H 2 O (337 mg, 1.00 mmol) in 0.5 mL of deionized H 2 O and 0.1 mL of TEOS was added to a solution containing Triton X-100 (1.8 mL), n-hexanol (1.8 mL), and cyclohexane (7.5 mL). The resulting mixture was stirred for 30 min until the formation of a clear water-in-oil microemulsion. A similar procedure was applied to 1,2,4-1H-triazole (HTrz) (210 mg, 3.00 mmol) in 0.5 mL of deionized H 2 O. Both microemulsions were quickly combined, and the mixture was stirred for 24 h in the dark until the addition of 100 µL APTES. After 30 min of stirring, 100 µL TEOS was added, and the stirring continued for a further 24 h, followed by the addition of acetone to break the microemulsion. The precipitated nanoparticles were isolated by centrifugation at 6000 rpm, washed several times with EtOH/acetone, and finally dried under vacuum. Anal. Calcd.
Conclusions
A facile method for the preparation of a nanocomposite based on silica-coated AuNRs with the aminated silica-covered spin-crossover nanoparticles (SCO NPs) of the 1D iron(II) coordination polymer with the formula [Fe(Htrz) 2 (trz)](BF 4 ) is presented. The nonlinear optical (NLO) properties of the hybrid AuNRs coated with different amounts of SCO NPs were studied in detail by means of the Z-scan technique, revealing that the third-order NLO properties of the AuNRs@SCO are dependent on the amount of SCO NPs grafted onto them. The triggering of the SCO phenomenon was possible using a heating process, and a shift of the l-SPR peak (20-30 nm) was observed at 80 °C, related to the spin transition from LS to HS. However, changes in the NLO response were not observed, most probably because they were masked by the stronger resonant NLO response, preventing their individual observation and quantification. A possible scenario for the effective monitoring of the NLO properties of SCO materials could be based on avoiding the thermal triggering of the SCO phenomenon and/or employing laser excitation that is nonresonant with the plasmonic features. These findings may pave the way for the development of new strategies for monitoring the SCO dependence of NLO properties for applications related to NLO-switching phenomena.
|
2023-05-21T15:06:11.018Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "74ad697ac1825ef95dc8b6892cfa16c817686064",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/molecules28104200",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f637c1aa79b65036ed684f65d168b2cabbeec73",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
49691156
|
pes2o/s2orc
|
v3-fos-license
|
First-order interpretations of bounded expansion classes
The notion of bounded expansion captures uniform sparsity of graph classes and renders various algorithmic problems that are hard in general tractable. In particular, the model-checking problem for first-order logic is fixed-parameter tractable over such graph classes. With the aim of generalizing such results to dense graphs, we introduce classes of graphs with structurally bounded expansion, defined as first-order interpretations of classes of bounded expansion. As a first step towards their algorithmic treatment, we provide their characterization analogous to the characterization of classes of bounded expansion via low treedepth decompositions, replacing treedepth by its dense analogue called shrubdepth.
Introduction
The interplay of methods from logic and graph theory has led to many important results in theoretical computer science, notably in algorithmics and complexity theory. The combination of logic and algorithmic graph theory is particularly fruitful in the area of algorithmic meta-theorems. Algorithmic meta-theorems are results of the form: every computational problem definable in a logic L can be solved efficiently on any class of structures satisfying a property P. In other words, these theorems show that the model-checking problem for the logic L on any class C satisfying P can be solved efficiently, where efficiency usually means fixed-parameter tractability.
The archetypal example of an algorithmic meta-theorem is Courcelle's theorem [1,2], which states that model-checking a formula ϕ of monadic second-order logic can be solved in time f (ϕ) · n on any graph with n vertices which comes from a fixed class of graphs of bounded treewidth, for some computable function f . Seese [33] proved an analogue of Courcelle's result for the model-checking problem of first-order logic on any class of graphs of bounded degree. Following this result, the complexity of first-order model-checking on specific classes of graphs has been studied extensively in the literature. See e.g. [5-7, 9-12, 15, 19, 20, 22, 24-26, 33, 34]. One of the main goals of this line of research is to find a structural property P which precisely defines those graph classes C for which model checking of first-order logic is tractable.
So far, research on algorithmic meta-theorems has focused predominantly on sparse classes of graphs, such as classes of bounded treewidth, excluding a minor or which have bounded expansion or are nowhere dense. The concepts of bounded expansion and nowhere denseness were introduced by Nešetřil and Ossona de Mendez with the goal of capturing the intuitive notion of sparseness. See [31] for an extensive coverage of these notions. The large number of equivalent ways in which they can be defined, using notions from combinatorics, theoretical computer science or logic, indicates that these two concepts capture some very natural limits of "well-behavedness" and algorithmic tractability. For instance, Grohe et al. [22] proved that if C is a class of graphs closed under taking subgraphs then model checking first-order logic on C is tractable if, and only if, C is nowhere dense (the lower bound was proved in [9]). As far as algorithmic meta-theorems for fixed-parameter tractability of first-order model-checking are concerned, this result completely solves the case for graph classes which are closed under taking subgraphs, which is a reasonable requirement for sparse but not for dense graph classes.
Consequently, research in this area has shifted towards studying the dense case, which is much less understood. While there are several examples of algorithmic meta-theorems on dense classes, such as for monadic second-order logic on classes of bounded cliquewidth [3] or for first-order logic on interval graphs, partial orders, classes of bounded shrubdepth and other classes, see e.g. [13][14][15][17], a general theory of meta-theorems for dense classes is still missing. Moreover, unlike in the sparse case, there is no canonical hierarchy of dense graph classes which could guide research on algorithmic meta-theorems in the dense world.
Hence, the main research challenge for dense model-checking is not only to prove tractability results and to develop the necessary logical and algorithmic tools. It is at least as important to define and analyze promising candidates for "structurally simple" classes of graph classes which are not necessarily sparse. This is the main motivation for the research in this paper. Since bounded expansion and nowhere denseness form the limits for tractability of certain problems in the sparse case, any extension of the theory should provide notions which collapse to bounded expansion or nowhere denseness, under the additional assumption that the classes are closed under taking subgraphs. Therefore, a natural way of seeking such notions is to base them on the existing notions of bounded expansion or nowhere denseness.
In this paper, we take bounded expansion classes as a starting point and study two different ways of generalizing them towards dense graph classes preserving their good properties. In particular, we define and analyze classes of graphs obtained from bounded expansion classes by means of first-order interpretations and classes of graphs obtained by generalizing another, more combinatorial characterization of bounded expansion in terms of low treedepth colorings into the dense world. Our main structural result shows that these two very different ways of generalizing bounded expansion into the dense setting lead to the same classes of graphs. This is explained in greater detail below.
Interpretations and transductions. One possible way of constructing "well-behaved" and "structurally simple" classes of graphs is to use logical interpretations, or the related concept of transductions studied in formal language and automata theory. For our purpose, transductions are more convenient and we will use them in this paper. Intuitively, a transduction is a logically defined operation which takes a structure as input and nondeterministically produces as output a target structure. In this paper we use first-order transductions, which involve first-order formulas (see Section 2 for details). Two examples of such transductions are graph complementation, and the squaring operation which, given a graph G, adds an edge between every pair of vertices at distance 2 from each other.
We postulate that if we start with a "structurally simple" class C of graphs, e.g. a class of bounded expansion or a nowhere dense class, and then study the graph classes D which can be obtained from C by first-order transductions, then the resulting classes should still have a simple structure and thus be well-behaved algorithmically as well as in terms of logic. In other words, the resulting classes are interesting graph classes with good algorithmic and logical properties, and which are certainly not sparse in general. For instance, a useful feature of transductions is that they provide a canonical way of reducing model-checking problems from the generated classes D to the original class C , provided that given a graph H ∈ D, we can effectively compute some graph G ∈ C that is mapped to H by the transduction. In general, this is a hard problem, requiring a combinatorial understanding of the structure of the resulting classes D.
The above principle has so far been successfully applied in the setting of graph classes of bounded treewidth and monadic second-order transductions: it was shown by Courcelle, Makowsky and Rotics [4] that transductions of classes of bounded treewidth can be combinatorially characterized as classes of bounded cliquewidth. This, combined with Oum's result [32] gives a fixed-parameter algorithm for model-checking monadic second-order logic on classes of bounded cliquewidth. More recently, the same principle, but for first-order logic, has been applied to graphs of bounded degree [14], leading to a combinatorial characterization of first-order transductions of such classes, and to a model-checking algorithm.
Applying our postulate to bounded expansion classes yields the central notion of this paper: a class of graphs has structurally bounded expansion if it is the image of a class of bounded expansion under some fixed first-order transduction. This paper is a step towards a combinatorial, algorithmic, and logical understanding of such graph classes.
Low Shrubdepth Covers. The method of transductions is one way of constructing complex graphs out of simple graphs. A more combinatorial approach is the method of decompositions (or colorings) [31], which we reformulate below in terms of covers. This method can be used to provide a characterization of bounded expansion classes in terms of very simple graph classes, namely classes of bounded treedepth. A class of graphs has bounded treedepth if there is a bound on the length of simple paths in the graphs in the class (see Section 2 for a different but equivalent definition). A class C has low treedepth covers if for every number p ∈ N there is a number N and a class of bounded treedepth T such that for every G ∈ C , the vertex set V (G) can be covered by N sets U 1 , . . . , U N so that every set X ⊆ V (G) of at most p vertices is contained in some U i , and for each i = 1, . . . , N , the subgraph of G induced by U i belongs to T . A consequence of a result by Nešetřil and Ossona de Mendez [29] on a related notion of low treedepth colorings is that a graph class has bounded expansion if, and only if, it has low treedepth covers.
The decomposition method allows algorithmic, logical, and structural properties to be lifted from classes of bounded treedepth to classes of bounded expansion. For instance, this was used to show the tractability of first-order model-checking on bounded expansion classes [8,21].
An analogue of treedepth in the dense world is the concept of shrubdepth, introduced in [17]. Shrubdepth shares many of the good algorithmic and logical properties of treedepth. This notion is defined combinatorially, in the spirit of the definition of cliquewidth, but can be also characterized by logical means, as first-order transductions of classes of bounded treedepth. Applying the method of decompositions to the notion of shrubdepth leads to the following definition. A class C of graphs has low shrubdepth covers if for every number p ∈ N there is a number N and a class B of bounded shrubdepth such that for every G ∈ C , there is a p-cover of G consisting of N sets U 1 , . . . , U N ⊆ V (G), so that every set X ⊆ V (G) of at most p vertices is contained in some U i and for each i = 1, . . . , N , the subgraph of G induced by U i belongs to B. Shrubdepth properly generalizes treedepth and consequently classes admitting low shrubdepth covers properly extend bounded expansion classes.
It was observed earlier [27] that for every fixed r ∈ N and every class C of bounded expansion, the class of rth power graphs G r of graphs from C (the rth power of a graph is a simple first-order transduction) admits low shrubdepth colorings.
Our contributions. Our main result, Theorem 15, states that the two notions introduced above are the same: a class of graphs C has structurally bounded expansion if, and only if, it has low shrubdepth covers. That is, transductions of classes of bounded expansion are the same as classes with low shrubdepth covers (cf. Figure 1; there, equality (1) is by [17], equality (2) is by [29], and the remaining equality is the main result of this paper, Theorem 15). This gives a combinatorial characterization of structurally bounded expansion classes, which is an important step towards their algorithmic treatment.
One of the key ingredients of our proof is a quantifier-elimination result (Theorem 16) for transductions on classes of structurally bounded expansion. This result strengthens in several ways similar results for bounded expansion classes due to Dvořák, Král', and Thomas [8], Grohe and Kreutzer [21], and Kazana and Segoufin [26]. Our assumption is more general, as they assume that C has bounded expansion, whereas here C is only required to have low shrubdepth covers. Also, our conclusion is stronger, as their results provide quantifier-free formulas involving some unary functions and unary predicates which are computable algorithmically, whereas our result shows that these functions can be defined using very restricted transductions. Quantifier-elimination results of this type proved to be useful for the model-checking problem on bounded expansion classes [8,21,26], and this is also the case here. As explained earlier, the transduction method allows one to reduce the model-checking problem to the problem of finding inverse images under transductions, which is a hard problem in general and depends very much on the specific transduction. On the other hand, as we show, the cover method allows one to reduce the model-checking problem for classes with low shrubdepth covers to the problem of computing a bounded shrubdepth cover of a given graph. In fact, as a consequence of our proof, in Theorem 40 we show that it is enough to compute a 2-cover of a given graph G from a structurally bounded expansion class, in order to obtain an algorithm for the model-checking problem for such classes. We conjecture that such an algorithm exists and that therefore first-order model-checking is fixed-parameter tractable on any class of graphs of structurally bounded expansion. We leave this problem for future work.
Organization. In Section 2 we collect basic facts about logic, transductions, treedepth, shrubdepth and the notion of bounded expansion. In Section 3 we provide the formal definitions of structurally bounded expansion classes and classes with low shrubdepth covers, and state the main results and their proofs using lemmas which are proved in the following three sections. We consider algorithmic aspects in Section 7 and conclude in Section 8. We aim to present an easy to follow proof of our main result. For this reason, we present proofs of the key lemmas in the main body of the paper, while rather technical results that disturb the flow of ideas are presented in full detail in the appendix.
Preliminaries
Basic notation. We use standard graph notation. All graphs considered in this paper are undirected, finite, and simple; that is, we do not allow loops or multiple edges with the same pair of endpoints. We follow the convention that the composition of an empty sequence of (partial) functions is the identity function. For an integer k, we denote [k] = {1, . . . , k}.
Structures, logic, and transductions
Structures and logic. A signature Σ is a finite set of relation symbols, each with a prescribed arity that is a non-negative integer, and unary function symbols. A structure A over Σ consists of a finite universe V (A) and interpretations of the symbols from the signature: each relation symbol R ∈ Σ, say of arity k, is interpreted as a k-ary relation R^A on V (A), and each unary function symbol f ∈ Σ is interpreted as a partial unary function f^A on V (A). We drop the superscript when the structure is clear from the context, thus identifying each symbol with its interpretation. If A is a structure and X ⊆ V (A) then we define the substructure of A induced by X in the usual way, except that a unary function f (x) in A becomes undefined on all x ∈ X for which f (x) ∉ X. The Gaifman graph of a structure A is the graph with vertex set V (A) where two elements u, v ∈ A are adjacent if and only if either u and v appear together in some tuple in some relation of A, or f (u) = v or f (v) = u for some unary function f of A. For a signature Σ, we consider standard first-order logic over Σ. Let us clarify the usage of function symbols. A term τ (x) is a finite composition of function symbols applied to a variable x. In a structure A, given an evaluation of x, the term τ (x) either evaluates to some element of A in the natural sense, or is undefined if during the evaluation we encounter an element that does not belong to the domain of the function that is to be applied next. In first-order logic over Σ we allow the usage of atomic formulas of the following form:
• R(τ 1 (x 1 ), . . . , τ k (x k )) for a relation symbol R of arity k, terms τ 1 , . . . , τ k , and variables x 1 , . . . , x k ;
• τ 1 (x 1 ) = τ 2 (x 2 ) for terms τ 1 , τ 2 and variables x 1 , x 2 ; and
• dom f (τ (x)) for a term τ and variable x.
Here, the predicate dom f (τ (x)) checks whether τ (x) belongs to the domain of f . The semantics are defined as usual, however an atomic formula is false if any of the terms involved is undefined. Based on these atomic formulas, the syntax and semantics of first order logic is defined in the expected way.
Graphs, colored graphs and trees. Graphs can be viewed as finite structures over the signature consisting of a binary relation symbol E, interpreted as the edge relation, in the usual way. For a finite label set Λ, by a Λ-colored graph we mean a graph enriched by a unary predicate U λ for every λ ∈ Λ. We will follow the convention that if C is a class of colored graphs, then we implicitly assume that all graphs in C are over the same fixed finite signature. A rooted forest is an acyclic graph F together with a unary predicate R ⊆ V (F ) selecting one root in each connected component of F . A tree is a connected forest. The depth of a node x in a rooted forest F is the distance between x and the root in the connected component of x in F . The depth of a forest is the largest depth of any of its nodes. The least common ancestor of nodes x and y in a rooted tree is the common ancestor of x and y that has the largest depth.
Transductions. We now define the notion of transduction used in the sequel. A transduction is a special type of first-order interpretation with set parameters, which we see here (from a computational point of view) as a nondeterministic operation that maps input structures to output structures. Transductions are defined as compositions of atomic operations listed below. An extension operation is parameterized by a first-order formula ϕ(x 1 , . . . , x k ) and a relation symbol R. Given an input structure A, it outputs the structure A extended by the relation R interpreted as the set of k-tuples of elements satisfying ϕ in A. A restriction operation is parameterized by a unary formula ψ(x). Applied to a structure A, it outputs the substructure of A induced by all elements satisfying ψ. A reduct operation is parameterized by a relation symbol R, and results in removing the relation R from the input structure. Copying is an operation which, given a structure A, outputs a disjoint union of two copies of A extended with a new unary predicate which marks the newly created vertices, and a symmetric binary relation which connects each vertex with its copy. A function extension operation is parameterized by a binary formula ϕ(x, y) and a function symbol f , and extends a given input structure by a partial function f defined as follows: f (x) = y if y is the unique vertex such that ϕ(x, y) holds. Note that if there is no such y or more than one such y, then f (x) is undefined. Finally, suppose σ is a function that maps each structure A to a nonempty family σ(A) of subsets of its universe. A unary lift operation, parameterized by σ, takes as input a structure A and outputs the structure A enriched by a unary predicate X interpreted by a nondeterministically chosen set U ∈ σ(A).
We remark that function extension operations can be simulated by extension operations, defining the graphs of the functions in the obvious way. They are, however, useful as a means of extending the expressive power of transductions in which only quantifier-free formulas are allowed, as defined below.
Transductions are defined inductively: every atomic transduction is a transduction, and the composition of two transductions I and J is the transduction I; J that, given a structure A, first applies I to A and then J to the output I(A). A transduction is deterministic if it does not use unary lifts. In this case, for every input structure there is exactly one output structure. A transduction is almost quantifier-free if all formulas that parameterize the atomic operations comprising it are quantifier-free, and is deterministic almost quantifier-free if it additionally does not use unary lifts.
If C is a class of structures, we write I(C ) for the class which contains all possible outputs I(A) for A ∈ C . We say that two transductions I and J are equivalent on a class C of structures if every possible output of I(A) is also a possible output of J(A), and vice versa, for every A ∈ C .
It may happen that an atomic operation I is undefined for a given input structure A. For example, for an extension operation parametrized by a first order formula ϕ using a relation symbol R, if the input structure A does not carry the symbol R, then I(A) is undefined according to the above definition. This will never occur in our constructions. However, for completeness, we may define I(A) as a fixed structure ⊥ in such situations.
When considering a composition of atomic operations, we avoid overriding symbols by later operations, i.e., we always assume that subsequent atomic operations create relation symbols which are distinct from previously created relations symbols and also from symbols in the original signature. Since every transduction I is a composition of finitely many atomic operations, the result of I applied to a structure over a finite signature Σ will be again a structure over a finite signature Γ, which depends on Σ and I only (unless the result is undefined).
Example 1. Let C be the class of rooted forests of depth at most d, for some fixed d ∈ N. We describe an almost quantifier-free transduction which defines the parent function in C . First, using unary lifts, introduce d + 1 unary predicates D 0 , ..., D d , where D i marks the vertices of the input forest which are at distance i from a root. Next, using a function extension, define a partial function f which maps a vertex v in the input forest to its parent, or is undefined in case of a root. This can be done by a quantifier-free formula which selects those pairs x, y such that x and y are adjacent and, for each i, D i (x) implies D i−1 (y).
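As an illustration only (not part of the paper), the following Python sketch mimics Example 1: it computes the unary lifts D 0 , ..., D d by breadth-first search and then recovers the parent function from the quantifier-free condition above.

```python
from collections import deque

def depth_predicates(vertices, edges, roots, d):
    """Unary lifts D_0, ..., D_d: D[i] is the set of vertices at distance i from a root."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    dist = {r: 0 for r in roots}
    queue = deque(roots)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return [{v for v in vertices if dist.get(v) == i} for i in range(d + 1)], adj

def parent_function(vertices, edges, roots, d):
    """f(x) = the unique neighbour y of x with D_{i-1}(y) whenever D_i(x) holds (x not a root)."""
    D, adj = depth_predicates(vertices, edges, roots, d)
    f = {}
    for i in range(1, d + 1):
        for x in D[i]:
            for y in adj[x]:
                if y in D[i - 1]:
                    f[x] = y  # unique: a non-root has exactly one neighbour one level closer to the root
    return f  # partial function: the roots are not in its domain

# A rooted tree of depth 2: root 0 with children 1 and 2, and 3 a child of 1.
print(parent_function([0, 1, 2, 3], [(0, 1), (0, 2), (1, 3)], roots=[0], d=2))
# -> {1: 0, 2: 0, 3: 1}
```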
It will sometimes be convenient to work with the encoding of bounded-depth trees and forests as node sets endowed with the parent function, rather than graphs with prescribed roots. As seen in Example 1, these two encodings can be translated to each other by means of almost quantifier-free transductions, which render them essentially equivalent.
Normal forms. It will sometimes be useful to assume a certain normal form of transductions. We will need two similar, yet slightly different normal forms: one for general transductions and one for almost quantifier-free transductions. The proofs are standard; for completeness, we give them in the appendix.
Lemma 2 ( ). Every transduction is equivalent to a transduction of the form L; C; F; E; X; R, where:
• L is a sequence of unary lift operations;
• C is a sequence of copying operations;
• F is a sequence of function extension operations, one for each function on the output;
• E is a sequence of extension operations, one for each relation on the output;
• X is a single restriction operation; and
• R is a sequence of reduct operations.
Moreover, formulas parameterizing atomic operations in F; E; X use only relations and functions that appeared originally on input or were introduced by L; C. In particular, none of these formulas uses any function or relation introduced by an atomic operation in F; E.
Lemma 3 ( ). Every almost quantifier-free transduction is equivalent to an almost quantifier-free transduction that first applies a sequence of unary lifts and then applies a deterministic almost quantifier-free transduction.
Treedepth and shrubdepth
The treedepth of a graph G is the minimal depth of a rooted forest F with the same vertex set as G, such that for every edge uv of G, u is an ancestor of v, or v is an ancestor of u in F . A class C of graphs has bounded treedepth if there is a bound d ∈ N such that every graph in C has treedepth at most d. Equivalently, C has bounded treedepth if there is some number k such that no graph in C contains a simple path of length k [31]. The notion of treedepth lifts to structures: a class C of structures has bounded treedepth if the class of their Gaifman graphs has bounded treedepth.
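For concreteness, here is a small brute-force Python sketch of this definition (an illustration only, exponential in the graph size and meant for tiny examples; it follows the paper's convention that the root of a forest has depth 0, so a single vertex has treedepth 0):

```python
def components(vertices, edges):
    """Split a graph, given by its vertex and edge sets, into connected components."""
    vertices = set(vertices)
    adj = {v: {u for e in edges for u in e if v in e and u != v} for v in vertices}
    comps = []
    while vertices:
        comp, stack = set(), [next(iter(vertices))]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        vertices -= comp
        comps.append((comp, {e for e in edges if set(e) <= comp}))
    return comps

def treedepth(vertices, edges):
    """Minimum depth of a rooted forest whose ancestor relation covers all edges (root at depth 0)."""
    if len(vertices) <= 1:
        return 0
    comps = components(vertices, edges)
    if len(comps) > 1:
        return max(treedepth(vs, es) for vs, es in comps)
    # connected case: choose the best vertex to serve as the root and recurse on the rest
    return 1 + min(treedepth(set(vertices) - {v}, {e for e in edges if v not in e})
                   for v in vertices)

# A path on four vertices needs a witness forest of depth 2 under this convention.
print(treedepth({1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4)}))   # -> 2
```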
Shrubdepth. The following notion of shrubdepth has been proposed in [17] as a dense analogue of treedepth. Originally, shrubdepth was defined using the notion of tree-models. We present an equivalent definition based on the notion of connection models, introduced in [17] under the name of m-partite cographs of bounded depth.
A connection model with labels from Λ is a rooted labeled tree T where each leaf x is labeled by a label λ(x) ∈ Λ, and each non-leaf node v is labeled by a (symmetric) binary relation C(v) ⊆ Λ × Λ. Such a model defines a graph G on the leaves of T , in which two distinct leaves x and y are connected by an edge if and only if (λ(x), λ(y)) ∈ C(v), where v is the least common ancestor of x and y. We say that T is a connection model of the resulting graph G.
Example 4. Fix n ∈ N, and let G n be the bi-complement of a matching of order n, i.e., the bipartite graph with nodes a 1 , . . . , a n and b 1 , . . . , b n , such that a i is adjacent to b j if and only if i ≠ j. A connection model for G n is shown below (figure not reproduced here; a sketch of one such model is given below).

We can naturally extend the definition above to structures with unary functions by regarding each unary function as a binary relation selecting all (argument, value) pairs. A class of graphs C has bounded shrubdepth if there is a number h ∈ N and a finite set of labels Λ such that every graph G ∈ C has a connection model of depth at most h using labels from Λ.
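As an illustration (our own sketch, not the paper's figure), the following Python code builds one plausible connection model for G n — a depth-2 tree over two labels 'a' and 'b', with the relation {(a, b), (b, a)} at the root and the empty relation at the intermediate nodes — and evaluates the adjacency rule via least common ancestors.

```python
def lca(parent, x, y):
    """Least common ancestor in a rooted tree given by a parent map (the root maps to None)."""
    ancestors = set()
    while x is not None:
        ancestors.add(x)
        x = parent[x]
    while y not in ancestors:
        y = parent[y]
    return y

def connection_graph(parent, label, relation, leaves):
    """Edges defined by a connection model: distinct leaves x, y are adjacent
    iff (label[x], label[y]) belongs to the relation attached to their least common ancestor."""
    return {frozenset((x, y))
            for i, x in enumerate(leaves) for y in leaves[i + 1:]
            if (label[x], label[y]) in relation[lca(parent, x, y)]}

# One connection model for G_n (bi-complement of a matching), here with n = 3:
# root 'r' with relation {('a','b'), ('b','a')}; children m1..m3 with the empty relation;
# each m_i has two leaves, a_i labelled 'a' and b_i labelled 'b'.
n = 3
parent = {'r': None}
label, relation, leaves = {}, {'r': {('a', 'b'), ('b', 'a')}}, []
for i in range(1, n + 1):
    m, a, b = f'm{i}', f'a{i}', f'b{i}'
    parent.update({m: 'r', a: m, b: m})
    relation[m] = set()
    label[a], label[b] = 'a', 'b'
    leaves += [a, b]

edges = connection_graph(parent, label, relation, leaves)
# a_i and b_j come out adjacent exactly when i != j, matching the definition of G_n.
print(sorted(tuple(sorted(e)) for e in edges))
```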
Shrubdepth can be equivalently defined in terms of another graph parameter, as follows. Given a graph G and a set of vertices W ⊆ V (G), the graph obtained by flipping the adjacency within W is the graph with vertex set V (G) whose edge set is the symmetric difference of the edge set of G and the edge set of the clique on W .
The subset-complementation depth, or SC-depth, of a graph is defined inductively as follows:
• a graph with one vertex has SC-depth 0, and
• a graph G has SC-depth at most d, where d ≥ 1, if there is a set of vertices W ⊆ V (G) such that in the graph obtained from G by flipping the adjacency within W all connected components have SC-depth at most d − 1.
Example 5. A star has SC-depth at most 2: flipping the adjacency within the set consisting of the vertices of degree 1 yields a clique, which in turn has SC-depth at most 1.
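A minimal Python sketch of the flip operation used in this example (illustrative only, not from the paper): flipping the adjacency within the degree-one vertices of a star turns it into a clique.

```python
from itertools import combinations

def flip(edges, W):
    """Complement the adjacency inside W: symmetric difference with the edge set of the clique on W."""
    return set(map(frozenset, edges)) ^ {frozenset(p) for p in combinations(W, 2)}

# A star with centre 0 and leaves 1..4.
star = {(0, i) for i in range(1, 5)}
flipped = flip(star, W={1, 2, 3, 4})

# The flipped graph is the complete graph on {0,...,4}; a clique has SC-depth at most 1
# (flip within all its vertices to get an edgeless graph), so the star has SC-depth at most 2.
print(flipped == {frozenset(p) for p in combinations(range(5), 2)})   # -> True
```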
The notion of SC-depth leads to a natural notion of decompositions. An SC-decomposition of a graph G of SC-depth at most d is a rooted tree T of depth d with leaf set V (G), equipped with unary predicates W 0 , . . . , W d on the leaves. Each child s of the root in T corresponds to a connected component C s of the graph obtained from G by flipping the adjacency within W 0 , such that the subtree of T rooted at s, together with the unary predicates W 1 , . . . , W d restricted to V (C s ), forms an SC-decomposition of C s .
We will make use of the following properties, where the first one follows from the definition of shrubdepth, and the remaining ones follow from [17]. Proposition 6. Let C be a class of graphs. Then: 1. If C has bounded shrubdepth then the class of all induced subgraphs of graphs from C also has bounded shrubdepth.
2. C has bounded shrubdepth if and only if for some d ∈ N all graphs in C have SC-depth at most d.
3. If C has bounded treedepth then C has bounded shrubdepth.
4. If C has bounded shrubdepth and I is a transduction that outputs colored graphs, then I(C ) has bounded shrubdepth.
It is well-known (see [23]) that in the absence of large bi-cliques (complete bipartite graphs) a graph of bounded cliquewidth has in fact bounded treewidth. The same holds also for shrubdepth and treedepth. The lemma is proved by an easy induction on the depth of the connection models. Lemma 7 ( ). A class of graphs C has bounded treedepth if and only if graphs in C have bounded shrubdepth and exclude some fixed bi-clique as a subgraph.
Bounded expansion
A graph H is a depth-r minor of a graph G if H can be obtained from a subgraph of G by contracting mutually disjoint connected subgraphs of radius at most r. A class C of graphs has bounded expansion if there is a function f : N → N such that |E(H)|/|V (H)| ≤ f (r) for every r ∈ N and every depth-r minor H of a graph from C . Examples include the class of planar graphs, or any class of graphs with bounded maximum degree.
We will use the following lemma.
Lemma 8. Let C be a class of (colored) graphs of bounded expansion and let C be a copy operation. Then C(C ) is a class of colored graphs of bounded expansion.
Proof. Let G ∈ C. The Gaifman graph of C(G) is a subgraph of the so-called lexicographic product of G with K_2, i.e., of the graph constructed from G by replacing every vertex with two adjacent clones of it, where clones of adjacent vertices are adjacent as well. It is known that if a class of graphs C has bounded expansion, then the class of lexicographic products of graphs from C with any fixed graph H also has bounded expansion; see e.g., [31, Proposition 4.6].
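The construction used in this proof, the lexicographic product of G with K_2, is easy to spell out: every vertex is replaced by two adjacent clones, and clones of adjacent vertices are adjacent in all combinations. A minimal sketch (our own illustration):

```python
def lex_product_with_K2(V, E):
    """Replace every vertex v by two adjacent clones (v, 0) and (v, 1);
    clones of adjacent vertices are adjacent in all four combinations."""
    V2 = {(v, i) for v in V for i in (0, 1)}
    E2 = {frozenset({(v, 0), (v, 1)}) for v in V}
    E2 |= {frozenset({(u, i), (v, j)})
           for e in E for u in e for v in e if u != v
           for i in (0, 1) for j in (0, 1)}
    return V2, E2

# A single edge blows up to a clique on 4 vertices.
V2, E2 = lex_product_with_K2({0, 1}, {frozenset({0, 1})})
print(len(V2), len(E2))  # 4 6
```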
The connection between treedepth and graph classes of bounded expansion can be established via p-treedepth colorings. For an integer p, a function c : V(G) → C is a p-treedepth coloring if, for every i ≤ p and set X ⊆ V(G) with |c(X)| = i, the induced graph G[X] has treedepth at most i. A graph class C has low treedepth colorings if for every p ∈ N there is a number N_p such that for every G ∈ C there exists a p-treedepth coloring c : V(G) → C with |C| ≤ N_p. Theorem 9 ( [29]). A class of graphs C has bounded expansion if, and only if, it has low treedepth colorings.
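The definitions above can be verified directly on small graphs. The following sketch (our own illustration; exponential brute force) computes treedepth by the standard recursion, td(G) = 1 + min over v of td(G − v) on connected graphs, and then checks the p-treedepth-coloring condition for all color sets of size at most p.

```python
from itertools import combinations

def components(V, E):
    seen, comps = set(), []
    for s in V:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for e in E if v in e for w in e if w != v)
        seen |= comp
        comps.append(comp)
    return comps

def treedepth(V, E):
    if not V:
        return 0
    comps = components(V, E)
    if len(comps) > 1:
        return max(treedepth(C, {e for e in E if e <= C}) for C in comps)
    return 1 + min(treedepth(V - {v}, {e for e in E if v not in e}) for v in V)

def is_p_treedepth_coloring(V, E, color, p):
    """Check: for every set S of at most p colors, the vertices colored within S
    induce a graph of treedepth at most |S|.  Checking the maximal such vertex sets
    suffices, since treedepth is monotone under taking subgraphs."""
    colors = sorted(set(color.values()))
    for i in range(1, min(p, len(colors)) + 1):
        for S in combinations(colors, i):
            X = {v for v in V if color[v] in S}
            if treedepth(X, {e for e in E if e <= X}) > i:
                return False
    return True

# Path on 4 vertices, 2-colored alternately: each single color class is edgeless,
# but both colors together give the whole path (treedepth 3 > 2).
V = {0, 1, 2, 3}
E = {frozenset({i, i + 1}) for i in range(3)}
color = {v: v % 2 for v in V}
print(is_p_treedepth_coloring(V, E, color, 1), is_p_treedepth_coloring(V, E, color, 2))  # True False
```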
Main results
In this section we introduce two notions which generalize the concept of bounded expansion. Then we state the main results and outline the proof. First, we introduce classes of structurally bounded expansion. This notion arises from closing bounded expansion graph classes under transductions: a class of graphs C has structurally bounded expansion if C ⊆ I(D) for some transduction I and some class D of graphs of bounded expansion. The second notion, low shrubdepth covers, arises from the low treedepth coloring characterization of bounded expansion (see Theorem 9) by replacing treedepth by its dense counterpart, shrubdepth. For convenience, we formally define this in terms of covers.
Definition 11. A cover of a graph G is a family U_G of subsets of V(G) such that ⋃ U_G = V(G). A cover U_G is a p-cover, where p ∈ N, if every set of at most p vertices is contained in some U ∈ U_G. If C is a class of graphs, then a (p-)cover of C is a family U = (U_G)_{G∈C} such that U_G is a (p-)cover of G for every G ∈ C; the cover U is finite if sup_{G∈C} |U_G| is finite. For a cover U of C, let C[U] denote the class of all induced subgraphs G[U] with G ∈ C and U ∈ U_G. We say that the cover U has bounded treedepth (respectively, bounded shrubdepth) if the class C[U] has bounded treedepth (respectively, shrubdepth).
Example 12. Let T be the class of trees and let p ∈ N. We construct a finite p-cover U of T which has bounded treedepth. Given a rooted tree T , let U T = {U 0 , . . . , U p }, where U i is the set of vertices of T whose depth is not congruent to i modulo p + 1. Note that T [U i ] is a forest of height p, and that U T is a p-cover of T . Hence U = (U T ) T ∈T is a finite p-cover of T of bounded treedepth.
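Example 12 is straightforward to implement: compute depths from the root and, for each residue i modulo p + 1, drop the vertices whose depth is congruent to i. A minimal sketch (our own illustration), representing a rooted tree by a parent dictionary:

```python
def depth_of(parent):
    """parent: dict child -> parent; the root maps to None."""
    depth = {}
    def d(v):
        if v not in depth:
            depth[v] = 0 if parent[v] is None else d(parent[v]) + 1
        return depth[v]
    for v in parent:
        d(v)
    return depth

def tree_p_cover(parent, p):
    """U_i = vertices whose depth is NOT congruent to i modulo p+1.
    Any p vertices miss some residue class, so they lie in a common U_i,
    and each T[U_i] is a forest of height at most p."""
    depth = depth_of(parent)
    return [{v for v in parent if depth[v] % (p + 1) != i} for i in range(p + 1)]

# A path rooted at 0: 0 - 1 - 2 - 3 - 4, with p = 1.
parent = {0: None, 1: 0, 2: 1, 3: 2, 4: 3}
print(tree_p_cover(parent, 1))  # [{1, 3}, {0, 2, 4}]
```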
In analogy to low treedepth colorings, we can now characterize graph classes of bounded expansion using covers. We say that a class C of graphs has low treedepth covers if for every p ∈ N there is a finite p-cover U of C with bounded treedepth. The following lemma follows easily from Theorem 9.
Lemma 13 ( ). A class of graphs has bounded expansion if, and only if, it has low treedepth covers.
We now define the second notion generalizing the concept of bounded expansion. The idea is to use low shrubdepth covers instead of low treedepth covers.
Definition 14.
A class C of graphs has low shrubdepth covers if, and only if, for every p ∈ N there is a finite p-cover U of C with bounded shrubdepth.
It is easily seen that Lemma 13 together with Proposition 6(3) imply that every class of bounded expansion has low shrubdepth covers. Our main result is the following theorem.
Theorem 15. A class of graphs has structurally bounded expansion if, and only if, it has low shrubdepth covers.
As a byproduct of our proof of Theorem 15 we obtain the following quantifier-elimination result, which we believe is of independent interest. Theorem 16. Let C be a class of colored graphs which has low shrubdepth covers. Then every transduction I is equivalent to some almost quantifier-free transduction J on C.
We now outline the proof of Theorem 15 and Theorem 16. Both theorems follow easily from Proposition 18 and Proposition 19 stated below. These are proved in subsequent sections.
We start with the following lemma, which intuitively shows that covers commute with almost quantifier-free transductions.
Lemma 17. If a class of graphs C has low shrubdepth covers and I is an almost quantifier-free transduction that outputs colored graphs, then I(C ) also has low shrubdepth covers.
Proof (sketch). The idea is that for any almost quantifier-free transduction I there is a constant c such that any induced substructure of I(G) on p elements depends only on an induced substructure of G of size p · c. In particular, a (p · c)-cover of G induces a p-cover of I(G). Moreover, as having bounded shrubdepth is preserved by transductions, a low shrubdepth cover of C induces a low shrubdepth cover of I(C). The details are presented in Section 4.
The main novel ingredient in our proof of Theorem 15 and Theorem 16 is the following result, which intuitively states that classes with low shrubdepth covers are bi-definable with classes of bounded expansion, using almost quantifier-free transductions.
Proposition 18. Suppose C is a class of graphs with low shrubdepth covers. Then there is a pair of transductions S and I, where S is almost quantifier-free and I is deterministic almost quantifier-free, such that S(C ) is a class of colored graphs of bounded expansion and I(S(G)) = {G} for each G ∈ C .
Clearly, Proposition 18 implies that C has structurally bounded expansion, since it can be obtained by applying the transduction I to the class S(C) of bounded expansion. Thus, the right-to-left implication of Theorem 15 is a corollary of the proposition. The proof of Proposition 18 is presented in Section 5. We sketch the rough idea below.
Proof (sketch). First, in Lemma 31 of Section 5.2, we prove the special case where C is a class of graphs of bounded shrubdepth, and for those we prove bi-definability with classes of trees of bounded depth. In particular, if D is a class of graphs of bounded shrubdepth, then there is a pair of almost quantifier-free transductions T, I 0 such that T(D) is a class of colored trees of bounded depth and such that I 0 (T(H)) = {H} for all H ∈ D. Lemma 31 is the combinatorial core of this paper.
To prove Proposition 18, we lift Lemma 31 to the general case using covers, as follows. Let C be a class with low shrubdepth covers and let U be a 2-cover of C of bounded shrubdepth, and let N be such that |U G | N for G ∈ C . We apply the bounded shrubdepth case to the class D = C [U], yielding almost quantifier-free transductions T and I 0 as above. The transduction S works as follows: given a graph G ∈ C , introduce N unary predicates marking the cover U G of G, and for each U ∈ U G , apply T to the induced subgraph G[U ] of G, yielding a colored tree T(G[U ]). Define S(G) as the union of the trees T(G[U ]), for U ∈ U G . As U G is a 2-cover of G, G is the union of the induced graphs G[U ] for U ∈ U G . As each graph G[U ] can be recovered from the tree T(G[U ]) using the inverse transduction I 0 , it follows that G can be recovered from the union S(G). This yields the inverse transduction I such that I(S(G)) = {G}. As S is almost quantifier-free by construction, it follows from Lemma 17 that S(C ) is a class with low shrubdepth covers. Moreover, each graph in S(C ) is a union of at most N trees, so it does not contain K N +1,N +1 as a subgraph. It follows from Lemma 7 that the low shrubdepth cover of S(C ) is in fact a low treedepth cover. Hence, S(C ) has low treedepth covers, i.e., has bounded expansion. Theorem 16, and the remaining implication in Theorem 15 are consequences of the following result.
Proposition 19. Let C be a class of graphs of bounded expansion and let I be a transduction. Then I is equivalent to an almost quantifier-free transduction J on C .
We note that Proposition 19 is a strengthening of similar statements provided by Dvořák et al. [9] and of Grohe and Kreutzer [21], and could be derived by a careful analysis of their proofs. In Section 6 we provide a self-contained proof, which we believe is simpler than the previous proofs, and is sketched below.
Proof (sketch). We use the characterization of bounded expansion classes as those which have low treedepth covers. We first prove Proposition 19 for forests of bounded depth. This can be handled by a direct (although slightly cumbersome) combinatorial argument, similarly as in [9]. In Appendix F.2 we present an argument using tree automata.
The statement for classes of forests of bounded depth then easily lifts to classes of bounded treedepth. Here we use the fact that in a graph of bounded treedepth it is possible to encode a depth-first search forest of bounded depth, by using unary predicates marking the depth of each node in the spanning forest.
We then lift the result from classes of bounded treedepth using covers. Specifically, suppose for simplicity that the transduction I is a single extension operation, parametrized by a formula ψ. We then proceed by induction on the structure of the formula ψ and show that it can be replaced by a quantifier-free formula, at the cost of introducing unary functions defined by an almost quantifier-free transduction.
In the inductive step, the only nontrivial case is the one of existential quantification, i.e., of formulas of the form ψ(ȳ) = ∃x.ϕ(x, ȳ), where ϕ(x, ȳ) may be assumed to be a quantifier-free formula involving unary functions, by inductive assumption. We consider a p-cover U of C where p is a constant such that there are at most p different terms occurring in ϕ(x, ȳ). Since C has bounded expansion, we may assume that the cover U has bounded treedepth, and that there is a constant N ∈ N such that |U_G| ≤ N for all G ∈ C. For a fixed graph G ∈ C, the existentially quantified variable x must be in one of the sets U ∈ U_G. Therefore, the formula ψ(ȳ) is equivalent to a disjunction of at most N formulas ψ_i(ȳ), for i = 1, . . . , N, where each formula ψ_i(ȳ) performs existential quantification restricted to the ith set in U_G (where U_G is ordered arbitrarily). By the special case of the proposition proved for classes of bounded treedepth, ψ_i(ȳ) is equivalent to a quantifier-free formula on C[U] (the quantifier-free formula uses unary functions introduced by almost quantifier-free transductions). Summing up, ψ is equivalent on G to a disjunction of quantifier-free formulas involving unary functions that are introduced by almost quantifier-free transductions. This deals with the inductive step.
We finally show how to conclude Theorem 15 and Theorem 16 from Lemma 17, Proposition 18 and Proposition 19.
Proof (of Theorem 15). As observed, the right-to-left implication of Theorem 15 follows from Proposition 18. We now show the left-to-right implication.
Let C be a class of bounded expansion and let I be a transduction that outputs colored graphs. We show that I(C ) has low shrubdepth covers.
By Lemma 13, C has low treedepth covers. Applying Proposition 19 yields an almost quantifier-free transduction J such that I(C ) = J(C ). As C in particular has low shrubdepth covers (cf. Proposition 6 (3)), we may apply Lemma 17 to J and C to deduce that J(C ) = I(C ) has low shrubdepth covers.
Proof (of Theorem 16). Proposition 18 allows to reduce the theorem to the case of classes of bounded expansion, as almost quantifier-free transductions are closed under composition. The case of bounded expansion classes is handled by Proposition 19.
It remains to provide the details of the proofs of Lemma 17, Proposition 18 and Proposition 19. This is done in Section 4, Section 5 and Section 6, respectively. After that, in Section 7 we conclude with a preliminary algorithmic result concerning the model-checking problem for first-order logic on classes with structurally bounded expansion.
Proof of Lemma 17 (almost quantifier-free transductions commute with covers)
In this section we prove Lemma 17, which we restate for convenience.
Lemma 17. If a class of graphs C has low shrubdepth covers and I is an almost quantifier-free transduction that outputs colored graphs, then I(C ) also has low shrubdepth covers.
We start by formulating the following lemma, which states that almost quantifier-free transductions are, in a certain sense, local.
Lemma 20. For every deterministic almost quantifier-free transduction I there is a constant c ∈ N such that the following holds. For every structure A and every element v of I(A) there is a set S_v ⊆ V(A) of size at most c such that for any sets W ⊆ V(I(A)) and U ⊆ V(A) with S_w ⊆ U for all w ∈ W, the substructure of I(A) induced by W is equal to the substructure of I(A[U]) induced by W. In order to prove the lemma, we define the following notions of dependency and support.
Definition 21. Consider a term τ(x) of the form f_p(f_{p−1}(· · · f_1(x) · · ·)) and a structure A carrying partial functions f_1, . . . , f_p. We say that an element v ∈ V(A) τ-depends on itself and on all elements of the form (f_i ∘ · · · ∘ f_1)(v) for i ∈ [p], whenever defined. For a quantifier-free formula ϕ(x_1, . . . , x_k), an element v ∈ V(A) ϕ-depends on all elements on which v τ-depends, for any term τ appearing in ϕ. For an element v, the set of elements on which v ϕ-depends in A will be denoted by cl^A_ϕ(v); note that the size of this set is always bounded by a constant depending only on ϕ. Observe also that given elements v_1, . . . , v_k, to check whether ϕ(v_1, . . . , v_k) holds in A it suffices to check whether it holds in the substructure of A induced by all elements on which v_1, . . . , v_k ϕ-depend.
With the auxiliary notion of dependency defined we can come to the definition of support.
Definition 22.
Suppose I is a deterministic almost quantifier-free transduction, and let A be an input structure. For an element v ∈ V (I(A)) and a subset S ⊆ V (A), we now define what it means that v is I-supported by S. We first define this for atomic operations (note that unary lifts are excluded since I is assumed to be deterministic): • If I is a reduct operation or a copy operation, then v is I-supported by S if and only if v ∈ S.
• If I is a restriction or an extension operation, say parameterized by a formula ϕ, then v is I-supported by S if and only if cl^A_ϕ(v) ⊆ S.
• Suppose I is a function extension operation, say introducing a partial function f using a binary formula ϕ(x, y). Then v is I-supported by S if and only if cl^A_ϕ(v) ⊆ S and the following holds: whenever f(v) is defined, also cl^A_ϕ(f(v)) ⊆ S.
Finally, for non-atomic deterministic almost quantifier-free transductions the notion of I-supporting is defined by induction on the structure of the transduction. Suppose I is the composition I_1; I_2 of two transductions. Then v ∈ V(I(A)) is I-supported by S ⊆ V(A) if there exists a subset T ⊆ V(I_1(A)) and, for each w ∈ T, a subset S_w ⊆ S such that v is I_2-supported by T and each w ∈ T is I_1-supported by S_w.
The notion of supporting is trivially closed under taking supersets: if v is I-supported by S, then v is also I-supported by any superset of S.
Proof (of Lemma 20). By induction on the definition of an almost quantifier-free transduction I it is easy to see that for every element v of I(A) there is a set S_v ⊆ V(A) of size at most c, where c depends only on I, such that v is I-supported by S_v. By induction we also observe that if W ⊆ V(I(A)) and U ⊆ V(A) are such that every v ∈ W is I-supported by U, then the substructure of I(A) induced by W is equal to the substructure of I(A[U]) induced by W. This proves the lemma.
We can now prove Lemma 17.
Proof (of Lemma 17). Let C be a class with low shrubdepth covers and let I be an almost quantifier-free transduction that outputs colored graphs. We show that I(C ) has low shrubdepth covers. By normalizing I as described in Lemma 3, we may assume that I is of the form L; J, where L is a sequence of unary lifts and J is deterministic almost quantifier-free. As C has low shrubdepth covers, the class D = L(C ) also has low shrubdepth covers (this is implied by Proposition 6(4)). Moreover, I(C ) = J(D). Therefore, it suffices to focus on the deterministic almost quantifier-free transduction J applied to the class D. Note that D is a class of colored graphs, i.e., graphs with unary predicates on their vertices.
Let c be the constant provided by Lemma 20 for the transduction J. We need to find, for every p ∈ N, a finite p-cover of J(D) of bounded shrubdepth, so let us fix p. Let U be a finite (c · p)-cover of D of bounded shrubdepth. For a graph G ∈ D and U ∈ U_G, let W_U = {v ∈ V(J(G)) : S_v ⊆ U}, where S_v is the set provided by Lemma 20, and let W_{J(G)} = {W_U : U ∈ U_G}. Clearly |W_{J(G)}| ≤ |U_G|, so W is finite as well. We need to verify that W is a p-cover and that it has bounded shrubdepth. To see that W is a p-cover, take any p elements v_1, . . . , v_p of J(G). The union S_{v_1} ∪ · · · ∪ S_{v_p} has at most c · p elements, so it is contained in some U ∈ U_G, and then v_1, . . . , v_p ∈ W_U. To see that W is a bounded shrubdepth cover, observe that by assumption D[U] has bounded shrubdepth, hence by Proposition 6(4) we find that J(D[U]) also has bounded shrubdepth. By Lemma 20, for each G ∈ D and W_U ∈ W_{J(G)}, the induced substructure J(G)[W_U] is an induced substructure of J(G[U]) ∈ J(D[U]), which also has bounded shrubdepth by Proposition 6(1).
Proof of Proposition 18 (bi-definability of classes with low shrubdepth covers and classes of bounded expansion)
In this section we prove Proposition 18, which we repeat for convenience.
Proposition 18. Suppose C is a class of graphs with low shrubdepth covers. Then there is a pair of transductions S and I, where S is almost quantifier-free and I is deterministic almost quantifier-free, such that S(C ) is a class of colored graphs of bounded expansion and I(S(G)) = {G} for each G ∈ C .
Clearly, Proposition 18 implies that C has structurally bounded expansion, since it can be obtained by applying the transduction I to the class S(C) of bounded expansion. Thus, the right-to-left implication of Theorem 15 is a corollary of the proposition.
The idea of the proof of Proposition 18 is as follows. We first prove in Lemma 23 of Section 5.1 that connected components in graphs of bounded shrubdepth are definable by almost quantifier-free transductions. We use Lemma 23 to first prove Proposition 18 for the special case where C is a class of graphs of bounded shrubdepth, and for those we prove bi-definability with classes of trees of bounded depth. This is done in Lemma 31 of Section 5.2. Then, we conclude the general case in Section 5.3, by lifting Lemma 31 using covers.
Defining connected components in graphs of bounded shrubdepth
The following lemma is the combinatorial core of our proof of Proposition 18.
Lemma 23. Let C be a class of graphs of bounded shrubdepth. There is an almost quantifier-free transduction F such that for a given G ∈ C, every output of F on G is equal to G enriched by a function g : V(G) → V(G) such that g(v) = g(w) if and only if v and w are in the same connected component of G.
The rest of Section 5.1 is devoted to the proof of Lemma 23.
Guidance systems. We first introduce the notions of guidance systems and of functions guided or guidable by them. This is a combinatorial abstraction for functions computable by almost quantifier-free transductions.
Let G be a graph. A guidance system in G is any family U of subsets of the vertex set of G. The size of a guidance system U is the cardinality of the family U. We say that a partial function f : V(G) ⇀ V(G) is guided by U if for every vertex v in the domain of f there is a set U ∈ U such that N[v] ∩ U = {f(v)}, where N[v] denotes the closed neighborhood of v in G; for ℓ ∈ N, a partial function is ℓ-guidable if it is guided by some guidance system of size at most ℓ. Observe that an ℓ-guidable partial function maps each vertex v from its domain to a vertex in the same connected component as v. The following lemmas will be useful for operating on guidable functions. Lemma 24 ( ). Let G be a graph and suppose g : V(G) ⇀ V(G) is a partial function such that the restriction g|_C of g to each connected component C of G is ℓ-guidable. Then g is ℓ-guidable.
Lemma 25 ( ). Let G be a graph and let g_1, . . . , g_s : V(G) ⇀ V(G) be partial functions with pairwise disjoint domains such that g_i is ℓ_i-guidable for each i ∈ [s]. Then the union g_1 ∪ · · · ∪ g_s is (ℓ_1 + · · · + ℓ_s)-guidable. Finally, guidable functions can be computed using almost quantifier-free transductions.
Lemma 26 ( ). Let C be a class of graphs and let ℓ ∈ N be fixed. Suppose that each G ∈ C is equipped with an ℓ-guidable function f_G : V(G) ⇀ V(G). Then there exists an almost quantifier-free transduction which, given G ∈ C, has exactly one output: the graph G enriched with f_G.
We will use the following fact stating that graphs of bounded shrubdepth do not admit long induced paths. Lemma 27 ( [16]). For every class C of graphs of bounded shrubdepth there exists a constant r ∈ N such that no graph from C contains a path on more than r vertices as an induced subgraph. Consequently, for every graph G ∈ C every connected component of G has diameter at most r.
Spanning forests. For a graph G and a function g : V(G) → V(G), we say that g defines a spanning forest of depth r on G if g is guarded by G (i.e., g maps every vertex to itself or to one of its neighbors) and the r-fold composition g^r of g with itself is constant on every connected component of G. The following lemma states that guidance systems can define shallow spanning forests in graph classes of bounded shrubdepth.
Lemma 28.
For every class C of graphs of bounded shrubdepth there exist constants q, r ∈ N such that for every G ∈ C there is a function f G : V (G) → V (G) which is q-guidable as a partial function on G and defines a spanning forest of depth r on G.
We first show how Lemma 23 follows from Lemma 28.
Proof (of Lemma 23). By Lemma 26, there is an almost quantifier-free transduction I which, given a graph G ∈ C on input, constructs the function f G obtained from Lemma 28. Now let g = f r G be the r-fold composition of f . Clearly, g can be computed by an almost quantifier-free transduction using a single function extension operation, making use of the function f G constructed by I. As g is constant on every connected component of G, Lemma 23 follows.
It remains to prove Lemma 28.
Constructing guidable choice functions. Lemma 28 will follow easily from the fact that connected components of graphs of bounded shrubdepth have bounded diameter by Lemma 27, and from the following lemma, essentially stating that every total binary relation whose graph has bounded shrubdepth contains a guidable choice function.
Lemma 29.
For every class C of graphs of bounded shrubdepth there exists a constant p ∈ N such that the following holds. Suppose G ∈ C and A and B are two disjoint subsets of vertices of G such that every vertex of A has a neighbor in B. Then there is a function f : A → B which is p-guidable as a partial function on G.
We found two conceptually different proofs of this result. We believe that both proofs describe complementary viewpoints on the problem, so we present both of them. To keep the presentation concise, in the main body of the paper we give only one proof, using the characterization of classes of bounded shrubdepth using connection models, and their close connection to bi-cographs. We present the second proof in Appendix D.2, which provides an explicit greedy procedure leading to the construction of f . We first prove a special case of Lemma 29 for graphs which have a connection model using two different labels α and β, where one part of G has label α and the other part has label β. Such graphs are called bi-cographs (cf. [18]).
Lemma 30. Let G be a bi-cograph with parts A, B and with a connection model of height h where vertices in A have label α and vertices in B have label β. Suppose further that every vertex in A has a neighbor in B. Then there is a function f : A → B which is h-guidable as a partial function on G.
Proof. By Lemma 24, it is enough to consider the case when G is connected. Let T be the assumed connection model of height h.
We prove that there is an h-guidable function f : A → B. The proof proceeds by induction on h. The base case, when h = 1 is trivial, because then every vertex of A is adjacent to every vertex of B, so picking any w ∈ B the function f : A → B which maps every v ∈ A to w is guided by the guidance system consisting only of {w}.
In the inductive step, assume that h ≥ 2 and the statement holds for height h − 1. Since G is connected, either the label C(r) of the root r contains the pair (α, β), or r has only one child v. In the latter case, the subtree of T rooted at v is a connection model of G of height h − 1, so the conclusion holds by inductive assumption. Hence, we assume that (α, β) ∈ C(r).
Let S be the set of bipartite induced subgraphs H of G such that H is defined by the connection model rooted at some child of r in T . As (α, β) ∈ C(r), it follows that if H 1 , H 2 ∈ S are two distinct graphs, then every vertex with label α in H 1 is connected to every vertex with label β in H 2 . We consider two cases, depending on whether S contains more than one graph H containing a vertex with label β, or not.
In the first case, there are at least two graphs H 1 , H 2 ∈ S such that H 1 and H 2 both contain a vertex with label β. Pick w 1 ∈ V (H 1 ) and w 2 ∈ V (H 2 ), both with label β. Then every vertex in A is adjacent either to w 1 or to w 2 . Let f : A → B be a function which maps a vertex v ∈ A to w 1 if v is adjacent to w 1 , and to w 2 otherwise. Then f is guided by the guidance system consisting of {w 1 } and {w 2 }.
In the second case, there is only one graph H ∈ S which contains a vertex with label β; in particular B ⊆ V(H). Pick any w ∈ B. Every vertex of A outside of V(H) is adjacent to w, since (α, β) ∈ C(r), so mapping all these vertices to w is guided by the guidance system consisting of {w}. Moreover, every vertex of A ∩ V(H) has a neighbor in B already within H, and H is a bi-cograph with a connection model of height h − 1, so by the inductive assumption there is an (h − 1)-guidable function from A ∩ V(H) to B. The union of these two functions is the desired h-guidable function f : A → B. We now prove Lemma 29 in the general case.
Proof (of Lemma 29). Let C be a class of graphs of bounded shrubdepth. Hence, there is a finite set of labels Λ and a number h ∈ N such that every graph G ∈ C has a connection model of height h using labels from Λ. For α ∈ Λ, let V α denote the set of vertices of G which are labeled α.
Define a function µ : A → Λ 2 as follows: for every vertex v define µ(v) as (α, β), where α is the label of v, and β ∈ Λ is an arbitrary label such that v has a neighbor in B with label β.
For every pair of labels α, β, consider the bipartite graph G_αβ which is the subgraph of G consisting of µ^{−1}((α, β)) on one side and B ∩ V_β on the other side, and all edges between these sets; note that these sets are disjoint, as one is contained in A and the other in B. Observe that G_αβ is a bi-cograph with a connection model of height h, such that every vertex in V(G_αβ) ∩ A has a neighbor in V(G_αβ) ∩ B. By Lemma 30 there is a function f_αβ : µ^{−1}((α, β)) → B ∩ V_β which is h-guidable as a partial function on G_αβ. Observe that f_αβ is also h-guidable when treated as a partial function on G; it suffices to take the same guidance system, but with all its sets restricted to B. Finally, the domains µ^{−1}((α, β)) are pairwise disjoint and cover A, so by Lemma 25 the union of the functions f_αβ over all pairs (α, β) ∈ Λ^2 is a function f : A → B which is p-guidable for p = h · |Λ|^2.
Constructing guidable spanning forests. We are ready to complete the proof of Lemma 28 stating that shallow spanning forests on classes of bounded shrubdepth are definable by guidance systems.
Proof (of Lemma 28). Let C be a class of graphs of bounded shrubdepth, and let r and p be constants provided by Lemma 27 and Lemma 29, respectively, for the class C . Let R 0 ⊆ V (G) be a set of vertices which contains exactly one vertex in each connected component C of G. By Lemma 27, we may assume that every vertex in G is at distance at most r from a unique vertex in R 0 . For i = 1, . . . , r, let R i be the set of vertices of G whose distance to some vertex in R 0 is equal to i. Then the sets R 0 , R 1 , . . . , R r form a partition of the vertex set of G. Furthermore, observe that for i = 1, . . . , r, every vertex of R i has a neighbor in R i−1 .
Fix a number i ∈ {1, . . . , r}. Apply Lemma 29 to R_i as A and R_{i−1} as B. This yields a function f_i : R_i → R_{i−1} which is p-guidable as a partial function on G. In particular, f_i is also a p-guidable partial function f_i : V(G) ⇀ V(G). Let f_0 be a partial function from V(G) to V(G) that fixes every vertex of R_0 and is undefined otherwise. Then f_0 is guided by the guidance system {R_0}, hence it is 1-guidable in G.
Consider now the function f_G : V(G) → V(G) defined as the union of the partial functions f_i for i ∈ {0, 1, . . . , r}; this is well defined because the sets R_0, . . . , R_r partition V(G). By Lemma 25 we find that f_G is p(r + 1)-guidable. By construction, f_G is guarded, and f_G^r maps every vertex v ∈ V(G) to the unique vertex in R_0 which lies in the connected component of v. This proves that f_G defines a spanning forest of depth r on G.
This completes the proof of Lemma 28, and hence also of Lemma 23.
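Setting aside the guidance-system bookkeeping (which is the actual content of Lemma 29 and Lemma 25), the layered construction of f_G in the proof above is just a breadth-first search with one root per component. The following sketch (our own illustration; parents are chosen arbitrarily rather than via guidance systems) computes such an f_G and iterates it to reach the roots.

```python
from collections import deque

def layered_spanning_forest(V, E):
    """Pick one root per connected component (the set R_0), run a BFS, and let
    f map every vertex of layer R_i (i >= 1) to a neighbour in R_{i-1}; roots are
    fixed points.  Iterating f then sends every vertex to the root of its component."""
    adj = {v: set() for v in V}
    for e in E:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    f, layer = {}, {}
    for s in sorted(V):
        if s in layer:
            continue
        layer[s], f[s] = 0, s          # s joins R_0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in layer:
                    layer[w] = layer[v] + 1
                    f[w] = v            # some neighbour one layer closer to the root
                    queue.append(w)
    return f

def root_of(f, v):
    while f[v] != v:
        v = f[v]
    return v

# Two components: a path 0-1-2 and an edge 3-4.
E = {frozenset({0, 1}), frozenset({1, 2}), frozenset({3, 4})}
f = layered_spanning_forest({0, 1, 2, 3, 4}, E)
print(f, [root_of(f, v) for v in range(5)])  # roots: 0 for {0,1,2}, 3 for {3,4}
```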
Proposition 18 for classes of bounded shrubdepth
In this section, we prove Proposition 18 in the special case when C is a class of graphs of bounded shrubdepth:

Lemma 31. Let B be a class of graphs of bounded shrubdepth. Then there is a pair of almost quantifier-free transductions T and B and a class T of colored rooted trees of bounded depth such that T(B) ⊆ T and B(t) = {G} for every G ∈ B and every t ∈ T(G). Moreover, for any G ∈ B, every t ∈ T(G) is an SC-decomposition of G.
We remark that in Lemma 31, every output of the transduction T is an SC-decomposition of bounded depth of the input graph, whereas the transduction B recovers the graph from its SC-decomposition.
In other words, the lemma allows to construct the SC-decomposition of a graph from a class of graphs of bounded shrubdepth using an almost quantifier-free transduction. This argument is the combinatorial cornerstone of our approach. Conceptually, it shows that bounded-height decompositions of graphs from classes of bounded shrubdepth can be defined in a very weak logic, as essentially the whole information about the decomposition can be pushed to unary predicates on vertices (added using unary lifts), and from this information the decomposition can be reconstructed using only deterministic almost quantifier-free formulas.
We need one more auxiliary lemma which allows us to apply a transduction in parallel to a disjoint union of structures. Suppose K is a set of structures over the same signature. The bundling of K is a structure obtained by taking the disjoint union ⨆K of the structures in K, extended with a set X disjoint from V(⨆K) and a function f : V(⨆K) → X such that f(x) = f(y) if and only if x, y belong to the same structure in K. We denote such a bundling by ⨆^X K. We now prove that an almost quantifier-free transduction working on each structure separately can be lifted to their bundling.
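A bundling is simply a disjoint union tagged with one fresh element per member structure. A minimal sketch over graphs (our own illustration):

```python
def bundling(graphs):
    """graphs: list of (V, E) pairs.  Returns the disjoint union, a fresh set X with
    one element per member, and the function f sending each vertex to its tag in X."""
    V, E, X, f = set(), set(), set(), {}
    for i, (Vi, Ei) in enumerate(graphs):
        tag = ('bundle', i)
        X.add(tag)
        for v in Vi:
            V.add((i, v))
            f[(i, v)] = tag
        E |= {frozenset({(i, u), (i, w)}) for e in Ei for u in e for w in e if u != w}
    return (V, E), X, f

(V, E), X, f = bundling([({0, 1}, {frozenset({0, 1})}), ({0}, set())])
print(len(V), len(E), len(X))  # 3 1 2
```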
Lemma 32 ( ). Let I be an almost quantifier-free transduction. Then there is an almost quantifier-free transduction I' such that if the input to I' is the bundling ⨆^X K of K, then I'(⨆^X K) is the set containing the bundling of every set formed by taking one member from I(K) for each K ∈ K.
We can now give a proof of Lemma 31.
Proof (of Lemma 31). Let B d be the class of graphs of SC-depth at most d. We prove the statement for B = B d , yielding appropriate transductions B d and T d . Observe that this implies the general case: if B is any class of graphs of bounded shrubdepth, then by Proposition 6(2) there is a number d such that every graph from B has SC-depth at most d, hence we may set B = B d , T = T d , and T = T(B).
The proof is by induction on d. The base case, when d = 0, is trivial. In general, every output of T d will be an SC-decomposition of the input graph of depth d. That is, it is a tree of height d, here encoded as a structure by providing its parent function. The leaves of this tree are exactly the original vertices of the input graph G. They are colored with d unary predicates W 0 , W 1 , . . . , W d−1 , corresponding to flip sets used on consecutive levels of the SC-decomposition. Now, given an almost quantifier-free transduction T d we construct an almost quantifier-free transduction T d+1 . The transduction T d+1 , given a graph G, nondeterministically computes a rooted tree t G as above in the following steps. Implementing each of them using an almost quantifier-free transduction is straightforward, and to keep the description concise, we leave the implementation details to the reader.
• Since G ∈ B_{d+1}, there is a vertex subset W ⊆ V(G) such that in the graph G' obtained from G by flipping the adjacency within W every connected component belongs to B_d. Using a unary lift, introduce a unary predicate W_0 selecting the set W and compute G' by flipping the adjacency within W_0.
• Let g : V(G') → V(G') be the function given by Lemma 23, applied to the graph G'. Note that g can be constructed using an almost quantifier-free transduction. Using copying and restriction, create a copy X of the image of g. By composing g with the function that maps each element of the image of g to its copy (easily constructible using function extension), we construct a function g' : V(G') → X such that g'(v) = g'(w) if and only if v and w are in the same connected component of G'.
Hence, g' : V(G') → X defines a bundling of the set of connected components of G'.
• Apply Lemma 32 to the transduction T_d, yielding a transduction T'_d. Our transduction T_{d+1} now applies T'_d to the bundling given by g', resulting in a bundling of the family of colored trees t_C, for C ranging over the connected components of G'.
• Using extension, mark the roots of the trees t_C with a new unary predicate; for C ranging over the connected components of G' these are exactly the elements that do not have a parent. Create new edges which join each such root r with g'(r). In effect, for every connected component C of G', all the roots of the trees t_C are appended to a new root r_C. At the end clear all unnecessary relations from the structure. Note that the obtained tree t_G retains all unary predicates W_1, . . . , W_d that were introduced by the application of the transduction T'_d to G', as well as the predicate W_0 introduced at the very beginning. All these predicates select subsets of leaves of t_G.
This concludes the description of the almost quantifier-free transduction T_{d+1}. The transduction B_{d+1} is defined similarly, and reconstructs G out of t_G recursively as follows:
• Let r be the root of t_G; it can be identified as the only vertex that does not have a parent. Remove r from the structure, thus turning t_G into a forest t'_G, where the roots of t'_G are children of r in t_G.
• Using function extension, add a function f which maps every vertex v to its unique root ancestor in t'_G. This can be done by taking f to be the d-fold composition of the parent function of t'_G with itself (assuming each root points to itself, which can be easily interpreted).
• Copy all the roots of trees in t'_G and let X be the set of those copies. Construct a function f' : V(t'_G) → X that maps each vertex v to the copy of f(v). Observe that f' defines a bundling of the trees of t'_G.
• Apply the transduction B'_d obtained from Lemma 32 to the above bundling. This yields a bundling of the family of connected components of G', where G' is obtained from G by flipping the adjacency within W_0.
• Forgetting all elements of the structure apart from the bundled connected components of G' yields the graph G'. Construct the graph G by flipping the adjacency inside the set W_0. Note here that since the remaining vertices are exactly the leaves of the original tree t_G, the predicate W_0 is still carried by them. Finally, clean the structure from all unnecessary predicates.
It is straightforward to see that transductions T d and B d satisfy all the requested properties. This concludes the proof of Lemma 31.
Proposition 18 for classes with low shrubdepth covers
We now prove Proposition 18 in the general case. As noted earlier, this will finish the proof of the right-to-left implication in Theorem 15.
Proof (of Proposition 18). Let C be a class of graphs with low shrubdepth covers.
We fix a finite 2-cover U of C such that C[U] has bounded shrubdepth. Let N = sup{|U_G| : G ∈ C}, and for G ∈ C let Ĝ be the extension of G by unary predicates U_1, . . . , U_N marking the sets of the cover U_G (ordered arbitrarily). We apply Lemma 31 to the class C[U], yielding almost quantifier-free transductions T and B. It is easy to construct an almost quantifier-free transduction S' such that for G ∈ C, the structure S'(Ĝ) is the union of the trees T_U ∈ T(G[U]), one tree per each U ∈ U_G, where the union is disjoint apart from the vertices which belong to V(G) (the leaves of the trees). Indeed, we process U_1, . . . , U_N in order, and for each consecutive U_i we apply the transduction T to G[U_i], appropriately modifying all its atomic operations so that the elements outside of U_i are ignored and kept intact. Recall that all the constructed trees have depth bounded by a constant, say d. Now obtain S from S' by precomposing with a sequence of unary lifts introducing the predicates U_1, . . . , U_N, and appending the following operations. First, using extension operations, introduce unary predicates D_{i,ℓ} for i ∈ {1, . . . , N} and ℓ ∈ {0, 1, . . . , d} such that D_{i,ℓ} selects the nodes at depth ℓ in the tree T_{U_i}. Next, use an extension operation that introduces an adjacency relation binding every pair of elements u, v such that f(u) = v for some function f in the signature (the parent functions). Finally, use a sequence of reduct operations which drop all functions and non-unary relations from the signature, apart from adjacency. Thus every output of S is a colored graph.
Let F = S(C ). By Lemma 17, F has low shrubdepth covers. Furthermore, each graph H ∈ S(G) for some G ∈ C is the union of at most N trees, hence H is N -degenerate and in particular excludes the biclique K N +1,N +1 as a subgraph.
Hence by Lemma 7 we infer that S(C ) has low treedepth covers, so by Lemma 13, S(C ) is a class of bounded expansion.
We are left with constructing a deterministic almost quantifier-free transduction I satisfying I(S(G)) = {G}. This transduction should take on input a graph H ∈ S(G) and turn it back into G. The vertex set of H consists of V(G) together with the internal nodes of the trees T_U for U ∈ U_G, each tree built on top of the subset U of V(G) and of depth at most d. Using the predicates D_{i,ℓ} it is easy to use a sequence of quantifier-free function extension operations to construct, for each U ∈ U_G, the parent function of T_U, thus turning the substructure induced by the nodes of T_U back into T_U. Similarly as before, it is now straightforward to construct a transduction I' that applies the transduction B to each colored tree T_U, thus turning the set of its leaves into G[U]. Since U was a 2-cover, for every edge e of G there exists U ∈ U_G that contains both endpoints of e. Hence, applying I' to the current structure recovers the graph G; this concludes the construction of I. Note that I is deterministic almost quantifier-free.
Proof of Proposition 19 (quantifier elimination for classes of bounded expansion)
In this section we prove Proposition 19, which we repeat for convenience.
Proposition 19. Let C be a class of graphs of bounded expansion and let I be a transduction. Then I is equivalent to an almost quantifier-free transduction J on C .
We note that Proposition 19 is a strengthening of similar statements provided by Dvořák et al. [9] and of Grohe and Kreutzer [21], and could be derived by a careful analysis of their proofs, and by using the Lemma 33 below.
For a graph G and a partial function f : V(G) ⇀ V(G), we say that f is guarded by G if every vertex in the domain of f is mapped to itself or to one of its neighbors. Lemma 33 ( ). Let C be a class of graphs which has 2-covers of bounded treedepth, and for each G ∈ C, let Ĝ be the graph G extended by a partial function f : V(G) ⇀ V(G) which is guarded by G. Then there is an almost quantifier-free transduction F using only unary lifts and a single function extension such that Ĝ ∈ F(G).
To derive Proposition 19 from [9], one would need to prove that the unary functions constructed in their proofs can be obtained as compositions of guarded functions, and conclude using Lemma 33. Rather than doing that, below we provide a self-contained proof of Proposition 19, which we also believe is simpler than the existing proofs, among other reasons thanks to the notion of covers. In Section 6.1 we outline how the result of Dvořák, Král', and Thomas can be deduced from our proof.
We will use the following restricted form of transductions. A faithful transduction is a transduction which does not use copying and restrictions. A guarded transduction is a faithful transduction which given a structure A, produces a structure whose Gaifman graph is a subgraph of the Gaifman graph of A. In the following lemmas, we identify a first-order formula ϕ(x) with the transduction which inputs a structure A and outputs A extended with a single relation, consisting of those tuplesā which satisfy ϕ(x) in A (this transduction is a composition of an extension operation followed by a sequence of reduct operations which drop all the symbols from the input structure).
Lemma 34. Let ϕ(x̄) be a first-order formula and let C be a class of graphs of bounded expansion. Then there is a guarded transduction I which adds unary function and relation symbols only, and a quantifier-free formula ϕ'(x̄), such that ϕ is equivalent to I; ϕ' on C.
Before proving Lemma 34, we first show how to conclude Proposition 19 using it.
Proof (of Proposition 19). For simplicity we assume that the signature produced by I consists of one relation P; lifting the proof to signatures containing more relation and function symbols is immediate. By Lemma 2, we may express I as I = L; C; E; X; R, where
• L is a sequence of unary lifts,
• C is a sequence of copying operations,
• E is a single extension operation introducing the final relation P using some formula ϕ(x̄),
• X is a single universe restriction operation using some formula ψ(x) that does not use the symbol P, and
• R is a sequence of reduct operations that drop all relations and functions apart from P.
From Lemma 8 it follows that the class C(L(C )) of colored graphs is a class of bounded expansion, and therefore, we may apply Lemma 34 to it, and to the formulas ϕ(x) and ψ(x) considered above. Using Lemma 34 we replace the formulas ϕ(x) and ψ(x) by quantifier-free formulas, at the cost of introducing additional guarded transductions which introduce unary function and relation symbols. Using Lemma 33, every such transduction is equivalent to an almost quantifier-free transduction. Hence, the transductions E and X can be replaced in I by almost quantifier-free transductions, yielding an almost quantifier-free transduction J that is equivalent to I on C .
As explained, Proposition 19 together with Proposition 18 yields Theorem 16. It remains to prove Lemma 34. Similarly as in [9,21], we first prove the statement for classes of colored forests of bounded depth: Lemma 35 ( ). Let ϕ(x̄) be a first-order formula and let F be a class of colored rooted forests of bounded depth. Then there is a transduction I_ϕ which, given a rooted forest F ∈ F, extends it by the parent function of F and some unary predicates, and there exists a quantifier-free formula ϕ'(x̄) such that ϕ is equivalent to I_ϕ; ϕ' on F.
Let us remark that the presented proof of Lemma 35 is based on the automata approach and is conceptually different from the ones used in [9,21]. Note that the transduction I ϕ produced in Lemma 35 is in particular a guarded transduction, since the parent of a vertex in a forest is in particular a neighbor of that vertex. The next step is to lift Lemma 35 to classes of structures of bounded treedepth. We first observe that classes of bounded treedepth are bi-definable with classes of forests of bounded depth, using almost quantifier-free transductions. This result is similar, but much simpler to prove than Lemma 31, which is an analogous statement for classes of bounded shrubdepth.
Lemma 36. Let C be a class of structures of bounded treedepth. There is a pair of faithful transductions T and C and a class F of colored rooted forests of bounded depth such that T(C ) ⊆ F , C(F ) ⊆ C and C(T(A)) = {A} for A ∈ C . Moreover, the transduction T is guarded, and C is deterministic almost quantifier-free.
Proof. We follow the well-known encoding of structures of bounded treedepth inside colored forests, where a structure A ∈ C is encoded in a depth-first search forest of its Gaifman graph, as follows.
A depth first-search (DFS) forest of a graph G is a rooted forest F which is a subgraph of G, such that every edge of G connects an ancestor with a descendant in F .
It is known that a graph G of treedepth at most d has a DFS forest of depth at most 2^d. If A is a structure over a fixed signature Σ, G is its Gaifman graph and F is a DFS forest of G of depth 2^d, then A can be encoded in F using a bounded number of additional unary predicates by labeling every node v of F by the isomorphism type of the substructure of A induced by v_1, . . . , v_t, where v_1, . . . , v_t are the nodes on the path from a root of F to v, v = v_t and t ≤ 2^d. The number of used unary predicates depends only on the signature Σ and d.
If C is a class of structures of treedepth at most d, then the transduction T, given a structure A ∈ C, outputs a DFS forest F of the Gaifman graph of A of depth at most 2^d, extended with unary predicates encoding A, as described above. The structure A can be recovered from F (together with the unary predicates) using a deterministic almost quantifier-free transduction, which first introduces the parent function, and then uses a quantifier-free formula to determine the quantifier-free type of a tuple of vertices.
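The encoding used in this proof can be mimicked concretely for plain graphs: compute a DFS forest and label every node by its adjacency pattern towards the vertices on its root-to-node path (for general structures one would record the full isomorphism type instead). A minimal sketch (our own illustration):

```python
def dfs_forest_with_labels(V, E):
    """Returns parent pointers of a DFS forest (so every edge of G joins an ancestor
    with a descendant) and, for each vertex, the tuple of booleans recording which
    of its ancestors on the root path it is adjacent to."""
    adj = {v: set() for v in V}
    for e in E:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    parent, label, visited = {}, {}, set()

    def dfs(v, path):
        visited.add(v)
        label[v] = tuple(a in adj[v] for a in path)
        for w in sorted(adj[v]):
            if w not in visited:
                parent[w] = v
                dfs(w, path + [v])

    for r in sorted(V):
        if r not in visited:
            parent[r] = None
            dfs(r, [])
    return parent, label

# A triangle: the DFS path is 0-1-2, and 2 is adjacent to both of its ancestors.
V, E = {0, 1, 2}, {frozenset(p) for p in [(0, 1), (1, 2), (0, 2)]}
parent, label = dfs_forest_with_labels(V, E)
print(parent, label)  # {0: None, 1: 0, 2: 1} {0: (), 1: (True,), 2: (True, True)}
```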
Using Lemma 36 we easily lift the quantifier-elimination result from forests of bounded depth to classes of low treedepth.
Lemma 37. Let ϕ(x̄) be a first-order formula and let C be a class of structures of bounded treedepth. Then there is a guarded transduction I_ϕ and a quantifier-free formula ϕ'(x̄) such that ϕ is equivalent to I_ϕ; ϕ' on C.
Proof. Let C, T and F be as in Lemma 36. Since C(T(A)) = {A} and C is deterministic, there is a formula ψ(x̄) such that ϕ is equivalent to T; ψ on C. Now, apply Lemma 35 to the class F and the formula ψ(x̄), yielding a guarded transduction J and a quantifier-free formula ψ'(x̄), such that ψ is equivalent to J; ψ' on F. By composition, ϕ is equivalent to T; J; ψ' on C. Note that T; J is a guarded transduction, since T and J are such. This proves the lemma.
Finally, we lift the quantifier elimination procedure to classes with low shrubdepth covers using Lemma 20 and a reasoning very similar to the proof of Lemma 17. Again, conceptually this lift is exactly what is happening in [9,21], however, our approach based on covers makes it quite straightforward. The key observation is encapsulated in the following lemma.
Lemma 38. Let D be a class of structures with unary relation and function symbols only, and let ϕ(x̄) be a quantifier-free formula with p free variables, involving c distinct terms. Then there is a quantifier-free formula ϕ'(x̄) such that the following conditions are equivalent for a structure A ∈ D, a (c · p)-cover U_A of the Gaifman graph of A, and a p-tuple ā of elements of A:
1. A |= ϕ(ā);
2. A[U] |= ϕ'(ā) for some U ∈ U_A containing all elements of ā.
Proof. We first consider the special case when ϕ(x̄) is an atomic formula. Each term t occurring in ϕ(x̄) defines a partial function t_A : V(A) ⇀ V(A) on a given structure A, in the natural way. Let T denote the set of terms occurring in ϕ(x̄). By assumption, |T| ≤ c. For a tuple ā = (a_1, . . . , a_p) of elements of a structure A, denote by T_A(ā) the set consisting of a_1, . . . , a_p together with all elements of the form t_A(a_i) for t ∈ T and i ∈ [p], whenever defined. Since ϕ(x̄) is an atomic formula, for any p-tuple ā of elements of A and any set U ⊆ V(A) containing T_A(ā) we have the following equivalence: A |= ϕ(ā) if and only if A[U] |= ϕ(ā). Take ϕ'(x̄) = ϕ(x̄). The equivalence of the two items then follows by the assumption that U_A is a (p · c)-cover of A, so for every ā there is some set U ∈ U_A containing T_A(ā).
To treat the general case of a quantifier-free formula, we take ϕ'(x̄) to be a conjunction of ϕ(x̄) and a formula which verifies that all the values in T_A(ā) are defined. We leave the details to the reader.
We are ready to prove Lemma 34.
Proof (of Lemma 34). The proof proceeds by induction on the structure of the formula ϕ(x). In the base case, ϕ(x) is a quantifier-free formula, so we may take I to be the identity transduction.
In the inductive step, we consider two cases. If ϕ(x) is a boolean combination of simpler formulas, then the statement follows immediately from the inductive assumption. The interesting case is when ϕ(x) is of the form ∃y.ψ(x, y), for some formula ψ(x, y). We consider this case below. Denote by p the number of free variables in the formula ψ(x, y).
Apply the inductive assumption to the formula ψ(x̄, y), yielding a guarded transduction I_ψ and a formula ψ'(x̄, y). Let c be the number of distinct terms (including subterms) appearing in the formula ψ'(x̄, y). Let D = I_ψ(C). Note that every structure in D has unary function and relation symbols only, and is guarded by some graph in C. By Lemma 13, we can pick a finite (c · p)-cover U of C, so that the class C[U] has bounded treedepth. As I_ψ is guarded, it follows that also the class D[U] has bounded treedepth.
Apply Lemma 38 to D and ψ'(x̄, y), yielding a formula ψ''(x̄, y) such that for every graph G ∈ C, every p-tuple of vertices (ā, b) and the (c · p)-cover U_G of G, the following equivalence holds: I_ψ(G) |= ψ'(ā, b) if and only if I_ψ(G)[U] |= ψ''(ā, b) for some U ∈ U_G containing ā and b. Apply Lemma 37 to the class D[U] and the formula ∃y.ψ''(x̄, y), yielding a guarded transduction F and a quantifier-free formula ρ(x̄) such that for every A ∈ D[U] and tuple ā ∈ V(A)^{|x̄|}, A, ā |= ∃y.ψ''(x̄, y) ⟺ F(A), ā |= ρ(x̄).
Claim 1. For each graph G ∈ C and tuple ā ∈ V(G)^{|x̄|}, the following conditions are equivalent:
1. G, ā |= ϕ(x̄), i.e., I_ψ(G), ā |= ∃y.ψ'(x̄, y);
2. F(I_ψ(G)[U]), ā |= ρ(x̄) for some U ∈ U_G containing all elements of ā.
Proof. We have the following equivalences: I_ψ(G), ā |= ∃y.ψ'(x̄, y) holds if and only if I_ψ(G)[U], (ā, b) |= ψ''(x̄, y) for some U ∈ U_G containing ā and some b ∈ U, which holds if and only if I_ψ(G)[U], ā |= ∃y.ψ''(x̄, y) for some U ∈ U_G containing ā, which, by the choice of F and ρ, holds if and only if F(I_ψ(G)[U]), ā |= ρ(x̄) for some U ∈ U_G containing ā. This proves the claim.
Let N = sup{|U G | : G ∈ C }. For each graph G ∈ C , fix an enumeration U 1 , . . . , U N of the cover U G .
Claim 2.
There is a guarded transduction F' and quantifier-free formulas ρ_1(x̄), . . . , ρ_N(x̄) such that, given a graph G ∈ C, a number i ∈ {1, . . . , N} and a tuple ā of elements of U_i, we have F(I_ψ(G)[U_i]), ā |= ρ(x̄) if and only if F'(G), ā |= ρ_i(x̄).
Proof. We construct a guarded transduction F' which, given a graph G ∈ C, first applies the guarded transduction I_ψ, then introduces unary predicates marking the sets U_1, . . . , U_N, and then, for each such unary predicate U_i, applies to the structure I_ψ(G)[U_i] the transduction F, modified so that each function symbol f is replaced by a new function symbol f_i.
Then the formula ρ i (x) is obtained from the formula ρ(x), by replacing each function symbol f by the function symbol f i .
Combining Claim 1 and Claim 2 we get the following equivalence: G, ā |= ϕ(x̄) if and only if F'(G), ā satisfies the quantifier-free formula ⋁_{i=1}^{N} (U_i(x_1) ∧ · · · ∧ U_i(x_{|x̄|}) ∧ ρ_i(x̄)), concluding the inductive step. This finishes the proofs of Lemma 34 and Proposition 19.
Effectivity
As a side remark, we note that we can easily derive the result of Dvořák, Král', and Thomas, by observing that the above proof of Lemma 34 is effective, and can be leveraged to construct a transduction I which is a linear time computable function.
We say that a transduction I is a linear time transduction if there is an algorithm which, given a structure A as input, produces some structure B ∈ I(A) in linear time.
Here, the structure A is represented using the adjacency list representation, i.e., for a colored graph, the size of the description is linear in the sum of the number of vertices and the number of edges in the graph.
We show the following, effective variant of Lemma 34.
Lemma 39. Let ϕ(x̄) be a first-order formula and let C be a class of graphs of bounded expansion. Then there is a guarded transduction I which adds unary function and relation symbols only, and a quantifier-free formula ϕ'(x̄), such that ϕ is equivalent to I; ϕ' on C. Moreover, I is a linear time transduction.
Proof. To prove Lemma 39, we observe that the transduction I in Lemma 34 is a linear time transduction. The proof follows by tracing the proof of Lemma 34, and observing the following.
1. In Lemma 35, the constructed transduction I_ϕ is a linear time transduction. This is because the transduction only adds the parent function (which is clearly linear-time computable, given a rooted forest) and some unary predicates, each of which can be computed in linear time, since each unary predicate is produced by running a deterministic threshold tree automaton on the input tree.
2. In Lemma 36, the transduction T is a linear time transduction, since it amounts to running a depth-first search on the input graph.
3. In Lemma 37, the produced transduction I_ϕ = T; J is a linear time transduction, as a composition of two linear time transductions.
4. In the proof of Lemma 34, the nontrivial step is the inductive step, in the case of an existential formula. In this case, the constructed transduction F' is a linear time transduction, assuming C has bounded expansion, as F' amounts to introducing unary predicates denoting the elements of a cover U_G, and applying the transductions I_ψ and F, which are linear time transductions, respectively, by the inductive assumption and by the effective version of Lemma 37 discussed above.
We note that if C has bounded expansion then for any fixed p ≥ 0 there is a finite p-cover U of C of bounded treedepth such that U_G can be computed from a given G ∈ C in time f(p) · |V(G)|, for some function f depending on C (the function f may not be computable). To compute U_G, we may first compute a g(p)-treedepth coloring of G for some function g (as required in the proof of Lemma 13) and observe that it can be converted to a cover in linear time, as in the proof of Lemma 13. A p-treedepth coloring can be computed in linear time, cf. [8,30,31].
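The conversion from a p-treedepth coloring to a p-cover mentioned here is simple: one cover set per set of at most p colors. A minimal sketch (our own illustration; it assumes the given coloring already has the p-treedepth property):

```python
from itertools import combinations

def cover_from_coloring(V, color, p):
    """One cover set U_S = color^{-1}(S) per set S of exactly min(p, #colors) colors.
    Any p vertices use at most p colors, hence lie in a common U_S, and each G[U_S]
    has treedepth at most p by the p-treedepth-coloring property (assumed).
    The number of cover sets is bounded by a constant depending only on p and
    the number of colors."""
    colors = sorted(set(color.values()))
    k = min(p, len(colors))
    return [{v for v in V if color[v] in S} for S in combinations(colors, k)]

# 6 vertices coloured with 3 colours, p = 2: C(3, 2) = 3 cover sets.
V = set(range(6))
color = {v: v % 3 for v in V}
for U in cover_from_coloring(V, color, 2):
    print(sorted(U))
# [0, 1, 3, 4] / [0, 2, 3, 5] / [1, 2, 4, 5]
```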
Algorithmic aspects
In this section we give a preliminary result about efficient computability of transductions on classes with structurally bounded expansion. When we refer to the size of a structure in the algorithmic context, we refer to its total size, i.e., the sum of its universe size and the total sum of sizes of tuples in its relations.
Call a class C of graphs of structurally bounded expansion efficiently decomposable if there is a finite 2-cover U of C and an algorithm that, given a graph G ∈ C , in linear time computes the cover U G and for each U ∈ U G , an SC-decomposition S U of depth at most d of the graph G[U ], for some constant d depending only on C . Our result is as follows.
Theorem 40. Suppose J is a deterministic transduction and C is a class of graphs that has structurally bounded expansion and is efficiently decomposable. Then given a graph G ∈ C , one may compute J(G) in time linear in the size of the input plus the size of the output.
We remark that instead of efficient decomposability we could assume that the 2-cover U_G of a graph G and the corresponding SC-decompositions for all U ∈ U_G are given together with G as input. If only the cover is given but not the SC-decompositions, we would obtain cubic running time, because bounded shrubdepth implies bounded cliquewidth and we can compute an approximate clique decomposition in cubic time [32]. Then, SC-decompositions of small height are definable in monadic second-order logic, and hence they can be computed in linear time using the result of Courcelle, Makowsky and Rotics [3].
Observe that the theorem implies that we can efficiently evaluate a first-order sentence and enumerate all tuples satisfying a formula ϕ(x 1 , . . . , x k ) on the given input graph, since this amounts to applying the theorem to a transduction consisting of a single extension operation. This strengthens the analogous result of Kazana and Segoufin [25] for classes of bounded expansion.
Proof (sketch). We will make use of transductions S and I constructed in the proof of Proposition 18. Recall that S(C ) is a class of colored graphs of bounded expansion, I is deterministic, and I(S(G)) = {G} for each G ∈ C . Observe that J is equivalent to S; I; J on C . Defining K as I; J, we get that J(G) = K(S(G)) for G ∈ C . Moreover, since I is deterministic, it follows that K is deterministic.
Let G ∈ C be an input graph. By efficient decomposability of C , in linear time we can compute a cover U G of G together with an SC-decomposition S U of depth at most d of G[U ], for U ∈ U G . Each S U is a colored tree, and by the construction described in the proof of Proposition 18, the trees S U for U ∈ U G , glued along the leaves form a structure belonging to S(G). As J(G) = K(S(G)), it suffices to apply the enumeration result of Kazana and Segoufin for classes of bounded expansion [25] to the colored graph S(G) and to all formulas occurring in the transduction K.
Conclusion
In this paper we have provided a natural combinatorial characterization of graph classes that are first-order transductions of bounded expansion classes of graphs. Our characterization parallels the known characterization of bounded expansion classes by the existence of low treedepth decompositions, by replacing the notion of treedepth by shrubdepth. We believe that we have thereby taken a big step towards solving the model-checking problem for first-order logic on classes of structurally bounded expansion.
On the structural side we remark that transductions of bounded expansion graph classes are just the same as transductions of classes of structures of bounded expansion (i.e., classes whose Gaifman graphs or whose incidence encodings have bounded expansion). On the other hand, it remains an open question to characterize classes of relational structures, rather than just graphs, which are transductions of bounded expansion classes. We are lacking the analogue of Lemma 31; the problem is that within the proof we crucially use the characterization of shrubdepth via SC-depth, which works well for graphs but is unclear for structures of higher arity.
Finally, observe that classes of bounded expansion can be characterized among classes with structurally bounded expansion as those which are bi-clique free. It follows that every monotone (i.e., subgraph-closed) class of structurally bounded expansion has bounded expansion. Exactly the same statement characterizes bounded treedepth among classes of bounded shrubdepth, and an analogous statement holds for treewidth versus cliquewidth. In particular, for monotone graph classes all of these pairs of notions collapse.
We do not know how to extend our results to nowhere dense classes of graphs, mainly due to the fact that we do not know whether there exists a robust quantifier-elimination procedure for these graph classes.
A Normalization lemmas for transductions
In this section we give proofs omitted from Section 2.1.
Proof (of Lemma 2 and of Lemma 3). We give appropriate swapping rules that allow us to arrange the atomic operations comprising I into the desired normal form.
We start with putting all the unary lifts at the front of the sequence. Observe that whenever an atomic operation is followed by a unary lift, then these two operations may be appropriately swapped. This is straightforward for all atomic operations apart from copying. For this last case, observe that copying followed by a unary lift introducing a unary predicate X is equivalent to a transduction that does the following. First, using unary lifts introduce two auxiliary unary predicates X 1 and X 2 , interpreted to select vertices that are supposed to be selected by X in the original universe, respectively in the copy of the universe. Then perform copying. Finally, use extension and reduct operations to appropriately interpret X and drop predicates X 1 , X 2 .
Having applied the above swapping rules exhaustively, the transduction is rewritten into the form L; I′, where I′ does not contain any lifts. Observe that if I was almost quantifier-free, then I′ is deterministic almost quantifier-free. This proves Lemma 3.
Next, we perform swapping within I′ so that all copying operations are put at the front of the sequence of atomic operations. Again, it suffices to show that whenever an atomic operation is followed by copying, then the two operations may be swapped. For reducts this is obvious, while for extensions and restrictions one should modify the formula parameterizing the operation in a straightforward way to work on each copy separately. Thus we have rewritten I into the form L; C; I′′, where I′′ does not use lifts or copying. Now consider I′′. It is clear that all reduct operations can be moved to the end of the transduction, since it does not harm to have more relations in the structure. Next, we move all restriction operations to the end (before reduct operations) by showing that each restriction operation can be swapped with any extension or function extension operation. Suppose that the restriction is parameterized by a unary formula ψ, and it is followed by an extension operation (normal or function), say parameterized by a formula ϕ. Then the two operations may be swapped provided we appropriately relativize ϕ as follows: add guards to all quantifiers in ϕ so that they run only over elements satisfying ψ, and for every term τ used in ϕ add guards to check that all the intermediate elements obtained when evaluating τ satisfy ψ.
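For illustration only (this rendering is ours, not part of the original argument), relativizing a subformula θ of ϕ with respect to ψ rewrites quantifiers and guards the intermediate values of terms as follows:

```latex
\[
  \exists y\, \theta \;\rightsquigarrow\; \exists y\,\bigl(\psi(y) \wedge \theta\bigr),
  \qquad
  \forall y\, \theta \;\rightsquigarrow\; \forall y\,\bigl(\psi(y) \rightarrow \theta\bigr),
\]
\[
  \theta\bigl(f(g(x))\bigr) \;\rightsquigarrow\;
  \psi\bigl(g(x)\bigr) \wedge \psi\bigl(f(g(x))\bigr) \wedge \theta\bigl(f(g(x))\bigr),
\]
```

where the last line shows the guards added for a term f(g(x)) whose intermediate value g(x) must also satisfy ψ.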
Applying these swapping rules exhaustively rewrites I′′ into the form I′′′; X; R, where I′′′ is a sequence of extension and function extension operations, X is a sequence of restriction operations, and R is a sequence of reduct operations. We now argue that the sequence X can be replaced with a single restriction operation. It suffices to show how to do this for two consecutive restriction operations, say parameterized by ψ1 and ψ2, respectively. Then we may replace them by one restriction operation parameterized by ψ1 ∧ ψ2′, where ψ2′ is obtained from ψ2 by relativizing it with respect to ψ1 just as in the previous paragraph.
We are left with treating the extension and function extension operations within I′′′. Whenever a formula ϕ parameterizing some extension or function extension operation within I′′′ uses a relation symbol R introduced by some earlier extension operation within I′′′, say parameterized by formula ϕ′, then replace all occurrences of R in ϕ with ϕ′. Similarly, if ϕ uses some function f that was introduced by some earlier function extension operation within I′′′, say using formula ϕ′(x, y), then replace each usage of f in ϕ by appropriately quantifying the image using formula ϕ′(x, y). Perform the same operations on the formula parameterizing the restriction operation X.
Having performed exhaustively the operations above, the formulas parameterizing all atomic operations in I′′′; X use only relations and functions that appear originally in the structure or were added by L; C. Hence, all extension and function extension operations within I′′′ which introduce symbols that are later dropped in R can be simply removed (together with the corresponding reduct operation). It now remains to observe that all atomic operations within I′′′ commute, so they can be sorted: first function extensions, then (normal) extensions.
B Proof of Lemma 7
In this section we prove Lemma 7. One implication is easy: it is known [17] that every class of bounded treedepth also has bounded shrubdepth, and moreover the bi-clique K s,s has treedepth s + 1, so every class of bounded treedepth excludes some bi-clique.
We need to prove the reverse implication: any class of bounded shrubdepth that moreover excludes some bi-clique has bounded treedepth. We will use the following well-known characterization of classes of bounded treedepth (see [31,Theorem 13.3]).
Lemma B.41. A class of graphs C has bounded treedepth if and only if there exists a number d ∈ N such that no graph from C contains a path on more than d vertices as a subgraph.
By Lemma B.41 and Proposition 6(3), to prove Lemma 7 it is sufficient to prove the following.
Lemma B.42. There exists a function g : N × N × N → N such that the following holds. For all integers h, m, s ∈ N, if a graph G does not contain the bi-clique K s,s as a subgraph and admits a connection model of height at most h using at most m labels, then G does not contain any path on more than g(h, m, s) vertices as a subgraph.
Proof. We proceed by induction on the height h. For h = 0, only one-vertex graphs admit a connection model of height 0, so we may set g(0, m, s) = 1.
For the induction step, suppose G does not contain K s,s as a subgraph and admits a connection model T of height h ≥ 1 using m labels. Call two vertices u and v of G related if they are contained in the same subtree of T rooted at a child of the root of T, and unrelated otherwise. Whenever u and v are unrelated, their least common ancestor is the root of T , so whether they are adjacent depends solely on the pair of their labels.
Let P = (v 1 , . . . , v p ) be a path in G. A block on P is a maximal contiguous subpath of P consisting of vertices that are pairwise related. Thus, P breaks into blocks B 1 , . . . , B q , appearing on P in this order. Note that each block B i is a path that is completely contained in an induced subgraph of G that admits a connection model of height h − 1 and using m labels. Hence, by the induction hypothesis we have that each block B i has at most g(h − 1, m, s) vertices.
For a non-last block B i (i.e., i < q), define the signature of B i as the pair of labels of the following two vertices: the last vertex of B i and its successor on P , that is, the first vertex of B i+1 . The following claim is the key point of the proof.
Claim 3. For any signature, the number of non-last blocks with this signature is at most 4(s − 1).
Proof. Let σ = (λ 1 , λ 2 ) be the signature in question and let B be the set of blocks with signature σ; suppose for the sake of contradiction that |B| > 4(s − 1). Consider the following random experiment: independently color each subtree of T rooted at a child of the root black or white, each with probability 1/2. Call a block B i ∈ B split if the last vertex of B i is white and the first vertex of B i+1 is black. Since these two vertices are unrelated (by the maximality of B i ), each block B i is split with probability 1/4, implying that the expected number of split blocks is |B|/4 > s − 1. Hence, some run of the experiment yields a white/black coloring of subtrees rooted at children of the root of T and a set S ⊆ B of s blocks that are split in this coloring.
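The averaging step can be written out explicitly (a standard first-moment argument); with S denoting the random number of split blocks, linearity of expectation gives

```latex
\[
  \mathbb{E}[S] \;=\; \sum_{B_i \in B} \Pr[\,B_i \text{ is split}\,]
  \;=\; \frac{|B|}{4} \;>\; s - 1,
\]
```

and since S is an integer, some outcome of the experiment satisfies S ≥ s.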
Let u 1 , . . . , u s be the last vertices of blocks from S and v 1 , . . . , v s be their successors on the path P , respectively. By assumption, all vertices u i have label λ 1 and all vertices v i have label λ 2 . Further, all vertices u i are white and all vertices v i are black, implying that u i and v j are unrelated for all i, j ∈ [s]. Since u i is unrelated and adjacent to v i , it follows that u i is adjacent to all vertices v j , j ∈ [s], as these vertices are also unrelated to u i and have the same label as v i . We conclude that u 1 , . . . , u s and v 1 , . . . , v s form a bi-clique K s,s in G, a contradiction.
Since a signature is a pair of labels, there are at most m² possible signatures, so Claim 3 implies that the number of non-last blocks is at most 4(s − 1)m², and hence q ≤ 4(s − 1)m² + 1. As each block has at most g(h − 1, m, s) vertices, the path P has at most (4(s − 1)m² + 1) · g(h − 1, m, s) vertices, so we may set g(h, m, s) = (4(s − 1)m² + 1) · g(h − 1, m, s). This concludes the inductive proof.
C Proof of Lemma 13
Proof (of Lemma 13). We will prove that a graph class C has low treedepth colorings if and only if it has low treedepth covers. The result then follows from Theorem 9.
We start with the left-to-right direction. Assume C has low treedepth colorings. Then for every graph G ∈ C and p ∈ N we may find a vertex coloring γ : V (G) → [N ] using N colors where every i ≤ p color classes induce in G a subgraph of treedepth at most i; here, N depends only on p and C . Assuming without loss of generality that N ≥ p, define a p-cover U_G of size at most N^p as follows: U_G = { γ^{−1}(X) : X ⊆ [N ], |X| = p }. Indeed, every set of at most p vertices uses at most p colors and is hence contained in γ^{−1}(X) for some X with |X| = p, and each G[γ^{−1}(X)] has treedepth at most p. Then U = (U_G)_{G∈C} is a finite p-cover of C of bounded treedepth.
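For concreteness, the construction of U_G from the coloring γ can be sketched in a few lines of Python. This is an illustration only; the helper name and the representation of γ as a dictionary from vertices to colors in range(N) are our assumptions:

```python
from itertools import combinations

def cover_from_coloring(gamma, N, p):
    """Build the p-cover U_G = { gamma^{-1}(X) : X a set of exactly p colors }.

    gamma: dict mapping each vertex to its color in range(N), with N >= p.
    Every set of at most p vertices uses at most p colors and is therefore
    contained in one of the returned vertex sets.
    """
    cover = []
    for X in combinations(range(N), p):
        color_set = set(X)
        cover.append({v for v, c in gamma.items() if c in color_set})
    return cover

# Tiny usage example: a 4-vertex graph colored with 3 colors, p = 2.
gamma = {0: 0, 1: 1, 2: 2, 3: 0}
U_G = cover_from_coloring(gamma, N=3, p=2)  # three sets, one per pair of colors
```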
Conversely, suppose that every graph G ∈ C admits a p-cover U_G of size at most N where G[U ] has treedepth at most d for each U ∈ U_G; here, N and d depend only on p and C . Define a coloring χ : V (G) → P(U_G) as follows: for v ∈ V (G), let χ(v) be the set of those U ∈ U_G for which v ∈ U . Thus, χ is a coloring of V (G) with at most 2^N colors. Take any p color classes of χ, say with colors X_1, . . . , X_p ⊆ U_G, each used by at least one vertex. Picking one vertex from each of these classes yields a set of at most p vertices, which by the p-cover property is contained in some U ∈ U_G; hence U ∈ X_1 ∩ . . . ∩ X_p. Consequently, every vertex whose color is one of X_1, . . . , X_p belongs to U , so the subgraph induced by these p color classes is an induced subgraph of G[U ], whereas the latter graph has treedepth at most d by the assumed properties of U_G. We conclude that every p color classes in χ induce a subgraph of treedepth at most d.
It remains to refine this coloring so that we in fact obtain a coloring such that every i ≤ p color classes induce a subgraph of treedepth at most i. As every p color classes in χ induce a subgraph of treedepth at most d, we can fix for every p color classes I of χ a treedepth decomposition Y I of height at most d. We define the coloring ξ such that every vertex v gets the color {(I, h I ) : I is a subset of p color classes containing v and h I is the depth of v in the decomposition Y I }. Note that since the number of colors of χ is finite, the number of colors used by ξ is also finite.
We now prove that in the refined coloring, any i ≤ p colors of ξ induce a subgraph of treedepth at most i. Fix any i ≤ p colors of ξ and denote the tuple of colors by J. As ξ is a refinement of χ, there exists a tuple I of at most p colors of χ whose color classes contain all vertices of G[J]. Furthermore, the i selected colors of J are contained in i levels of the treedepth decomposition Y I . Taking the restriction of these i levels yields a forest of height at most i, which is a witness that G[J] has treedepth at most i.
D Proofs of Section 5.1
In this section we present the missing proofs of Section 5.1 as well as a second proof for Lemma 29.
D.1 Guided and guidable functions
Proof (of Lemma 24). For each connected component C of G we may find a guidance system U^C = {U^C_1, . . . , U^C_ℓ} that guides g|_C. Since g|_C is undefined for vertices outside of C, we may assume that U^C_i ⊆ V (C) for each i ∈ [ℓ]. It follows that g is guided by the guidance system U = {U_1, . . . , U_ℓ} defined by setting U_i to be the union of U^C_i over the connected components C of G.
Proof (of Lemma 25). Let U_i be a guidance system of size at most ℓ such that g_i is guided by U_i. Then U = ⋃_{i=1}^{s} U_i is a guidance system of size at most ℓ · s. It is easy to see that U guides the partial function g.
Proof (of Lemma 26). Let U be a guidance system of size at most such that f G is guided by U. For each vertex x such that f (x) is a neighbor of x, pick an arbitrary set V (x) ∈ U such that f (x) is the unique neighbor of x in V (x).
We now present an almost quantifier-free transduction that constructs f G . First, for each U ∈ U use a unary lift to introduce a unary predicate that selects the vertices of U . Next, introduce two unary predicates, Null and Self, which select the vertices x such that f (x) is undefined or f (x) = x, respectively. Finally, for each V ∈ U introduce a unary predicate G V that selects vertices x with V (x) = V . Now, for each U ∈ U, construct the partial function d U which maps every vertex x to its unique neighbor in U (if it exists) using the function extension operation parameterized by the formula E(x, y) ∧ U (y). Finally, construct f G using the function extension operation parameterized by the formula α(x, y) stating that x ∉ Null and either x ∈ Self and y = x, or x ∈ G V and y = d V (x) for some V ∈ U.
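Spelled out in our rendering (the precise shape of the formula is an assumption; the disjunction ranges over the finitely many sets of the guidance system U), the formula parameterizing the final function extension reads:

```latex
\[
  \alpha(x, y) \;:=\; \neg \mathrm{Null}(x) \,\wedge\,
  \Bigl[ \bigl(\mathrm{Self}(x) \wedge y = x\bigr) \;\vee\;
  \bigvee_{V \in \mathcal{U}} \bigl( G_V(x) \wedge y = d_V(x) \bigr) \Bigr].
\]
```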
D.2 Greedy proof of Lemma 29
We now present the second proof of Lemma 29. As asserted by Lemma 27, graphs from a fixed class of bounded shrubdepth do not admit arbitrarily long induced paths. We need a strengthening of this statement: classes of bounded shrubdepth also exclude induced structures that roughly resemble paths, as made precise next.
Definition D.43. Let G be a graph. A quasi-path of length ℓ in G is a sequence of vertices (u_1, u_2, . . . , u_ℓ) satisfying the following conditions: • u_i u_{i+1} ∈ E(G) for every i ∈ [ℓ − 1]; and • for every odd i ∈ [ℓ] and even j ∈ [ℓ] with j > i + 1, we have u_i u_j ∉ E(G).
Note that in a quasi-path we do not restrict in any way the adjacencies between u i and u j when i, j have the same parity, or even when i is odd and j is even but j < i − 1. We now prove that classes of bounded shrubdepth do not admit long quasi-paths; note that since an induced path is also a quasi-path, the following lemma actually implies Lemma 27.
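As a sanity check of the definition (an illustration of ours, not part of the proofs), the two conditions are easy to test programmatically, assuming the graph is given as a dictionary mapping each vertex to the set of its neighbors:

```python
def is_quasi_path(adj, seq):
    """Check whether seq = (u_1, ..., u_l) is a quasi-path in the graph adj.

    adj: dict mapping each vertex to a set of adjacent vertices.
    Conditions: consecutive vertices are adjacent, and u_i u_j is a non-edge
    whenever i is odd, j is even and j > i + 1 (indices are 1-based).
    """
    l = len(seq)
    # consecutive vertices must be adjacent
    if any(seq[k + 1] not in adj[seq[k]] for k in range(l - 1)):
        return False
    # forbidden chords: odd i, even j, j > i + 1
    for i in range(1, l + 1):
        for j in range(i + 2, l + 1):
            if i % 2 == 1 and j % 2 == 0 and seq[j - 1] in adj[seq[i - 1]]:
                return False
    return True

# Example: an induced path on 4 vertices is, in particular, a quasi-path.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert is_quasi_path(adj, (1, 2, 3, 4))
```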
Lemma D.44. For every class C of graphs of bounded shrubdepth there exists a constant q ∈ N such that no graph from C contains a quasi-path of length q.
Proof. It suffices to prove the following claim.
Claim 4. There is a function f : N × N → N such that for all h, m ∈ N, every graph that admits a connection model of height at most h using at most m labels contains no quasi-path of length greater than f (h, m).
The proof is by induction on h. Observe first that graphs admitting a connection model of height 0 are exactly graphs with one vertex, hence we may set f (0, m) = 1 for all m ∈ N.
We now move to the induction step. Assume G admits a connection model T of height h ≥ 1, where λ : V (G) → Λ is the corresponding labeling of V (G) with a set Λ consisting of m labels. Call two vertices u, v ∈ V (G) related if in T they are contained in the same subtree rooted at a child of the root of T ; obviously this is an equivalence relation. The least common ancestor of two unrelated vertices is always the root of T , hence for any two unrelated vertices u, v, whether u and v are adjacent depends only on the label of u and the label of v.
Now suppose G admits a quasi-path Q = (u_1, . . . , u_ℓ). A block in Q is a maximal contiguous subsequence of Q consisting of pairwise related vertices. Thus Q is partitioned into blocks, say B_1, . . . , B_p, appearing in this order on Q. Observe that every block B_i either is a quasi-path itself or becomes a quasi-path after removing its first vertex. Since vertices of B_i are pairwise related, they are contained in an induced subgraph of G that admits a tree model of height h − 1 and using m labels, implying by the induction hypothesis that every block has length at most f (h − 1, m) + 1. (1) Next, for every non-last block B_i (i.e., i ≠ p), let the signature of B_i be the following triple: • the parity of the index of the last vertex of B_i , • the label of the last vertex of B_i , and • the label of its successor on Q, that is, the first vertex of B_{i+1}.
The next claim is the key step in the proof.
Claim 5. There are no seven non-last blocks with the same signature.
Proof. Supposing for the sake of contradiction that such seven non-last blocks exist, by taking the first, the fourth, and the seventh of them we find three non-last blocks B_i , B_j , B_k with the same signature such that 1 ≤ i < j < k < p and j − i > 2 and k − j > 2. Let 1 ≤ a < b < c < ℓ be the indices on Q of the last vertices of B_i , B_j , B_k , respectively. By the assumption, λ(u_a) = λ(u_b) = λ(u_c), λ(u_{a+1}) = λ(u_{b+1}) = λ(u_{c+1}), and a, b, c have the same parity. Suppose for now that a, b, c are all even; the second case will be analogous. Further, the assumptions j − i > 2 and k − j > 2 entail b > a + 2 and c > b + 2.
Observe that u a+1 and u b have to be related. Indeed, u a has the same label as u b , while it is unrelated and adjacent to u a+1 . So if u a+1 and u b were unrelated, then they would be adjacent as well, but this is a contradiction because a + 1 is odd, b is even, and a + 2 < b. Similarly u a and u c+1 are related and u b and u c+1 are related. By transitivity we find that u b and u b+1 are related, a contradiction.
The case when a, b, c are all odd is analogous: we similarly find that u a is related to u b+1 , u a is related to u c+1 , and u b is related to u c+1 , implying that u b is related to u b+1 , a contradiction. This concludes the proof.
Since there are 2m² different signatures, Claim 5 implies that the number of blocks is at most 12m² + 1. (2)
Equations (1) and (2) together imply that ℓ ≤ (f (h − 1, m) + 1)(12m² + 1). As Q was chosen arbitrarily, we may set f (h, m) = (f (h − 1, m) + 1)(12m² + 1). This concludes the proof of Claim 4 and of Lemma D.44. Now Lemma 29 immediately follows from the following (essentially reformulated) statement.
Lemma D.45. For every class C of graphs of bounded shrubdepth there exists a constant p ∈ N such that the following holds. Suppose G ∈ C and A and B are two disjoint subsets of vertices of G such that every vertex of A has a neighbor in B. Then there exist subsets B 1 , . . . , B p ⊆ B with the following property: for every vertex v ∈ A there exists i ∈ [p] such that v has exactly one neighbor in B i .
Proof. Call a vertex u ∈ B a private neighbor of a vertex v ∈ A if u is the only neighbor of v in B. Consider the following procedure which iteratively removes vertices from A and B until A becomes empty. The procedure proceeds in rounds, where each round consists of two reduction steps, performed in order: 1. B-reduction: As long as there exists a vertex u ∈ B that is not a private neighbor of any v ∈ A, remove u from B.
A-reduction:
Remove all vertices from A that have exactly one neighbor in B.
Observe that in the B-reduction step we never remove any vertex that is a private neighbor of some vertex in A, so during the procedure we maintain the invariant that every vertex of A has at least one neighbor in B. Note also that in any round, after the B-reduction step the set B remains nonempty, due to the invariant, and then every vertex of B is a private neighbor of some vertex of A. Thus, the A-reduction step will remove at least one vertex from A per each vertex of B, so the size of A decreases in each round. Consequently, the procedure stops after a finite number of rounds, say ℓ, when A becomes empty.
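The procedure is simple enough to transcribe directly. The following Python sketch (ours, for illustration only) returns the sets B_1, . . . , B_ℓ, assuming adjacency is given as a dictionary of neighbor sets and, as in the lemma, that every vertex of A has a neighbor in B:

```python
def reduction_rounds(A, B, adj):
    """Run the B-reduction / A-reduction rounds; return [B_1, ..., B_l]."""
    A, B = set(A), set(B)
    assert all(adj[v] & B for v in A), "every vertex of A needs a neighbor in B"
    rounds = []
    while A:
        # B-reduction: while some u in B is a private neighbor of no v in A
        # (i.e. no v in A has u as its unique neighbor in B), remove u from B.
        changed = True
        while changed:
            changed = False
            for u in list(B):
                if not any(adj[v] & B == {u} for v in A):
                    B.remove(u)
                    changed = True
        # A-reduction: remove all vertices of A with exactly one neighbor in B.
        A -= {v for v in A if len(adj[v] & B) == 1}
        rounds.append(set(B))
    return rounds
```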
Let B_1, . . . , B_ℓ be subsets of the original set B such that B_i denotes B after the ith round of the procedure. Further, let A_1, . . . , A_ℓ be the subsets of the original set A such that A_i comprises vertices removed from A in the ith round. Note that A_1, . . . , A_ℓ form a partition of A. The following properties follow directly from the construction: 1. Every vertex of A_i has exactly one neighbor in B_i , for each 1 ≤ i ≤ ℓ.
2. Every vertex of A_i has at least two neighbors in B_{i−1} , for each 2 ≤ i ≤ ℓ.
3. Every vertex of B_i has at least one neighbor in A_i , for all 1 ≤ i ≤ ℓ.
For Property 2 observe that otherwise such a vertex would be removed in the previous round.
Property 1 implies that the subsets B_1, . . . , B_ℓ satisfy the property requested in the lemma statement. Hence, it suffices to show that ℓ, the number of rounds performed by the procedure, is universally bounded by some constant p depending on the class C only.
Take any vertex v_ℓ ∈ A_ℓ. By Property 1 and Property 2, it has at least two neighbors in B_{ℓ−1}, out of which one, say u_ℓ, belongs to B_ℓ, and another, say u_{ℓ−1}, belongs to B_{ℓ−1} − B_ℓ. Next, by Property 3 we have that u_{ℓ−1} has a neighbor v_{ℓ−1} ∈ A_{ℓ−1}. Observe that v_{ℓ−1} cannot be adjacent to u_ℓ, because v_{ℓ−1} has exactly one neighbor in B_{ℓ−1} by Property 1 and it is already adjacent to u_{ℓ−1} ≠ u_ℓ. Again, by Property 1 and Property 2 we infer that v_{ℓ−1} has another neighbor u_{ℓ−2} ∈ B_{ℓ−2} − B_{ℓ−1}. In turn, by Property 3 again u_{ℓ−2} has a neighbor v_{ℓ−2} ∈ A_{ℓ−2}, which is non-adjacent to both u_{ℓ−1} and u_ℓ, because u_{ℓ−2} is its sole neighbor in B_{ℓ−2}. Continuing in this manner we find a sequence of vertices (u_ℓ, v_ℓ, u_{ℓ−1}, v_{ℓ−1}, . . . , u_1, v_1) with the following properties: each two consecutive vertices in the sequence are adjacent and, for each i < j, v_i is non-adjacent to u_j. This is a quasi-path of length 2ℓ. By Lemma D.44, there is a universal bound q, depending only on C , on the length of quasi-paths in G, implying that we may take p = ⌊q/2⌋.
E Proof of Lemma 32
Proof (of Lemma 32). It is enough to consider the case when I is an atomic operation. We assume that the input structure is a bundling K X of K, given by a function f : V ( K) → X. Note that elements of V ( K) can be identified in the structure as those that are in the domain of f . Let ∼ be the equivalence relation on V ( K), where x ∼ y if and only if f (x) = f (y). Note that ∼ can be added to the structure by an extension operation parameterized by the formula f (x) = f (y). We now consider cases depending on what atomic operation I is.
• If I is a reduct or restriction operation, then we set I′ = I (we may assume that a restriction does not remove elements of X by appropriate relativization, so that I′ indeed outputs a bundling).
• If I is an extension operation parameterized by a quantifier-free formula ϕ(x_1, . . . , x_k), then set I′ to be the extension operation parameterized by the formula ϕ(x_1, . . . , x_k) ∧ ⋀_{1 ≤ i < j ≤ k} (x_i ∼ x_j).
• If I is a function extension operation parameterized by a formula ϕ(x, y), then set I′ to be the function extension operation parameterized by the formula ϕ(x, y) ∧ (x ∼ y).
• If I is a copy operation, then I′ is defined as the composition of a copy operation and a function extension operation that introduces a new function f′ in place of f, defined as follows. We first define a function origin(x). Recall that when copying, we introduce a new unary predicate, say P , marking the newly created vertices, and each vertex is made adjacent to its new copy. We let origin(x) be defined by ψ_origin(x, y) := P (x) ∧ E(x, y). We now define f′(x) = f (origin(x)). The resulting bundling is given by the function f′.
• If I is a unary lift, say parameterized by a function σ, then set I′ to be the unary lift parameterized by the function σ′ that applies σ to each structure from K separately, investigates all possible ways of picking one output for each structure in K, and returns the set of bundlings of sets formed in this way.
F Quantifier elimination
In this section we provide the missing proofs of the lemmas from Section 6.
F.1 Proof of Lemma 33
Proof (of Lemma 33). We show that if C is a class of graphs of bounded expansion, G ∈ C and f : V (G) ⇀ V (G) is a partial function that is guarded by G, then f is ℓ-guidable, for some ℓ depending only on C . Then the claim of the lemma follows by Lemma 26. First, consider the special case when C is a class of treedepth at most h, for some h ∈ N. For each G ∈ C , fix a forest F of depth h with V (F ) = V (G) such that every edge in G connects comparable nodes of F . Label every vertex v of G by the depth of v in the forest F , using labels {1, . . . , h}. It is easy to see that the corresponding partition of V (G) is a guidance system of order h for f . Now consider the general case, when C is a class which has a 2-cover U of bounded treedepth. Let N = sup{|U_G| : G ∈ C }, and let h be the treedepth of the class C [U]. Let G ∈ C be a graph and let f : V (G) ⇀ V (G) be a function which is guarded by G. Then f|_U is h-guidable for each U ∈ U_G by the previous case, and hence f is (h · N )-guidable by Lemma 25.
F.2 Proof of Lemma 35: quantifier elimination on trees of bounded depth
We first give a quantifier elimination procedure for colored trees of bounded depth. In the following, we consider Σ-labeled trees, that is, unordered rooted trees t where each node is labeled with exactly one element of Σ. We write t(v) for the label of a node v in the tree t. In this section we model trees by their parent functions, that is, we consider them as structures where the universe of the structure is the node set, there is a unary relation for each label from Σ, and there is one partial function that maps each node to its parent (the roots are not in the domain). A Γ-relabeling of a Σ-labeled tree t is any Γ-labeled tree whose underlying unlabeled tree is the same as that of t. As usual, a class of trees T has bounded height if there exists h ∈ N such that each tree in T has height at most h. For convenience, we will from now on work with sets of free variables of formulas instead of the traditional tuples. That is, if ϕ is a formula with free variables X and ν : X → V (t) is a valuation of the variables from X in a tree t, then we write t, ν |= ϕ if the formula ϕ is satisfied in t when its free variables are evaluated as prescribed by ν.
Our quantifier elimination procedure is provided by the following lemma, which implies Lemma 35.
Lemma F.46. Let T be a class of Σ-labeled trees of bounded height and let ϕ be a first-order formula over the signature of Σ-labeled trees with free variables X. Then there exists a finite set of labels Γ and a quantifier-free formula ϕ′ over the signature of Γ-labeled trees with free variables X such that for each tree t ∈ T there is a Γ-relabeling t′ of t with the following property: for each valuation ν of X in t we have t, ν |= ϕ if and only if t′, ν |= ϕ′.
The result immediately lifts to classes of forests of bounded depth, which are modeled the same way as trees, i.e., using a unary parent function.
Corollary F.47. The same statement as above holds for a class F of Σ-labeled forests of bounded height and a first-order formula ψ over the signature of Σ-labeled forests.
Proof. Let F be a class of Σ-labeled forests of bounded height and let ψ be a first-order formula with free variables X. Construct a class of Σ-labeled trees T , by prepending an unlabeled root r f to each forest f in F , yielding a tree t f . We may rewrite the formula ψ to a first-order formula ϕ such that f, ν |= ψ if and only if t f , ν |= ϕ, for every f ∈ F and every valuation ν of X in f .
Apply Lemma F.46 to T , yielding a relabeling t′ of each tree t in T , using some finite set of labels Γ. This relabeling yields a relabeling f′ of each forest f ∈ F , where each non-root node v is labeled by a pair of labels: the label of v in the tree t′_f , and the label of the root of t′_f . Furthermore, we have t_f , ν |= ϕ if and only if t′_f , ν |= ϕ′, for every valuation ν. Note that all quantifier-free properties involving the prepended root r_f in the Γ-labeled tree t′_f can be decoded from the labeled forest f′: the unary predicates that hold in r_f are encoded in all the vertices of f′, and r_f is the parent of the roots of f (the elements for which the parent function is undefined). It follows that we may rewrite the formula ϕ′ to a formula ψ′ such that t′_f , ν |= ϕ′ if and only if f′, ν |= ψ′, for every valuation ν of X in f . Summarizing, f, ν |= ψ if and only if f′, ν |= ψ′, for every f ∈ F and every valuation ν of X in f .
Corollary F.47 immediately implies Lemma 35. It remains to prove Lemma F.46. Before proving it, we recall some standard automata-theoretic techniques.
We define tree automata which process unordered labeled trees. Such automata process an input tree t from the leaves to the root assigning states to each node in the tree. The state assigned to the current node v depends only on the label t(v) and the multiset of states labeling the children of v, where the multiplicities are counted only up to a certain fixed threshold. Because of that, we call these automata threshold tree automata.
We develop all the simple facts about tree automata needed for our purposes below. We refer to [28] for a general introduction. Note that what is usually considered under the notion of tree automata are automata which process ordered trees, i.e., trees where the children of each node are ordered. Tree automata collapse in expressive power to threshold tree automata in the case when they are required to be independent of the order: if A is a tree automaton with the property that for any two ordered trees t, t′ which are isomorphic as unordered trees, either both t and t′ are accepted by A or both t and t′ are rejected by A, then the language (i.e., set) of trees accepted by A is equal to the language of trees accepted by some threshold automaton. Therefore, the theory of threshold tree automata is a very simple and special case of that of tree automata. We now recall some simple facts about such automata.
Fix a set of labels Q. A Q-multiset is a multiset of elements of Q. If τ is a number and X is a Q-multiset, then by X|_τ we denote the maximal multiset X′ ⊆ X in which the multiplicity of each element is at most τ. In other words, for every element whose multiplicity in X is more than τ, we put it exactly τ times into X′; all the other elements retain their multiplicities.
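In code, with Q-multisets represented as collections.Counter objects, the capping operation X|_τ corresponds to the following one-liner (an illustration of ours; the name cap is not from the text):

```python
from collections import Counter

def cap(X: Counter, tau: int) -> Counter:
    """Truncate every multiplicity of the multiset X at the threshold tau."""
    return Counter({q: min(m, tau) for q, m in X.items()})

# Example: cap(Counter({'a': 5, 'b': 1}), 2) == Counter({'a': 2, 'b': 1})
```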
We define threshold tree automata as follows. A threshold tree automaton is a tuple (Σ, Q, τ, δ, F ), consisting of
• a finite input alphabet Σ;
• a finite state space Q;
• a threshold τ ∈ N;
• a transition relation δ, which is a finite set of rules of the form (a, X, q), where a ∈ Σ, q ∈ Q, and X is a Q-multiset in which each element occurs at most τ times; and
• an accepting condition F , which is a subset of Q.
A run of such an automaton over a Σ-labeled tree t is a Q-labeling ρ : V (t) → Q of t satisfying the following condition for every node x of t: if t(x) = a, ρ(x) = q and X is the multiset of the Q-labels of the children of x in t, then (a, X|_τ , q) ∈ δ.
The automaton accepts a Σ-labeled tree t if it has a run ρ on t such that ρ(r) ∈ F , where r is the root of t. The language of a threshold tree automaton is the set of Σ-labeled trees it accepts. A language L of Σ-labeled trees is threshold-regular if there is a threshold tree automaton whose language is L; we also say that this automaton recognizes L.
An automaton is deterministic if for all a ∈ Σ and all Q-multisets X in which each element occurs at most τ times there exists q such that (a, X, q) ∈ δ, and whenever (a, X, q), (a, X, q′) ∈ δ, then q = q′. Note that a deterministic automaton has a unique run on every input tree.
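The bottom-up evaluation of a deterministic threshold tree automaton is equally short. The sketch below is an illustration of ours: trees are represented as (label, children) pairs, transition keys use sorted multiplicity tuples, and the example automaton (threshold 1, states 0 and 1) recognizes trees containing at least one node labeled 'b':

```python
from collections import Counter

def cap(X, tau):
    """Truncate every multiplicity of the multiset X at the threshold tau."""
    return Counter({q: min(m, tau) for q, m in X.items()})

def key(X):
    """Canonical, hashable encoding of a Q-multiset."""
    return tuple(sorted(X.items()))

def run(tree, delta, tau):
    """Return the state assigned to the root of an unordered labeled tree.

    tree: (label, list_of_subtrees); delta: dict (label, multiset key) -> state.
    """
    label, children = tree
    child_states = Counter(run(c, delta, tau) for c in children)
    return delta[(label, key(cap(child_states, tau)))]

# Deterministic transition relation: state 1 means "subtree contains a 'b'".
tau = 1
delta = {}
for label in "ab":
    for m0 in range(tau + 1):
        for m1 in range(tau + 1):
            X = Counter({q: m for q, m in ((0, m0), (1, m1)) if m > 0})
            delta[(label, key(X))] = 1 if (label == "b" or X[1] > 0) else 0

tree = ("a", [("a", []), ("b", [("a", [])])])
accepting = {1}
assert run(tree, delta, tau) in accepting  # the tree is accepted
```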
The next lemma explains basic properties of threshold tree automata and follows from standard automata constructions. In the lemma we speak about monadic second-order logic (MSO), which is the extension of first-order logic by quantification over unary predicates.
Lemma F.48. The following assertions hold: (1) For every threshold automaton there is a deterministic threshold automaton with the same language.
(2) Threshold-regular languages are closed under boolean operations.
(3) If f : Σ → Γ is any function and L is a threshold-regular language of Σ-labeled trees, then the language f (L) comprising trees obtained from trees of L by replacing each label by its image under f is also threshold-regular.
(4) For every MSO sentence ϕ in the language of Σ-labeled trees there is a deterministic threshold automaton A ϕ whose language is the set of trees satisfying ϕ.
Proof. Assertion (1) follows by applying the standard powerset determinization construction. For assertion (2), it follows from (1) that every threshold-regular language is recognized by a deterministic threshold tree automaton. Then, for conjunctions we may use the standard product construction and for negation we may negate the accepting condition. For assertion (3), an automaton recognizing f (L) can be constructed from an automaton recognizing L by nondeterministically guessing labels from Σ consistently with the given labels from Γ, so that the guessed Σ-labeling is accepted by the automaton recognizing L. Now assertion (4) follows from (1), (2), and (3) in a standard way, because every MSO formula can be constructed from atomic formulas using boolean combinations and existential quantification (which can be regarded as a relabeling f that forgets the information about the quantified set).
Let X be a finite set of (first-order) variables and let Σ_X = Σ × P(X). Given a tree t and a partial valuation ν : X ⇀ V (t), let t ⊗ ν be the Σ_X-tree obtained from t by replacing, for each node u of t, the label a of u by the pair (a, Y ), where Y = ν^{−1}(u) ⊆ X.
Toward the proof of Lemma F.46, consider a first-order formula ϕ over Σ-labeled trees with free variables X. We can easily rewrite ϕ to a first-order sentence ψ over Σ X -labeled trees such that t, ν |= ϕ if and only if t ⊗ ν |= ψ for every Σ-labeled tree t and valuation ν : X → V (t). By Lemma F.48(4) there is a deterministic threshold automaton A ψ whose language is exactly the set of Σ X -labeled trees satisfying ψ.
Denote by Q the set of states and by K the threshold of A ψ , and let M = K + |X|. Denote by ∆ the set of Q-multisets in which every element occurs at most M times.
Given a Σ-labeled tree t and a partial valuation ν : X ⇀ V (t), define ρ_ν as the Q-labeling of t which is the unique run of A_ψ over t ⊗ ν. For a node u of t, let C_ν(u) be the Q-multiset defined as follows: C_ν(u) = {ρ_ν(w) : w is a child of u in t}.
Define a new set of labels Γ = Σ × ∆, and a Γ-relabeling t′ of t as follows: for each u ∈ V (t), say with label a ∈ Σ in t, the label of u in t′ is the pair (a, C_∅(u)|_M), where ∅ is the partial valuation that leaves all variables of X unassigned. Our goal now is to prove that this relabeling t′ of t satisfies the conditions expressed in Lemma F.46. To this end, given a valuation ν of X in t, let t′|_ν denote the Γ_X-labeled tree obtained from t′ ⊗ ν by restricting the node set to the set of ancestors of nodes in the image ν(X) of ν.
Lemma F.49. There is a set of Γ_X-labeled trees R such that for every Σ-labeled tree t and valuation ν of X in t, t, ν |= ϕ if and only if t′|_ν ∈ R.
Proof. Fix a tree t and a valuation ν of X in t. We say that a node u of t is nonempty if it has a descendant which is in the image of ν. For node u of t define the following Q-multisets: N ∅ (u) = {ρ ∅ (w) : w is a nonempty child of u}, N ν (u) = {ρ ν (w) : w is a nonempty child of u}.
Note that since there are at most |X| nonempty children of a given node u, there is a finite set Z, independent of t and ν, such that the functions N_ν and N_∅ take values in Z. Fix a node u of t.
Claim 6. The state ρ_ν(u) is uniquely determined by the label of u in t ⊗ ν and the Q-multisets C_∅(u)|_M, N_∅(u) and N_ν(u); i.e., there is a function f : Σ_X × ∆ × Z × Z → Q such that for every tree t, valuation ν and node u,
ρ_ν(u) = f( label of u in t ⊗ ν, C_∅(u)|_M, N_∅(u), N_ν(u) ).    (3)
Proof. Clearly N_∅(u) ⊆ C_∅(u), as multisets. Moreover, the following equality among multisets holds:
C_ν(u) = (C_∅(u) − N_∅(u)) + N_ν(u).    (4)
This is because the automaton A_ψ is deterministic and therefore ρ_ν(w) = ρ_∅(w) for all children w of u which are not nonempty. From Equation 4, the fact that N_∅(u) has at most |X| elements, and M = K + |X|, it follows that
((C_∅(u)|_M − N_∅(u)) + N_ν(u))|_K = C_ν(u)|_K.    (5)
By definition of the run of A_ψ on t ⊗ ν, the state ρ_ν(u) is determined by the label of u in t ⊗ ν and by C_ν(u)|_K. It follows from Equation 5 that ρ_ν(u) is uniquely determined by the label of u in t ⊗ ν, C_∅(u)|_M, and the Q-multisets N_∅(u) and N_ν(u), proving the claim.
From Claim 6 it follows that the state ρ_ν(r), where r is the root of t, depends only on the tree t′|_ν. Indeed, we can inductively compute the states ρ_ν(u) and ρ_∅(u), moving from the leaves of t′|_ν towards the root, as follows. Suppose u is a node of t′|_ν such that ρ_ν(v) and ρ_∅(v) have been computed for all the nonempty children v of u (in particular, this holds if u is a leaf of t′|_ν). Then, we can determine the multisets N_ν(u) and N_∅(u) using their definitions, and consequently, we can determine ρ_ν(u) by Equation 3, whereas ρ_∅(u) only depends on C_∅(u)|_K and on the label of u in t. Note that both the label of u in t and the multiset C_∅(u)|_K are encoded in the label of u in t′.
As shown above, for any tree t and valuation ν, the state of ρ_ν at the root depends only on t′|_ν. On the other hand, t, ν |= ϕ if and only if ρ_ν(r) is an accepting state. Hence, whether or not t, ν |= ϕ depends only on the tree t′|_ν. This proves the lemma.
Finally, we observe the following.
Lemma F.50. For each Γ_X-labeled tree s there exists a quantifier-free formula ψ_s over the signature of Γ-labeled trees with free variables X such that the following holds: for every Σ-labeled tree t, its Γ-relabeling t′, and every valuation ν of X in t, we have t′, ν |= ψ_s if and only if t′|_ν is isomorphic to s.
Proof. Observe that the ancestors of nodes in ν(X) may be obtained by applying the parent function to them. Thus, using a quantifier-free formula we may check whether each node of ν(X) lies at depth as prescribed by s, whether its ancestors have labels as prescribed by s, and whether the depth of the least common ancestor of every pair of nodes of ν(X) is as prescribed by s. Then t| ν is isomorphic to s if and only if all these conditions hold.
With all the tools prepared, we may prove Lemma F.46.
Proof (of Lemma F.46). Let R_h be the intersection of R with the class of trees of height at most h. Since each tree from R has at most |X| leaves by definition, R_h is finite and its size depends only on |X| and h. By Lemma F.49, it now suffices to define ϕ′ as the disjunction of the formulas ψ_s provided by Lemma F.50 over s ∈ R_h.
Attitude of Students with Disabilities to Tutorial Assistance at the University
The article presents the results of students' opinions sociological research in the framework of the inclusive education process, adaptation and social rehabilitation of students with disabilities. The article shows that the most students have a positive attitude towards co-education with special students and the participation of a tutor in organizing assistance to such students in the learning process. The results of the research revealed a lack of technical training tools used by totally blind and deaf students for the full assimilation of information, and also the need for further teaching staff to understand the problems that arise when working with such students. This research proves the need for the development of tutorial assistance and an accurate definition of tutor duties. Keywords—tutor; adaptation; student with disabilities; inclusive education; and research.
INTRODUCTION
The development of higher inclusive education (HIE) is focused on creating measures to support and accompany students with various health pathologies. The basic principles of state policy in the field of education are aimed at ensuring equal rights for persons with health problems, their successful training, and their access to affordable, high-quality higher education, taking into account the psychophysiological, intellectual, and individual characteristics of students with special educational needs [4].
In modern Russian education, teachers have begun to use innovative pedagogical practices, including tutorial assistance [2,9]. Interest in tutoring has grown because the federal state educational standards provide for individual educational programs for people with disabilities [3,11]. Currently, inclusive education of students with disabilities involves building an individual educational trajectory, with a special social position held by an adult tutor, who assists people with disabilities and helps determine their individual educational route. The tutor is necessary for the more rapid and complete adaptation of students with deep health disorders to the living and learning conditions at the university, which change the psychophysiological state of the person.
II. LITERATURE REVIEW
Since the late 1990s, tutorial practice has been gaining broad traction in various regions of Russia, drawing on the British historical model [1].
In their works, Baglieri S. and Janice H. V. focus on inclusive practices and special education, which can be transformed through the perspective of studying a student's disability, in order to promote social justice for marginalized persons with disabilities [10]. The transformation of tutoring into various forms, including university, local, volunteer, and other forms and conditions of interaction, was shown in the works of the foreign authors Burnish and K. Topping [12,13].
Drawing on practical experience, a team of authors demonstrated the need for teachers to use adaptive structures when conducting psychological and behavioral interventions for students with disabilities in order to achieve the best learning outcomes [14].
Guanglun Michael Mu, Yang Hu, and Yan Wang believe that teachers can significantly affect the growth of confidence of students with disabilities in the learning process [16].
S. M. Mines argues that the problems of accompanying people with disabilities are largely due to psychosocial factors related to isolation and stigmatization, which create common problems in the development of people with developmental disabilities [15].
An interesting position is expressed in the works of T.M. Kovaleva, who focuses on the development of tutoring with the goal of individualization and the formation of sustainable motivation for learning, adaptation, and personal orientation in the educational environment. To achieve this, the tutor needs to master psychological and pedagogical technologies that help students develop the ability to solve their problems independently and to adapt and socialize in the process of studying in higher education, and to support students' motivation in building an individual learning path [6].
Organized tutoring is necessary to design an individual image, a professional trajectory and to build the most appropriate mechanisms for their achievement. Effective work requires the involvement of volunteers working in the field of inclusion [5].
It should be noted that at present there is no tutor school in the country. The available regulatory documents required for the development of tutoring technologies, in accordance with the existing nosology of diseases among students, are insufficient. As a result, without thorough preparation in the poly-format specifics of tutoring, not every teacher will be able to work confidently in this direction. Thus, tutor support can be considered an innovative, universal educational technology that is effective for achieving individualization of learning, determining one's path in education, comprehending one's educational request, and taking responsibility for one's future [7,8].
III. RESEARCH METHODOLOGY
The main objectives of the research are: studying the problems faced by students with disabilities during their studies in higher education, identifying the attitude of students with disabilities to tutor support, and determining their desire for self-development and their self-assessment of personal qualities. To solve these problems, sociological research methods were used (questionnaires, analysis of empirical data, and statistical and mathematical methods).
The questionnaire survey used the main array method; the total population of students with disabilities comprised 68 respondents. Respondents were asked to complete a questionnaire of 21 questions. Statistical processing of the obtained materials was carried out on a PC; the confidence interval did not exceed 5%.
The research aims to develop guidelines for tutorial assistance to people with disabilities. The results will make it possible to make adjustments that improve the quality of education of students with health problems and to expand and deepen scientific and practical ideas about educational technologies and the features of tutorial assistance.
IV. RESULTS
Based on the data obtained, it was found that 36% of respondents need tutorial assistance, 45% do not need it, and 19% found it difficult to answer. The category of students with deep health problems was included in the group who need tutorial assistance. It should be noted that students' lack of awareness of the possibility of tutorial assistance at the university made it difficult for them to choose an answer.
According to 46% of students, a tutor is a mentor working individually with one or more students, 27% of respondents define a tutor as a coordinator between a student and the educational system and 27% -as an assistant, correcting the actions of students with disabilities in the learning process.
The results obtained by the respondents indicate a correct understanding of the tutor's activities to ensure the implementation of the ideas of a personality-oriented approach and the integration of learning.
As for determining the tutor's functions, 22% of respondents believe that the main function is to build an individual educational path, 19% -to teach students to plan their own activities, 36% -to reveal the full range of opportunities for self-realization, and 23% -found it difficult to answer this question.
Respondents consider the main qualities of a tutor to be compassion, understanding of the problems of students with disabilities, sociability, responsibility, initiative, and creativity of thinking. It should also be noted that most respondents consider organizational abilities, the ability to find contact with people, and a positive attitude to be essential qualities of a tutor.
It is worth noting that 27% of the respondents have difficulties in the educational process due to the lack of the necessary technical equipment.
When asked about the availability and possibility of tutoring at the university, the following results were obtained: 49% of respondents answered negatively, 36% answered positively, and 15% refrained from answering. 48% of respondents consider that teachers help to solve the problems that arise in the process of learning at the university, while 40% say that classmates help them. The results are presented in Table I. Opinions on the question of whether tutoring should be considered a separate profession were divided almost equally. At the same time, according to the results of the survey, it was found that the majority of respondents (67%) believe that special knowledge is needed to fulfill the role of tutor.
The majority of students answered affirmatively when asked whether they consider tutorial assistance an important factor in the quality of higher education. The results are presented in Table II. It should be noted that the survey was aimed at revealing the knowledge of students with disabilities about the job functions of the tutor and studying their opinions on the nature of the tutor's activities.
In addition, the survey showed how students with disabilities understand the concepts of "tutoring", "tutoring technology" and "tutor"; revealed the need for students in tutorial assistance; reflected the expectations of students in interaction with the tutor and the scope of cooperation with him.
Based on the foregoing, we can conclude that, according to students with health problems, tutorial assistance is not sufficiently developed at Transbaikal State University, which affects the process of adaptation and the success of studies at the university.
For comprehensive tutoring support it is necessary to introduce modern educational and rehabilitation technologies; to provide psychological, pedagogical, and social support for students with disabilities; and to ensure the availability of technical training tools in sufficient quantities.
V. CONCLUSION
The research indicates the existence of a number of tasks for the teaching staff to develop tutoring at the university. This is the development of support and management systems for the development of an inclusive student; the formation of key competencies and the creation of a space for professional self-determination of students with special educational needs.
Tutoring as an educational technology includes many aspects of inclusive work at the university and beyond. A systematic study of practice-oriented educational technologies will give impetus to the development of systemic thinking, reflection, critical thinking, empathy, and the general social competence of the tutor. The results of the research allow us to conclude that effective tutor-teacher, teacher-student, and tutor-student interaction improves the quality of educational results of students with disabilities, contributes to the implementation of teamwork technologies in the educational process, and makes it possible to create an inclusive educational environment at Transbaikal State University.
Electromechanical Detection in Scanning Probe Microscopy: Tip Models and Materials Contrast
The rapid development of nanoscience and nanotechnology in the last two decades was stimulated by the emergence of scanning probe microscopy (SPM) techniques capable of accessing local material properties, including transport, mechanical, and electromechanical behavior on the nanoscale. Here, we analyze the general principles of electromechanical probing by piezoresponse force microscopy (PFM), a scanning probe technique applicable to a broad range of piezoelectric and ferroelectric materials. The physics of image formation in PFM is compared to Scanning Tunneling Microscopy and Atomic Force Microscopy in terms of the tensorial nature of excitation and the detection signals and signal dependence on the tip-surface contact area. It is shown that its insensitivity to contact area, capability for vector detection, and strong orientational dependence render this technique a distinct class of SPM. The relationship between vertical and lateral PFM signals and material properties are derived analytically for two cases: transversally-isotropic piezoelectric materials in the limit of weak elastic anisotropy, and anisotropic piezoelectric materials in the limit of weak elastic and dielectric anisotropies. The integral representations for PFM response for fully anisotropic material are also obtained. The image formation mechanism for conventional (e.g., sphere and cone) and multipole tips corresponding to emerging shielded and strip-line type probes are analyzed. Resolution limits in PFM and possible applications for orientation imaging on the nanoscale and molecular resolution imaging are discussed.
I. Introduction
Rapid progress in nanoscience and nanotechnology in the last two decades has been stimulated by and also necessitates further development of tools capable of addressing material properties on the nanoscale. 1 Following the development of Scanning Tunneling Microscopy 2 (STM) and Atomic Force Microscopy 3 (AFM) techniques that allowed visualizing and manipulating matter on the atomic level, a number of force-and current based Scanning Probe Microscopy (SPM) techniques were developed to address properties such as conductance, elasticity, adhesion, etc. on the nanoscale. 4,5 Currently, the central paradigms of existing SPM techniques are based on detection of current induced by the bias applied to the probe, and cantilever displacement induced by a force acting on the tip. The third possibility, electromechanical detection of surface displacements due to piezoelectric and electrostrictive effects induced by bias applied to the probe tip, is realized in Piezoresponse Force Microscopy (PFM). These three detection mechanisms can be implemented on both SPM and nanoindentor based-platforms. Finally, detection of current induced by force applied to the probe is limited by the smallness of the corresponding capacitance and has not been realized in SPM. However, such measurements have been realized on nanoindentor-based platforms. 6,7 PFM was originally developed for imaging, spectroscopy, and modification of ferroelectric materials with strong electromechanical coupling coefficients pm/V). 8,9,10 The ability to measure vertical and lateral components of the electromechanical response vector, perform local polarization switching, and measure local hysteresis loops (PFM spectroscopy) has attracted broad attention to this technique and resulted in a rapidly increasing number of publications. 11,12 Currently, PFM is one of the most powerful tools for nanoscale characterization of ferroelectric materials. However, until recently, it was believed that PFM was limited to ferroelectric materials, representing a relatively minor class of inorganic materials and, with few exceptions (e.g., polyvinilidendifluoride and its copolymers) non-existent in macromolecular materials and biopolymers.
However, piezoelectric coupling between electrical and mechanical phenomena is extremely common in inorganic materials (20 out of 32 symmetry classes are piezoelectric) and ubiquitous in biological polymers due to the presence of polar bonds and optical activity.
Hence, PFM is a natural technique for high-resolution imaging of these systems. One of the limitations in PFM of such materials is their low electromechanical coupling coefficients, typically 1-2 orders of magnitude below those of ferroelectrics. However, the high vertical resolution inherent to all SPM techniques, combined with large (~1-10 Vpp) modulation amplitudes, allows local measurement of electromechanical coupling even in materials with small piezoelectric coefficients, e.g., III-V nitrides 13 and biopolymers. 14,15 In fact, the primary limitations in PFM imaging of weakly piezoelectric materials are not the smallness of the corresponding response magnitude, but the linear contribution to the PFM contrast due to capacitive tip-surface forces and the inability to use standard phase-locked-loop based circuitry for resonance enhancement of the weak electromechanical signal. 16 However, both of these limitations can be circumvented by electrically shielded probes, imaging in a liquid environment, 17 and improved control and signal acquisition routines.
Finally, we note that while piezoelectricity, similar to elasticity and the dielectric constant, is a bulk property defined only for an atomically large (many unit cells) volume of material, the electromechanical coupling per se exists down to a single polar bond level. 18 Hence theoretically, electromechanical properties can be probed on molecular and atomic levels. As a consequence of piezoelectricity in polar bonds, all polar materials possess piezoelectric properties, unless forbidden by lattice symmetry. Symmetry breaking at surfaces and interfaces should give rise to piezoelectric coupling even in non-polar materials, and a number of novel electromechanical phenomena, including surface piezoelectricity and flexoelectricity, have been predicted. 19 To summarize, electromechanical coupling is extremely common on the nanoscale, it is manifest on the single molecule level, and novel forms are enabled at surfaces and in nanoscale systems unconstrained by bulk symmetry. PFM is a natural tool to address these phenomena, in turn necessitating the understanding of the relationship between signal formation mechanisms, material properties, and tip parameters. The relationship between the surface and tip displacement amplitudes, 20 cantilever dynamics in PFM, 21 and mechanisms for electrostatic force contribution 22,23 have been analyzed in detail. However, due to its inherent complexity, voltage-dependent tip-surface contact mechanics, the key element that relates applied modulation and measured response, is available only for a limited class of transversally-isotropic piezoelectric materials and simple tip geometries.
Here, we analyze the basic physics of PFM in terms of the tensorial nature of the response, the signal dependence on contact area, the materials properties contributing to the signal, and the signal-distance dependence. The image formation mechanisms in current- and force-based scanning probe microscopies are analyzed in terms of the tensor nature of the excitation and the detection signal's sensitivity to contact area in Section II. The relationship between the PFM signal and material properties and tip geometry is analyzed using a linearized decoupled Green's function approach in Section III. Orientation imaging by PFM and implications for high-resolution electromechanical imaging are discussed in Section IV.
II. Classification of SPMs
In this section, current- and force-based SPM techniques are discussed in terms of the tensor nature of the measured signal and the dependence of the measured signal on the contact area. This means of classification is complementary to the widely used classification of the techniques based on the apparatus, in which microscopes using probes with cantilever- or tuning-fork-based displacement detection systems are referred to as AFMs and systems employing current detection through an etched metal probe are referred to as STMs. By addressing the physical mechanisms behind the contrast, as opposed to the instrumental platform, mechanism-based classification clarifies the role of individual interactions and elucidates strategies for further technique development, such as the applicability of resonant enhancement, sensitivity to topographic cross-talk, etc.
In a general local probe experiment implemented on either an SPM or a nanoindenter platform, there are two independently controlled external variables, namely probe bias and indentation force (Fig. 1). The two independently detected parameters are cantilever deflection (or changes in the dynamic mechanical characteristics of the system) and probe current. In STM and conductive AFM, the current induced by the bias applied to the tip is detected and is used as a feedback signal for tracing topography (STM) or for local conductivity measurements (cAFM). In AFM and related techniques, including non-contact and intermittent-contact AFM, atomic force acoustic microscopy, etc., the cantilever displacement induced by a force applied to the probe is detected. The force can also be of an electrostatic or magnetic nature, providing the basis of electrostatic and magnetic force microscopies. The response in this case scales reciprocally with the cantilever spring constant. In PFM, the mechanical displacement induced by an electric bias applied to the tip is detected, and the response is independent of the cantilever spring constant. The reverse mechanism, detection of the current induced by a force, is limited by the smallness of the corresponding capacitance in an SPM experiment, and can be implemented on nanoindenter platforms where the contact areas are significantly larger (contact mode)6,7 or by using tips with an especially large radius of curvature (non-contact mode).
From these considerations, the signal formation mechanism in PFM is distinctly different from that in AFM, STM, or conductive AFM. To establish the capacity of an SPM technique for quantitative measurements, we consider the signal formation mechanism in terms of the tensorial nature of the input and output signals and the dependence of the signal on contact area. Here, we assume that the measurements are performed in the contact regime, in which the mechanical contact between the tip and the surface is well defined. The necessary conditions for quantitative measurements by SPM are that either (a) the contact area, tip geometry, and tip properties are known or easy to calibrate, or (b) the signal is insensitive to the contact area and tip properties.
Fig. 1. Schematic classification of local probe experiments by the applied stimulus (bias or force) and the detected signal (current or displacement): STM, AFM, and PFM.
The first approach is adopted in nanoindentation, where calibration of the indenter shape is the crucial step of a quantitative experiment.24 With few exceptions, tip shape calibration has not yet become routine in AFM-based measurements.25,26 In addition, the tip state often changes in STM and AFM,27 requiring the development of rapid characterization methods. Therefore, of interest are SPM techniques in which the image formation mechanism is such that the signal does not depend on the contact area, either due to the fundamental physics of tip-surface interactions or because the contact area is confined to a single atom or molecule. The dependence of the signal on tip-surface separation is a key factor in determining the spatial resolution of the technique. Finally, the tensorial nature of the signal determines the number of independent data channels that can be accessed in the ideal experiment.
In current-based techniques such as STM, the input signal, i.e., the bias applied to the probe, is a scalar quantity. The electrical current is a vector with three independent components but, because of the point-contact geometry inherent to SPM, the detected signal is the current magnitude, i.e., a scalar quantity. Hence, the STM and conductive AFM signal relates two scalar quantities and is thus itself a scalar. The relationship between the excitation and measured signals is I = Σ V, where Σ is the local conductance determined by material properties, contact area, tip-surface separation, and tip geometry. Note that here and throughout we consider the case of a semi-infinite uniform 3D material, for which the response to a point excitation is determined by local properties only and is independent of the boundary conditions. In force-based techniques, such as AFM, both the excitation signal (i.e., force) and the response signal (i.e., displacement) are vectors. Hence, the AFM signal is a rank-two tensor.
In the coordinate system aligned with the cantilever, where the 1-axis is oriented in the surface plane along the cantilever, the 2-axis is in-plane and perpendicular to the cantilever, and the 3-axis is the surface normal, the signal formation mechanism can be represented as a linear mixing of the surface displacement components, where α, β, and χ are proportionality coefficients dependent on cantilever geometry and calibration. Hence, information on the materials response is partially lost in a cantilever-based experiment. This coupling between longitudinal and normal displacements is a well-recognized problem in AFM, hindering quantitative indentation measurements with standard cantilever sensors.30,31 A number of attempts to develop 3D force sensors avoiding this limitation have been reported.32,33 In the continuum mechanics limit, the contact stiffnesses are proportional to the contact radius, a_ij ∝ a. This behavior holds down to length scales of a few atoms. When the contact is a single molecule, as in protein unfolding spectroscopy, the effective contact area is constant and a_ij ∝ a^0. Hence, quantitative force measurements are generally limited to cases in which the contact geometry is well characterized, as in nanoindentation techniques, or is weakly dependent on the probe, as in atomic-resolution imaging or molecular unfolding.
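The exact form of the mixing depends on the detection scheme; as a hedged illustration (a reconstruction consistent with the description above, not necessarily the paper's original expressions), the measured vertical (deflection) and lateral (torsion) cantilever signals can be written as

$$
h_{\mathrm{vert}} \simeq \chi\left(u_3 + \alpha\, u_1\right), \qquad h_{\mathrm{lat}} \simeq \beta\, u_2,
$$

where $u_1$, $u_2$, and $u_3$ are the surface displacement components in the cantilever coordinate system defined above; the admixture of $u_1$ into the vertical channel is the longitudinal-normal coupling discussed in the text.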
PFM employs electromechanical detection. The excitation signal is the bias, whereas the electromechanical response of the surface is a vector; hence, the PFM response is a vector. The response components d_i describe the electromechanical coupling of the material in the point-contact geometry. In the uniform-field case, the components reduce to elements of the piezoelectric constant tensor of the material in the laboratory coordinate system, (d_1, d_2, d_3) = (d_35, d_34, d_33), where the d_ij are longitudinal and shear elements, as discussed in detail elsewhere.20 Similarly to AFM, the torsional and flexural components of the cantilever oscillations, rather than the surface displacement components themselves, are measured in PFM experiments, resulting in mixing between the longitudinal, d_1, and normal, d_3, components of the signal. However, due to differences in the signal transduction mechanism between normal and shear surface vibrations and tip motion, the vertical signal can be measured using high-frequency excitation.
At the same time, both in-plane components of the response vector can be detected by imaging before and after a 90° in-plane rotation of the sample. This approach, while tedious and limited to samples with clear topographic markings needed to locate the same region after rotation, allows the full response vector to be obtained.34,35 Further progress can be achieved with 3D force probes.32,33 To summarize, the image formation mechanism in PFM is distinctly different from conventional force- and current-based SPM techniques. In the classical limit, the signal is independent of the contact area, thus providing a basis for quantitative measurements.
III. Materials Contrast in PFM
In general, a calculation of the electromechanical response induced by a biased tip requires the solution of a coupled electromechanical indentation problem, currently available only for the uniform transversally isotropic case.36,37 This solution is further limited to the strong indentation case, in which the fields generated outside the contact area are neglected. While this approximation is valid for large contact areas, for small contacts the electrostatic field produced by the part of the tip not in contact with the surface can provide a significant contribution to the electromechanical response. This behavior is similar to the transition from Hertzian contact mechanics, valid for macroscopic contacts, to Dugdale-Maugis mechanics for nanoscale contacts.38,39 An alternative approach for calculating the electromechanical response is based on the decoupling approximation. In this case, the electric field in the material is calculated using a rigid electrostatic model (no piezoelectric coupling), the strain or stress field is calculated using the constitutive relations for the piezoelectric material, and the displacement field is evaluated using the appropriate Green's function for an isotropic or anisotropic solid. This approach is rigorous for materials with small piezoelectric coefficients, and a simple estimate of its applicability is based on the square of the dimensionless electromechanical coupling coefficient. Below, the response is developed for transversally isotropic materials, corresponding to the case of c+ and c- domains in tetragonal ferroelectrics, in the limit of weak elastic anisotropy, and for fully anisotropic materials with weak elastic and dielectric anisotropies.
III.1. Electric field distributions
The initial step in calculating the electromechanical response in the decoupled approximation is the determination of the electric field distribution. While for isotropic45 and transversally isotropic46 materials the solution can be obtained using a simple image-charge method (Fig. 2), the field is significantly more complex in materials with lower symmetry.
Here we analyze the case of full dielectric anisotropy. The potential distribution in an anisotropic half-space with dielectric permittivity ε_ij, induced by a point charge Q located at a distance d above the surface, is obtained from the solution of the Laplace equation with boundary conditions requiring continuity of the potential and of the normal component of the electric displacement at the surface, the potential far from the surface reducing to the free-space potential of the charge. In the case of general dielectric anisotropy, the electrostatic potential V(r) is found in the Fourier representation (Appendix B). The square root in Eq. (6b) is real for any real (k_x, k_y), since the dielectric constant tensor ε_ij is positive definite.
For a transversally isotropic dielectric material (ε_11 = ε_22 ≠ ε_33), the potential inside the material reduces to the image-charge form

$$
V(\rho,z)=\frac{Q}{2\pi\varepsilon_0(\kappa+1)\sqrt{\rho^{2}+\left(d+z/\gamma\right)^{2}}}, \qquad (7)
$$

where ρ = √(x² + y²) and z are the radial and vertical coordinates (z measured into the material), κ = √(ε_11 ε_33) is the effective dielectric constant, and γ = √(ε_33/ε_11) is the dielectric anisotropy factor.
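As an illustration only (not part of the original text), a minimal numerical sketch that evaluates the image-charge potential of Eq. (7); the material constants and the effective tip charge below are hypothetical placeholder values, not parameters taken from the paper.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def potential_transversally_isotropic(rho, z, Q, d, eps11, eps33):
    """Potential inside a transversally isotropic half-space (z >= 0 measured
    into the material) due to a point charge Q at height d above the surface,
    following the image-charge form of Eq. (7)."""
    kappa = np.sqrt(eps11 * eps33)   # effective dielectric constant
    gamma = np.sqrt(eps33 / eps11)   # dielectric anisotropy factor
    return Q / (2.0 * np.pi * EPS0 * (kappa + 1.0)
                * np.sqrt(rho**2 + (d + z / gamma)**2))

# Illustrative numbers only: hypothetical effective tip charge, 10 nm separation.
Q = 1.6e-16  # C
print(potential_transversally_isotropic(rho=0.0, z=0.0, Q=Q, d=10e-9,
                                         eps11=84.0, eps33=30.0))
```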
III.3. Electromechanical response to a point charge
The decoupled Green's function approach developed by Felten et al.42 allows the elastic, dielectric, and piezoelectric properties of a material to be varied independently (Appendix A).44 In particular, we note that the dielectric and especially the elastic properties, described by positive-definite second- and fourth-rank tensors (invariant with respect to 180° rotation), are necessarily more isotropic than the piezoelectric properties, described by a third-rank tensor (antisymmetric with respect to 180° rotation).
In the framework of this model, the displacement vector is obtained by integrating the tip-induced electric field against the derivatives of the elastic Green's function weighted by the piezoelectric coefficients,

$$
u_i(\mathbf{x})=\int_V E_k(\boldsymbol{\xi})\, e_{kjl}\,\frac{\partial G_{ij}(\mathbf{x},\boldsymbol{\xi})}{\partial \xi_l}\, d^3\xi, \qquad (8)
$$

where ξ is the integration variable running over the sample volume and E_k(ξ) is the electric field produced by the probe. For most ferroelectric perovskites, the symmetry of the elastic properties can be approximated as cubic (the anisotropy of the elastic properties is much smaller than that of the dielectric and piezoelectric properties), and therefore the approximation of elastic isotropy is used. The Green's function for an isotropic, semi-infinite half-space is the classical Boussinesq-Cerruti solution,47-50 expressed through Y, Young's modulus, and ν, the Poisson ratio.
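For orientation, the component of this solution most relevant to the vertical PFM signal, namely the surface normal displacement produced by a normal point force, is (a standard result quoted here as a reference point rather than a reconstruction of the paper's full Eq. (9))

$$
G_{33}(x,y;0)=\frac{1-\nu^{2}}{\pi Y}\,\frac{1}{\sqrt{x^{2}+y^{2}}}.
$$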
Corresponding expressions for transversally isotropic materials are available elsewhere. 51 Finally, for lower material symmetries, closed-form representations for elastic Green's functions are generally unavailable, but approximate solutions can be derived. 48
III.3.1. Transversally isotropic dielectric material
For the special case of a transversally isotropic material, the PFM response can be calculated in analytical form assuming weak elastic anisotropy. After lengthy manipulations, the surface displacement field is obtained in terms of functions of the material constants and the charge-surface separation [Eqs. (10)-(12)]. The vertical PFM signal is determined by the surface displacement at the position of the tip. The polar components of the in-plane displacement vector vanish at the tip position, and hence the lateral PFM signals are zero, in agreement with the rotational symmetry of the system.
In most materials, the Poisson ratio falls within a relatively narrow range.
The point-charge response in Eqs. (10) and (12) can be extended to realistic tip geometries using an appropriate image-charge model, e.g., an image-charge series for spherical tips or a line-charge model for conical tips.52 In particular, from the similarity of the resulting expressions to Eq. (7), the vertical response depends only on the total potential induced on the surface below the tip. This is no longer the case for the in-plane components; in this case, the surface displacement fields are more complex and are expressed through Q_i and d_i, the effective charge values and their z-coordinates, respectively. Eqs. (16) and (17) allow the in-plane response caused by tip asymmetry to be estimated.
The fact that the vertical surface response depends only on the potential induced on the surface, provided the tip charges are located on the same line along the surface normal, also applies to the PFM signals of materials with lower symmetries, as will be analyzed below.
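A compact way to state this response theorem (a hedged sketch consistent with Eq. (7), not a verbatim reproduction of Eqs. (14)-(16)): for a set of tip charges $Q_i$ located on the surface normal at heights $d_i$, the potential on the surface directly below the tip is

$$
V(0)=\frac{1}{2\pi\varepsilon_0(\kappa+1)}\sum_i\frac{Q_i}{d_i},
$$

and the vertical electromechanical response scales with this single quantity, independent of how the charge is distributed along the normal.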
III.3.2. General piezoelectric anisotropy
One of the key problems in PFM is determining the response of a fully anisotropic material, in which case both the normal and in-plane components of the surface displacement can be nonzero. Note that this case corresponds both to materials with low symmetry (e.g., triclinic and monoclinic) and to tetragonal perovskites when the orientation of the crystallographic c-axis does not coincide with the surface normal.
In the case of general piezoelectric anisotropy, all elements of the stress piezoelectric tensor e_jlk can be nonzero, necessitating the evaluation of all integrals in Eq. (8). The integrand contains the third-rank tensor of piezoelectric constants e_kjl, which is symmetric in j and l; we therefore symmetrize on the indices j, l and introduce the symmetric tensor W_ijlk. The vector of surface displacement is then expressed through the components of this tensor [Eq. (18)]. The electromechanical response in PFM is determined by the components of Eq. (18) evaluated at the origin, where Voigt notation applies to the indices j and l. Note that here the Voigt matrix notation for the piezoelectric tensor is introduced without any numerical factors, e_kα = e_kjl.53 In reduced notation, the surface displacement below the tip [Eq. (21)] is expressed through the nontrivial elements of the tensor U_iαk [Eq. (22)]. Note that for a single point charge the potential on the surface directly below the charge scales as the inverse of the charge-surface separation; hence, from Eqs. (7) and (21), the response theorems invoked below follow.
III.3.3. Elastically and dielectrically anisotropic materials
The analysis developed in Sections III.3.1 and III.3.2 has yielded analytical expressions for the vertical and lateral PFM signals in two important cases, namely a transversally isotropic piezoelectric solid in the limit of elastic isotropy and an anisotropic piezoelectric solid in the limits of elastic and dielectric isotropy. These approximations are well justified for ferroelectric perovskites with a cubic paraelectric phase far from the Curie temperature and with relatively weak piezoelectric coupling, as well as for ferroelectric ceramics and polymers poled in the direction normal to the surface. However, in many cases, e.g., materials such as BaTiO3 or Rochelle salt, the elastic and particularly the dielectric properties of the material are strongly anisotropic. In these cases, the analysis above becomes semiquantitative and can be improved by numerical evaluation of the integrals in Eq. (8) using the appropriate Green's function for the elastic solid and the electrostatic field distribution evaluated from Eqs. (6). Even in these cases, however, Eqs. (14)-(16) and (21) still provide general insight into the PFM mechanism, because the much stronger anisotropy of the piezoelectric properties (as compared to the elastic and dielectric properties) dominates the signal.
III.4. PFM response for multipole tips
Considered above were simple tip models corresponding to uniform conductive SPM tips. In this and subsequent sections, we consider more complex models, corresponding to SPM tips with shielding or tips formed by strip lines held at different biases (Fig. 4). In these cases, the tip can no longer be represented by a surface of constant potential or by image charges of the same sign; rather, more complex potential distributions are required. In addition, these models allow estimation of the contribution of higher-order multipole moments to the surface response even for conventional tips, e.g., to allow consideration of the effects of tip asymmetry. Here, we analyze the PFM signal formation mechanism for tips with complex electrostatics modeled using multipole representations of the tip field.
Fig. 4 (caption fragment): (a) a quadrupole tip can be used to create a quadrupolar field or a rotating in-plane dipole field; a shielded tip can be used to localize the field and minimize the electrostatic contribution to the PFM signal.
III.4.3. Dipole tip model (vertical)
The simplest example of a multipole AFM tip is one in which the electric field has an equipotential line with zero potential at a distance a from the sample surface. Such potentials can be approximated by two point charges of opposite signs, ±Q_0, aligned along the surface normal and separated by a distance p (Fig. 5). The charges entering Eqs. (14) and (17) are then replaced by this pair, and the potential on the surface below the tip follows directly. From the response theorems derived in Sections III.2.1 and III.3.2, the electromechanical response of the surface is proportional to the tip-induced potential, Eqs. (14)-(16) and (21); this case is thus trivial. The surface displacement fields can be easily calculated from the results in Appendix E. More sophisticated two-charge tip models were discussed by Abplanalp54 for samples of finite thickness.
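As a quick worked example of this statement (an illustration rather than the paper's numbered result), two charges $\pm Q_0$ located on the surface normal at heights $d$ and $d+p$ give a surface potential below the tip

$$
V(0)=\frac{Q_0}{2\pi\varepsilon_0(\kappa+1)}\left(\frac{1}{d}-\frac{1}{d+p}\right)\approx\frac{Q_0\,p}{2\pi\varepsilon_0(\kappa+1)\,d^{2}}\quad (d\gg p),
$$

which falls off more rapidly with separation than the single-charge result, as expected for a dipolar field.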
III.4.4. Dipole tip model (horizontal)
Nontrivial behavior can be expected if the electric field below the tip has a large in-plane component. Such fields can be created by standard pyramidal AFM tips with stripline-type electrodes, where one side is biased positively and the other negatively (Fig. 4).
Similar tips with four independent electrodes can be used to create a dipolar electric field rotating in the surface plane, for which the corresponding torsional or flexural component of the cantilever response is measured.
The electric field in this case can be modeled using an in-plane dipole with moment p. Assuming that the tip-surface contact corresponds to the center of the dipole, the potential at the contact, derived from Eq. (26a), is zero, and the vertical response therefore vanishes. The in-plane displacement component along the dipole direction is finite, whereas the second in-plane component is zero, as expected from symmetry considerations.
Note that the distance dependence of the response is steeper than for a single point charge, as expected for a dipolar field.
III.4.5. Quadrupole tip model
A promising approach for increasing resolution and minimizing the electrostatic force contribution in PFM is based on the use of shielded tips, as shown in Fig. 7.
The approximate Eq. (29) represents the potential on the surface as a function of the separation and the quadrupole moments. As with Eq. (29), the response components of anisotropic materials can be found using the displacement field components given by Eq. (22).
IV. Discussion
Based on the analysis of the PFM image formation mechanism in Section III, we discuss the implications for imaging piezoelectric materials. The orientation dependence of the PFM signal and the potential for molecular and crystallographic orientation imaging are discussed in Section IV.1. The distance and contact dependence of the PFM signal is analyzed in Section IV.2. Finally, the relative electrostatic and electromechanical contributions to the PFM signal and the resolution limits are discussed in Section IV.3.
IV.1. Orientation dependence
A unique feature of PFM is that, in the ideal case, the signal is independent of the tip-surface contact area and is determined solely by material properties. Furthermore, if contact nonideality leads to a potential drop between the tip and the surface, all of the response components are reduced proportionately. Finally, in a 3D Vector PFM experiment, all three components of the electromechanical response vector can be determined. It has been suggested that these factors allow Vector PFM to be applied to mapping crystallographic and molecular orientation on the nanoscale.20,55 Briefly, the orientation of a solid body in 3D space is given by three Euler angles (Fig. 8). In PFM, all three components of the displacement vector are measured, providing three independent equations from which the local Euler angles can be recovered. The relationship between the e_ijk tensor in the laboratory coordinate system and the e0_ijk tensor in the crystal coordinate system is e_ijk = A_il A_jm A_kn e0_lmn, where A is the rotation matrix parameterized by the Euler angles.20,56 However, the case of a uniform field rigorously corresponds to systems with a continuous top electrode, which necessarily affects the signal transduction between the surface and the tip and limits the resolution. Moreover, the fabrication of top electrodes for materials such as biopolymers or soft condensed-matter systems is not straightforward.
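To make the transformation concrete, a short numerical sketch (an illustration, not code from the paper) that rotates a piezoelectric tensor e0_lmn from crystal to laboratory coordinates using Euler angles; the z-x-z angle convention and the test tensor are assumptions chosen for demonstration.

```python
import numpy as np

def euler_zxz(phi, theta, psi):
    """Rotation matrix for z-x-z Euler angles (one common convention)."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    Rz1 = np.array([[cphi, -sphi, 0], [sphi, cphi, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cth, -sth], [0, sth, cth]])
    Rz2 = np.array([[cpsi, -spsi, 0], [spsi, cpsi, 0], [0, 0, 1]])
    return Rz1 @ Rx @ Rz2

def rotate_rank3(e0, A):
    """Tensor transformation e_ijk = A_il A_jm A_kn e0_lmn."""
    return np.einsum('il,jm,kn,lmn->ijk', A, A, A, e0)

# Hypothetical crystal-frame tensor with a single nonzero element e0_333.
e0 = np.zeros((3, 3, 3))
e0[2, 2, 2] = 1.0
A = euler_zxz(0.0, np.deg2rad(30.0), 0.0)
e_lab = rotate_rank3(e0, A)
print(e_lab[2, 2, 2])  # effective e_333 in the laboratory frame
```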
In the PFM geometry, the electric field produced by the tip is strongly non-uniform, and the response components are given by Eq. (22). Here, we analyze to what extent the simple correspondences between the measured response components and individual piezoelectric tensor elements (e.g., d_3 → e_33, d_1 → e_35) remain applicable. Note that for cases of tetragonal symmetry (e.g., BaTiO3) the response is independent of φ, indicative of the rotational symmetry about the 3-axis.
The dependence of the piezoelectric tensor component e_33 on the orientation of the crystallographic axes with respect to the laboratory coordinate system for a LiTaO3 crystal is shown in Fig. 9a, and the corresponding vertical displacement below the tip is shown in Fig. 9b. From these data, the angular dependence of the vertical displacement u_3 is smoother, more isotropic, and much more convex than that of e_33. A common feature of the displacement surfaces shown in Figs. 9-11 is that the u_1 angular distribution is likewise smoother, much more symmetric, and more convex than that of e_35.
Similar behavior is found for the longitudinal component of the piezoelectric tensor, e_33. In the analysis above, the dielectric properties of the material were assumed to be close to isotropic, and hence the electric field distribution is insensitive to the sample orientation.
The effect of the orientational dependence of the dielectric properties can be incorporated in a straightforward manner using the analysis in Section III. The displacement in this case is given by a threefold integral of the form of Eq. (A.13) in Appendix A.
IV.2. Distance and contact dependence of the PFM signal
One of the key elements in the description of the signal formation mechanism in SPM is the dependence of the signal on tip-surface separation. In the strong indentation limit, corresponding to the classical indentation case, the response is constant when the tip is in contact (contact radius a > 0) and zero otherwise (Fig. 14). This behavior is a direct consequence of the boundary conditions employed in the classical indentation problem. However, even when the conductive part of the tip is not in direct contact with the surface (e.g., due to a dielectric gap, oxide layer, etc.), the electric field can partially penetrate into the material, resulting in a nonzero electromechanical response. We represent the tip as a charged conductive sphere of radius R_0 whose apex is located at a distance ΔR from the sample surface (Fig. 15). The sphere is described by the charge of an isolated tip, Q_0 (U is the voltage applied between the tip and the bottom electrode), and by dimensionless image charges q_m located at distances d_m from the sphere center. The components of the surface displacement field are found from Eqs. (14)-(16), using the potential produced by the full set of image charges. In contact (ΔR = 0), the response reduces to the contact values given by the response theorems 1 and 2. Hence, Eqs. (37)-(39) describe the distance dependence of the PFM signal for a spherical tip when the tip is above the surface.
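To illustrate the distance dependence discussed above, a minimal sketch of a standard sphere-plane image-charge construction (an assumption on our part; the paper's exact series and prefactors may differ) that sums the surface potential below a biased sphere as a function of the tip-surface gap ΔR.

```python
import numpy as np

EPS0 = 8.854e-12

def surface_potential_sphere(U, R0, dR, kappa, n_images=200):
    """Surface potential directly below a conducting sphere (radius R0, bias U)
    whose apex is a distance dR above a dielectric half-space with effective
    dielectric constant kappa. Standard Kelvin-image construction (assumed)."""
    beta = (kappa - 1.0) / (kappa + 1.0)
    L = R0 + dR                        # height of the sphere center
    q = 4.0 * np.pi * EPS0 * R0 * U    # charge of an isolated biased sphere
    h = L                              # height of the current image charge
    V = 0.0
    for _ in range(n_images):
        # contribution of a charge q at height h to the surface potential below the tip
        V += q / (2.0 * np.pi * EPS0 * (kappa + 1.0) * h)
        # next image inside the sphere (Kelvin inversion against the dielectric image)
        q = beta * q * R0 / (L + h)
        h = L - R0**2 / (L + h)
        if h <= 0:
            break
    return V

for dR in (0.1e-9, 1e-9, 10e-9):
    print(dR, surface_potential_sphere(U=1.0, R0=50e-9, dR=dR, kappa=50.0))
```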
IV.3. Electrostatic vs. electromechanical contributions to PFM signal
One of the key factors affecting PFM imaging is the effect of electrostatic forces.
Electrostatic interactions result in a contribution to the PFM signal that is linear in the dc tip bias, which precludes unambiguous separation of the bias-independent (for piezoelectric materials) piezoelectric signal. The electrostatic signal contains two primary contributions: electrostatic forces acting on the tip, which result in an additional component of surface deformation, and the distributed electrostatic force acting on the cantilever, which results in an additional nonlocal contribution due to flexural vibrations of the cantilever. From the equivalent mechanical model shown in Fig. 16, the PFM signal in the low-frequency limit can be written as a sum of these contributions [Eq. (40)]. The electromechanical response is determined by the effective electromechanical response of the material, d_3, and by the ratio of the ac tip potential to the ac surface potential of the ferroelectric in ambient conditions (i.e., by the potential drop in the tip-surface gap of thickness h_0). The electrostatic force contribution is governed by the capacitance z-gradients of the spherical and conical parts of the tip and of the cantilever. Here, we consider the local effects on the PFM signal relevant to high-resolution imaging.
The signal due to cantilever-surface interactions is nonlocal and hence does not contribute to nanoscale contrast. A similar argument applies to the signal produced by the conical part of the tip, which is located at a significant distance from the surface. Furthermore, the conical and cantilever contributions can be reduced by using shielded probes. Hence, we consider the effects due to the spherical part of the tip, for which Eq. (40) can be simplified accordingly. To analyze the distance dependence of the PFM signal, we use a simple Hertzian approximation for the tip-surface contact. The relationship between the indentation depth h, the tip radius of curvature R_0, and the load P is58

$$
P=\tfrac{4}{3}\,E^{*}\sqrt{R_0}\,h^{3/2},
$$

where E* is the effective Young's modulus of the tip-surface system. The contact radius a is related to the indentation depth as a = √(h R_0), and the contact stiffness is given by k = 2E*a, which together with Eq. (42) scales with the load as P^(1/3). Shown in Fig. 17a is the distance dependence of the electrostatic and electromechanical contributions to the signal. The implication of Fig. 17 is that quantitative probing of the electromechanical response requires cantilevers with large spring constants, in order to minimize nonlocal electrostatic contributions, and large indentation forces, in order to maximize the spring constant of the tip-surface junction. These requirements have long been established as guidelines for quantitative imaging.21,22,23 Note that in the Hertzian model the electromechanical signal dominates for a penetration depth of ~1 Å, corresponding to a contact radius on the order of ~2 nm for R = 50 nm, imposing a limit on the spatial resolution of the technique. The analysis becomes more complicated if adhesive effects are taken into account; in this case, the contact mechanics are described by the Johnson-Kendall-Roberts (JKR) model.38,39 The contact radius is then

$$
a^{3}=\frac{R}{K}\left(P+3\pi\sigma R+\sqrt{6\pi\sigma R P+(3\pi\sigma R)^{2}}\right), \qquad K=\tfrac{4}{3}E^{*},
$$

where σ is the work of adhesion and P is the load; the contact radius at zero force is a_0 = (6πσR²/K)^(1/3). Shown in Fig. 18(a) are force vs. indentation depth curves calculated for σ = 0 (Hertzian), 10⁻³, 10⁻², 10⁻¹, and 1 J/m². Shown in Fig. 18(b,c) are the corresponding contact stiffnesses. Note that adhesive contact results in a rapid change of the contact stiffness from zero to the value corresponding to contact, resulting in a well-defined boundary between free and bound cantilevers. Finally, shown in Fig. 18(d) are the corresponding curves for σ = 0 (solid), 10⁻³ (dotted), 10⁻² (dashed), 10⁻¹ (dash-dotted), and 1 J/m² (dash-dot-dotted).
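To make these contact-mechanics estimates concrete, a short numerical sketch (the parameter values are illustrative, not taken from the paper) comparing Hertzian and JKR contact radii and the Hertzian contact stiffness.

```python
import numpy as np

def hertz_contact(P, R, E_eff):
    """Hertzian contact: radius a, indentation h, and stiffness k = 2 E* a."""
    a = (3.0 * P * R / (4.0 * E_eff)) ** (1.0 / 3.0)
    h = a**2 / R
    return a, h, 2.0 * E_eff * a

def jkr_contact_radius(P, R, E_eff, sigma):
    """JKR contact radius; sigma is the work of adhesion (J/m^2), K = (4/3) E*."""
    K = 4.0 * E_eff / 3.0
    return ((R / K) * (P + 3.0 * np.pi * sigma * R
            + np.sqrt(6.0 * np.pi * sigma * R * P
                      + (3.0 * np.pi * sigma * R) ** 2))) ** (1.0 / 3.0)

R = 50e-9        # tip radius (m), same order as quoted in the text
E_eff = 100e9    # effective modulus (Pa), illustrative
P = 100e-9       # load (N), illustrative
a_h, h, k = hertz_contact(P, R, E_eff)
print(f"Hertz: a = {a_h*1e9:.2f} nm, h = {h*1e9:.3f} nm, k = {k:.1f} N/m")
for sigma in (0.0, 1e-3, 1e-2, 1e-1, 1.0):
    print(f"JKR (sigma={sigma}): a = {jkr_contact_radius(P, R, E_eff, sigma)*1e9:.2f} nm")
```

For σ = 0 the JKR expression reduces to the Hertzian result, which provides a quick consistency check on the formulas above.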
In both the Hertzian and JKR models, the transition to predominantly electromechanical contrast occurs for contact areas larger than a certain critical value. From Eq. (40), this condition can be generalized for materials with arbitrary properties in terms of a critical contact radius a*, corresponding to equality of the electrostatic and electromechanical contributions. For soft systems, the signal is likely to represent a convolution of electrostatic and electromechanical signals.
V. Summary
The image formation mechanism in SPM is analyzed in terms of the tensorial nature of the measured signal and its dependence on the contact radius. It is shown that the PFM signal is only weakly dependent on the contact area, distinguishing this technique from AFM and STM.
Appendix A. Decoupling approximation and Fourier representation for Green's function.
For a linear piezoelectric material, the relationship between the strain U_ij, the electric displacement D_i, the stress X_kl, and the electric field E_m is given by the linear constitutive relations. The applicability of the decoupling approximation can be established as follows. Eq. (A.2) can be rewritten such that the piezoelectric coupling enters as an effective pressure acting on the sample surface z = 0, and the displacement is then obtained by integrating this pressure against the elastic Green's function. Integration by parts of Eq. (A.6) leads to an expression in which the Green's function components appear in the Fourier representation.49 Using the Fourier transform of the electric field distribution (Appendix B), Eq. (A.7) for the displacement can be rewritten as a threefold integral [Eq. (A.13)]. This representation has a much simpler structure than the initial Eq. (A.7); moreover, Eq. (A.13) can be used to calculate the displacement field for materials with arbitrary symmetry of the elastic properties, provided that an approximate or exact Fourier representation of the corresponding elastic Green's function is known.
Appendix B. Fourier representation for the electric field components.
Here, we derive the representation for the electric field induced by a point charge Q located at a distance d above the surface of an anisotropic half-space with dielectric permittivity tensor ε_ij. The potential distributions below and above the surface are obtained from the solution of Laplace's equations in the two half-spaces. Introducing two-dimensional Fourier transforms in the surface plane, together with the corresponding representation of the Dirac delta function, the solution of Eq. (B.2) at x_3 ≥ 0 is found in the form of a decaying exponential whose decay constant is the root with positive real part; the square root in Eq. (B.5) is real for any real (k_x, k_y). Matching the boundary conditions and solving Eq. (B.6) yields the potential distribution in Fourier space and, upon inverse transformation, in real space [Eq. (B.9)]. From Eq. (B.9), the electric field E(r) = -∇V(r) can be found. The general expression (B.9) for a fully anisotropic dielectric material simplifies significantly for a transversally isotropic material (ε_ij = ε_ii δ_ij, ε_11 = ε_22 ≠ ε_33), as discussed in Section III.
Here, we consider the integrals defining the tensor W_ijlk(x). The elements of W_ijlk(x) containing the indices 1 and/or 2 can be obtained from one another by simultaneous permutation of the indices 1 ↔ 2 and of the coordinates x ↔ y.
Appendix E. Tip-surface potential.
Here, we derive approximations for the tip-induced surface potential as a function of the tip-surface separation, avoiding summation over a large number (N > 1000) of image charges. The tip is represented by a biased conductive sphere of radius R_0 whose apex is located at a distance ΔR from the sample surface. The potential at z > 0 is given by the corresponding image-charge series.
Development of a Simple In Vitro Assay To Identify and Evaluate Nucleotide Analogs against SARS-CoV-2 RNA-Dependent RNA Polymerase
Nucleotide analogs targeting viral RNA polymerase have been proved to be an effective strategy for antiviral treatment and are promising antiviral drugs to combat the current severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. In this study, we developed a robust in vitro nonradioactive primer extension assay to quantitatively evaluate the efficiency of incorporation of nucleotide analogs by SARS-CoV-2 RNA-dependent RNA polymerase (RdRp). Our results show that many nucleotide analogs can be incorporated into RNA by SARS-CoV-2 RdRp and that the incorporation of some of them leads to chain termination.
Nucleoside analogs have been approved for the treatment of viral infections such as hepatitis B (lamivudine), hepatitis C (sofosbuvir), and herpes (acyclovir). The proven success of using nucleoside-based drugs to effectively treat viral infections and the current knowledge of the pathway for developing this class of inhibitors make them promising antiviral agents to combat the current pandemic of COVID-19 (9,10).
Following intracellular phosphorylation, nucleoside analog 5′-triphosphates (the active form of nucleoside analogs) are competitively incorporated into nascent viral RNA chains and thus inhibit viral replication (5). The mechanism of action varies for different nucleotide analogs. The most common mechanism of action is chain termination caused by the incorporation of nucleotide analogs, resulting in the formation of incomplete viral RNA chains (11,12). Nucleoside analogs not causing chain termination may also be used as antiviral drugs, such as ribavirin and favipiravir, which induce mutagenesis due to their ambiguous base-pairing properties after being incorporated into viral RNA (13-17).
Like other coronaviruses, SARS-CoV-2 encodes an RNA-dependent RNA polymerase (RdRp) on the nsp12 gene (18). This protein, which catalyzes RNA synthesis, forms a replication complex with nsp7, nsp8, and other virally encoded proteins and host proteins that are responsible for mRNA synthesis, as well as the synthesis of genomic RNA for progeny viruses (19). The structure of the nsp12-nsp8-nsp7 complex was recently determined using cryo-electron microscopy (20). Additionally, unlike other viruses, SARS-CoV-2 contains an exonuclease gene (nsp14) and can perform proofreading to remove mismatched nucleotides from viral RNA (16,21,22). This is probably one of the difficulties in developing nucleotide polymerase inhibitors against SARS-CoV-2, and this proofreading activity of SARS-CoV-2 should be taken into consideration when a nucleoside-based antiviral drug against SARS-CoV-2 is being developed. Nucleotide analogs with different mechanisms of action (chain terminators, delayed terminators, and mutagenic nucleoside analogs) may respond differently to SARS-CoV-2 proofreading activity mediated by nsp14. So far, numerous nucleoside/nucleotide analogs have been described to inhibit SARS-CoV-2, including remdesivir, ribavirin, BCX4430, gemcitabine hydrochloride, β-D-N4-hydroxycytidine, and 6-azauridine (9). The detailed mechanisms of action of those nucleotide analogs against SARS-CoV-2 still need to be addressed.
In this study, we developed a robust in vitro nonradioactive primer extension assay using a fluorescently labeled RNA primer annealed to an RNA template. The incorporation efficiency and chain termination ability of a series of nucleotide analogs were determined using this assay. The method is very valuable for evaluation of the antiviral potential of nucleotide analogs, and the structure-activity information derived from these studies can be used to explore the mechanism of action of given nucleotide analogs and to design nucleotide analogs with better properties.
RESULTS
Expression and purification of SARS-CoV-2 nsp12, nsp7, and nsp8. A number of reports have shown that a functional SARS-CoV-2 nsp12-nsp8-nsp7 complex (RdRp) can be expressed and purified from insect cells (23). In this study, nsp12, nsp7, and nsp8 were successfully expressed and purified from Escherichia coli (Fig. 1), and the functional nsp12-nsp8-nsp7 complex was assembled by simply mixing nsp12, nsp7, and nsp8 together in vitro. nsp12 was expressed as a C-terminally His-tagged protein and purified according to published methods, with modifications (20,24). nsp7 and nsp8 were expressed as N-terminally His-tagged proteins and purified using nickel agarose resin. It has been reported that a preformed nsp7-nsp8 complex has a better ability to promote nsp12 polymerase activity (19); therefore, copurification of nsp7 and nsp8 was also performed by simply mixing cells expressing nsp8 and nsp7 before cell lysis, which allowed nsp7 and nsp8 to form a stable complex during purification. The term "nsp8-7" is used here to represent copurified nsp8 and nsp7. After purification, the identity of the purified proteins was confirmed by mass spectrometry analysis (Fig. S5, S6, and S7).
nsp12-nsp8-nsp7 complex assembly and activity measurement. To measure the polymerase activity of the purified enzyme, a primer extension assay employing an RNA template (a 40-mer RNA corresponding to the sequence of the 3′ end of the SARS-CoV-2 RNA genome) and a fluorescently labeled RNA primer (30-mer) was developed (Fig. 2A). The details of the RNA primer extension method were described previously (25-27). It was reported that nsp7 and nsp8 could form a complex with nsp12, which is essential for RNA synthesis catalyzed by nsp12. In this experiment, different combinations of the purified proteins (nsp12, nsp7, nsp8, and nsp8-7) were used in the primer extension assay to test their RNA synthesis ability (Fig. 2B). The result showed that nsp12, nsp7, or nsp8 alone did not have any measurable RNA synthesis ability under the reaction conditions described in this experiment (Fig. 2B, lanes 2, 3, and 6). nsp12 and nsp7 together also did not have RNA polymerase activity (Fig. 2B, lane 8), and nsp12 and nsp8 together had weak RNA polymerase activity (Fig. 2B, lane 9). Maximum polymerase activity was observed in the presence of nsp12, nsp8, and nsp7 together (Fig. 2B, lanes 10 and 11). An active-site mutation (SDD to SAA) of nsp12 (nsp12mut) resulted in no RNA synthesis in the presence of nsp8 and nsp7, which confirms that RNA synthesis is mediated by nsp12. The ability of nsp8-7 to promote nsp12 polymerase activity was further evaluated through an enzyme dilution assay and compared with that of separately added nsp8 and nsp7 (Fig. 2C). The result showed that nsp8-7 had a better ability to promote nsp12-catalyzed RNA synthesis. nsp8-7 was therefore used in the primer extension assays described in this report, and the nsp8 concentration in nsp8-7 was used to represent the concentration of nsp8-7.
FIG 1. nsp12, nsp7, nsp8, and nsp8-7 PAGE protein gel stained with Coomassie blue. nsp12mut contains an active-site mutation (active-site motif SDD changed to SAA). nsp8-7 represents copurified nsp8 and nsp7. M, protein molecular weight markers. The sizes of the protein markers (in kilodaltons) are indicated on the left.
FIG 2. Analysis of nsp12 polymerase activity using a primer extension assay. (A) RNA primer and template used in this assay. The 30-mer primer (top) contains a fluorescent label (Cy5.5) at the 5′ end, and the arrow indicates the location and direction of primer extension to form a 40-mer product. (B) Analysis of nsp12 polymerase activity in the presence of nsp7, nsp8, or nsp8-7 (copurified nsp8 and nsp7). The enzymes used in each reaction are indicated at the bottom of the gel. The concentrations of nsp12, nsp12mut, nsp7, nsp8, and nsp8-7 (copurified nsp8 and nsp7) are 50 nM, 50 nM, 10 µM, 2 µM, and 2 µM (2 µM nsp8, 10 µM nsp7), respectively. The different enzymes and P/T (5 nM) were incubated in reaction buffer, and the reactions were initiated by the addition of 100 µM rNTPs, continued at 37°C for 1 h, and then stopped by the addition of stopping solution. The products were separated on denaturing polyacrylamide gels. (C) Primer extension activity using nsp12 and nsp8, nsp7, or nsp8-7 (copurified). nsp12 (50 nM) was used in all samples. The primer extension reaction was performed as described above. In lanes 2 to 8, copurified nsp8-7 was used, and in lanes 9 to 15, nsp8 and nsp7 were added to the reaction mixture separately. The final concentrations of nsp8 and nsp7 in the reaction mixtures are given under each lane in micromolar units.
Primer extension assay development and optimization. It has been shown that the primer/template (P/T) scaffold can have a major influence on the efficiency of the primer extension reaction catalyzed by mitochondrial DNA-dependent RNA polymerase (26). In this study, three RNA primers (primers I, II, and III) complementary to different regions of the 3′ end of the template (40-mer) were used to form three different P/T scaffolds (Fig. 3A). The efficiency of RNA synthesis by SARS-CoV-2 RdRp using the three P/T scaffolds was tested in a primer extension assay with a serial dilution of nsp12 and a constant concentration of nsp8-7 (Fig. 3B). The result showed that SARS-CoV-2 RdRp prefers a longer primer (primer III; 30-mer) for annealing with the RNA template. As a comparison, the efficiency of RNA synthesis using the three P/T scaffolds by dengue virus RdRp was similar (data not shown). Primer III (30-mer) (Fig. 3A) was used in the primer extension assays described below. To develop a robust polymerase assay, the optimal nsp8-7 and nsp12 concentrations used in the reaction should be carefully determined. An enzyme dilution assay with different concentrations of nsp8-7 and nsp12 was performed (Fig. 4). The concentration of nsp8 was used to represent the concentration of copurified nsp8-7. The result showed that the primer extension products correlated with the concentration of nsp12 (Fig. 4C) and that a higher concentration of nsp8-7 had a better ability to promote nsp12 polymerase activity (Fig. 4D). In the primer extension assays described below, 50 nM nsp12 and 2 µM nsp8-7 (2 µM nsp8 and 10 µM nsp7) were used. Under these assay conditions, the RNA primer (5 nM) can be extended completely while consuming less nsp12 enzyme.
Nucleotide incorporation and chain termination. To inhibit viral RNA synthesis or disrupt viral RNA function, a nucleotide analog must be incorporated into newly synthesized viral RNA by RNA polymerase. Several different nucleotide analogs (Fig. 5) that have been used as antiviral drugs or in various applications were selected, and the utilization of those nucleotide analogs by SARS-CoV-2 RdRp and their ability to cause chain termination (one of the major mechanisms of inhibition of viral RNA polymerase) after incorporation were evaluated in the primer extension assay (Fig. 6). The results showed that all the nucleotide analogs tested can be incorporated into RNA by SARS-CoV-2 RdRp. Due to the possibility of misincorporation, the termination ability of those nucleotide analogs may need further investigation.
Efficiency of nucleotide analog incorporation by RdRp. The relative efficiency of incorporation of nucleotide analogs versus natural nucleotides (discrimination value) by viral RNA-dependent RNA polymerase has been used to evaluate the antiviral potential of nucleotide analogs targeting Zika virus and dengue virus RNA-dependent RNA polymerase (25), and it has also been used to evaluate the potential mitochondrial toxicity of nucleotide analogs in a primer extension assay with mitochondrial DNA-dependent RNA polymerase (26). Using a similar method, the discrimination values of
nucleotide analogs, measured in the SARS-CoV-2 RdRp primer extension assay developed in this study, were employed to evaluate the relative efficiency of incorporation of nucleotide analogs by SARS-CoV-2 RdRp. An initial time course experiment showed that RNA synthesis starting from a preformed RNA/RdRp complex (31-mer) was very fast and was finished within 20 s when 100 µM ribonucleoside triphosphate (rNTP) was used (Fig. S1B). As a comparison, it took up to 20 min to finish RNA synthesis when no preincubation of RNA and RdRp was performed (Fig. S1C). These results suggested that the rate-limiting step of RNA synthesis in our assay is the formation of the RNA/RdRp complex. The efficiency of nucleotide analog incorporation was measured under single-turnover conditions with a preformed RNA/RdRp complex, as described by Fung et al. (11). To obtain a quantitative measurement of K_1/2 (the nucleotide concentration at which half of the 31-mer product is extended to the 32-mer product), a short reaction time (20 s) after the addition of the different nucleotide analogs was used. Figure 7A and B show the primer/template design and the results of testing ATP and two ATP analogs (remdesivir-TP and 6-chloropurine-TP). The measured values of K_1/2 for ATP (K_1/2,ATP), remdesivir-TP (K_1/2,remdesivir-TP), and 6-chloropurine-TP (K_1/2,6-chloropurine-TP) were 0.04167 µM, 0.03305 µM, and 3.351 µM, respectively, and the calculated discrimination values D_remdesivir-TP and D_6-chloropurine-TP were 0.79 and 80, respectively (Fig. 7C). This result suggested that remdesivir-TP is incorporated into RNA by SARS-CoV-2 RdRp more efficiently than natural ATP. The efficiency of incorporation of the other ATP analogs by SARS-CoV-2 RdRp was much lower than that of ATP; to obtain quantitative K_1/2 values for them, a longer reaction time (15 min) was needed to increase the percentage of incorporation of the nucleotide analog by SARS-CoV-2 RdRp. Since the K_1/2 of natural ATP is impossible to measure directly under such conditions (it is below the limit of sensitivity of the assay), the K_1/2 value of 6-chloropurine-TP was used as a surrogate comparator. A similar strategy has been used to evaluate the efficiency of incorporation of nucleotide analogs by mitochondrial DNA-dependent RNA polymerase (26). Figure 7D and E show the testing of ATP analogs and the discrimination value calculation. In this assay, the K_1/2 values of several ATP analogs were measured and compared to the K_1/2 value of 6-chloropurine-TP, which was used as a reference to calculate the D*_ATP analog value (where D*_ATP analog = K_1/2,ATP analog / K_1/2,6-chloropurine-TP). The discrimination values of the different ATP analogs are summarized in Table 1. D_cal is a calculated discrimination value obtained using the equation D_cal,ATP analog = D*_ATP analog × D_6-chloropurine-TP. D_cal represents the discrimination of incorporation by RdRp between natural ATP and a tested ATP analog. Misincorporation of GTP base-pairing with uridine in the template was also measured, which can be used as a guideline to evaluate the possibility of nucleotide analog incorporation in the cell. If the incorporation efficiency of an ATP analog is lower than the GTP misincorporation efficiency, it probably has less chance of being incorporated in the cell, unless very high intracellular nucleotide analog concentrations can be reached.
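As a worked check of the numbers quoted above (using the definition implied in the text, D = K_1/2,analog / K_1/2,ATP), the following short calculation reproduces the reported discrimination values:

```python
k_atp = 0.04167   # K_1/2 for ATP (µM)
k_rem = 0.03305   # K_1/2 for remdesivir-TP (µM)
k_6cl = 3.351     # K_1/2 for 6-chloropurine-TP (µM)

d_rem = k_rem / k_atp   # ≈ 0.79, matching the value quoted in the text
d_6cl = k_6cl / k_atp   # ≈ 80, matching the value quoted in the text
print(round(d_rem, 2), round(d_6cl))
```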
As shown in Table 1, the D_cal values for remdesivir-TP, 6-chloropurine-TP, clofarabine-TP, ribavirin-TP, favipiravir-TP, tenofovir-DP, and GTP were 0.78 ± 0.02, 78.0 ± 3.5, >112,242, 24,999 ± 828, 7,343 ± 752, >112,242, and 8,683 ± 600, respectively. Based on comparison with D_cal,GTP (natural GTP misincorporation as ATP), remdesivir-TP and 6-chloropurine-TP can be incorporated into RNA very efficiently by RdRp, whereas ribavirin-TP and favipiravir-TP showed less ability to be incorporated into RNA by RdRp. Only a small fraction of the primer was extended in the presence of clofarabine-TP and tenofovir-DP under the primer extension conditions used in this assay, which suggests that the incorporation efficiency of these two nucleotides is very low. Using a similar strategy, several GTP analogs (Fig. S2) and UTP analogs (Fig. S3) were also tested, and the data are summarized in Table 1. Among the GTP analogs, 6-thio-GTP and 2′-C-methyl-GTP can be incorporated into RNA very efficiently; the incorporation efficiencies of 6-methylthio-GTP, oxo-GTP, remdesivir-TP, ribavirin-TP, and favipiravir-TP are lower than or close to the misincorporation of ATP base-pairing with cytidine in the template, which suggests that they may not be incorporated into RNA as GTP analogs in the cell. Among the UTP analogs, 2′-amino-UTP, 2′-azido-UTP, and ara-UTP showed higher incorporation efficiency, while the incorporation efficiency of stavudine-TP, 2′-O-methyl-UTP, sofosbuvir-TP, and ribavirin-TP was very low. Remdesivir-TP and favipiravir-TP were also tested as UTP analogs, and no incorporation was observed under the conditions used in this assay (15 min incubation).
Influence of remdesivir-TP incorporation on RNA synthesis. It was shown previously that remdesivir-TP can be incorporated into RNA and that this incorporation causes delayed chain termination (23). In this study, the influence of remdesivir-TP incorporation on subsequent nucleotide incorporation during RNA synthesis was tested in a primer extension assay (Fig. 8). To rule out the possibility of premature termination of RNA synthesis caused by RNA sequence variation, a poly(A) sequence was used downstream of the uridine (the remdesivir-TP incorporation site) in the template (Fig. 8A, C, E, and G). In this study, 4 different templates (shown at the top of each gel image in Fig. 8) were used to test the influence of single, double, triple, and quadruple incorporations of remdesivir-TP on subsequent nucleotide incorporation. After incorporation of ATP or remdesivir-TP, different concentrations of UTP were added to test RNA synthesis by RdRp. Compared with ATP, single incorporation of remdesivir-TP did not lead to chain termination (Fig. 8B). Interestingly, the incorporation efficiency of UTP after single remdesivir-TP incorporation was increased, as evidenced by a stronger 33-mer band at a low concentration of UTP in the presence of remdesivir-TP compared with that in the presence of ATP (Fig. 8B). Figure 8C and D show that double incorporation of remdesivir-TP also did not lead to chain termination. Figure 8E and F show that triple incorporation of remdesivir-TP did decrease the RNA synthesis efficiency after remdesivir-TP incorporation, as evidenced by the 34-mer band persisting at a high concentration of UTP (Fig. 8F), which suggests partial termination caused by triple incorporation of remdesivir-TP. Figure 8G and H show that quadruple incorporation of remdesivir-TP greatly decreased RNA synthesis (Fig. 8H, 35-mer band), which suggests a strong termination effect caused by quadruple incorporation of remdesivir-TP.
In a paper published by Gordon et al. (23), it was shown that incorporation of remdesivir-TP at position i caused termination of RNA synthesis at position i + 3. Our results shown in Fig. 8B suggest a partial termination at position i + 1 after remdesivir-TP incorporation, as evidenced by the persistence of the 33-mer band, and this termination effect can be overcome by higher concentrations of UTP (the next nucleotide to be incorporated). Since the template RNA sequence may have some influence on RNA synthesis, a template having a sequence downstream of the RNA synthesis initiation site similar to the template used by Gordon et al. (23) was utilized (Fig. 9A). With this template, a strong termination at position i + 3 was observed after remdesivir-TP incorporation, and this termination can be overcome by higher concentrations of the next nucleotide to be incorporated (Fig. 9B).
Table 1, footnote a: D*_ATP analog = K_1/2,ATP analog / K_1/2,6-chloropurine-TP; D_cal,ATP analog = D*_ATP analog × D_6-chloropurine-TP; D*_GTP analog = K_1/2,GTP analog / K_1/2,2′-C-methyl-GTP; D_cal,GTP analog = D*_GTP analog × D_2′-C-methyl-GTP; D*_UTP analog = K_1/2,UTP analog / K_1/2,2′-amino-UTP; D_cal,UTP analog = D*_UTP analog × D_2′-amino-UTP. Data are from two independent experiments.
DISCUSSION
In this study, SARS-CoV-2 nsp12, nsp8, and nsp7 constructs were successfully generated, expressed, and purified from E. coli. The nsp12-nsp8-nsp7 complex (RdRp) was assembled in vitro and showed robust RNA synthesis activity. Using the purified RdRp, we developed an in vitro nonradioactive primer extension assay and demonstrated that it can be used as a tool to identify nucleotide analog substrates that could be developed into antiviral drugs against SARS-CoV-2. The primer extension assay described in this report can also be used to develop a high-throughput screening assay, based on the detection of PPi released from the polymerase reaction, to identify non-nucleotide-analog polymerase inhibitors against SARS-CoV-2 (25).
It has been shown that many nucleotide analogs currently used as antiviral drugs can be incorporated into RNA by SARS-CoV-2 RNA polymerase (28,29), which suggests that they have the potential to be developed into antiviral drugs against SARS-CoV-2.
As an alternate substrate of the RNA polymerase, a nucleotide analog must compete with the natural rNTP for incorporation into viral RNA. The relative incorporation efficiency of a nucleotide analog versus the natural rNTP (discrimination value) is therefore an important criterion in evaluating the antiviral potential of nucleotide analogs. Like other studies (23), our data showed that remdesivir-TP can be incorporated into RNA as an ATP analog by SARS-CoV-2 RdRp, and the incorporation efficiency is higher than that of natural ATP (D_remdesivir-TP = 0.78 ± 0.02). We also tested remdesivir-TP against C, A, and G in the template, and the results showed that remdesivir-TP can be incorporated as a GTP analog (Fig. S2) with low efficiency and cannot be incorporated as a UTP or CTP analog (Fig. S3 and S4). The efficiency of incorporation of ribavirin-TP and favipiravir-TP as either ATP or GTP analogs measured in our study is very low (even lower than that of GTP or ATP misincorporation), which is in disagreement with the studies by Ferron et al. (16) and Shannon et al. (17). In those studies, ribavirin-TP and favipiravir-TP were readily incorporated into RNA by SARS-CoV RdRp. One possible explanation for this discrepancy is the stability of the ribavirin-TP and favipiravir-TP used in our assay; further studies are needed to verify the concentrations of ribavirin-TP and favipiravir-TP used in our assay. Sofosbuvir, an approved anti-hepatitis C virus (HCV) drug, has been proposed for treating SARS-CoV-2. Our data show that sofosbuvir-TP can be incorporated into RNA by SARS-CoV-2 RdRp, but the incorporation efficiency is very low, and it probably cannot compete with natural UTP for incorporation by SARS-CoV-2 RdRp in the cell.
Remdesivir has been used as an antiviral drug against SARS-CoV-2, and a number of studies have been performed to study the antiviral mechanism of action of remdesivir (23,30,31). It has been shown that incorporation of remdesivir-TP causes delayed chain termination, which in turn blocks viral RNA synthesis. The influence of remdesivir-TP incorporation on RNA synthesis catalyzed by SARS-CoV-2 RdRp was studied using a primer extension assay. Our data showed that single or double incorporation of remdesivir-TP did not lead to immediate chain termination. Quadruple incorporation of remdesivir-TP caused a strong termination. Similar to the data presented by Gordon et al. (23), delayed chain termination was observed due to the incorporation of remdesivir-TP in our assay, but the delayed chain termination pattern is different when templates with different sequences are used. Our data showed that the incorporation efficiency of UTP at position i ϩ 1 after remdesivir-TP incorporation was increased greatly (Fig. 9), which suggested that incorporation of remdesivir-TP may have some influence on the kinetics of subsequent nucleotide incorporation, besides causing delayed chain termination.
Since our data showed that remdesivir-TP can be incorporated into RNA as an ATP analog and a GTP analog (Fig. S2) but cannot be incorporated into RNA as a CTP analog (Fig. S4) or a UTP analog (Fig. S3), the incorporation pattern of remdesivir-TP shown in Fig. 6B, lane 11, and Fig. 6D, lane 13, may suggest that a significant U-G mismatch occurs following remdesivir-TP incorporation. Further studies are needed to understand the influence of remdesivir-TP on the kinetics and fidelity of subsequent nucleotide incorporation and RNA synthesis.
MATERIALS AND METHODS
Chemicals. ATP, UTP, CTP, and GTP were purchased as 100 mM solutions from Thermo Fisher Scientific (Massachusetts, USA). Urea, taurine, dithiothreitol (DTT), MgCl2, imidazole, and isopropyl-β-D-thiogalactopyranoside (IPTG) were purchased from Bidepharm (Shanghai, China). LB medium, NaCl, and HisPur Ni-nitrilotriacetic acid (NTA) agarose resin were purchased from Thermo Fisher Scientific (Massachusetts, USA). Fluorescently labeled RNA oligonucleotides, as well as unlabeled RNA oligonucleotides, were chemically synthesized and purified by high-performance liquid chromatography (HPLC) by GenScript (Nanjing, China). Stellar competent cells were purchased from TaKaRa.
nsp12 protein expression and purification. The SARS-CoV-2 nsp12 gene (corresponding to amino acids 4393 to 5324; UniProt code P0DTD1) was synthesized de novo by GenScript (Nanjing, China) and constructed on the pET22b vector between the NdeI and XhoI sites. nsp12 was expressed with a C-terminal 10-His tag in BL21(DE3) cells at 16°C; 2 mM MgCl2 and 50 µM ZnSO4 were used to supplement the culture during induction. After overnight cultivation, cells were harvested and lysed with a high-pressure homogenizer in buffer containing 25 mM HEPES (pH 7.5), 150 mM NaCl, 4 mM MgCl2, 50 µM ZnSO4, 10% glycerol, 2.5 mM DTT, and 20 mM imidazole. Cell debris was removed by centrifugation at 13,000 rpm. nsp12 was then purified by nickel-affinity chromatography followed by ion-exchange chromatography (using HisTrap FF and Capto HiRes Q 5/50 columns, respectively; GE Healthcare, USA). nsp12 eluates with a conductivity of 19 to 23 mS/cm were combined and injected onto a Superdex 200 Increase 10/300 GL column (GE Healthcare, USA) in a buffer containing 25 mM HEPES (pH 7.5), 250 mM NaCl, 1 mM MgCl2, 1 mM tris(2-carboxyethyl)phosphine, and 10% glycerol. Peak fractions were combined, concentrated to 10 µM, and stored at −80°C before the enzymatic assays. The nsp12 loss-of-function mutant (SDD-to-SAA mutation; amino acids 5151 to 5153; UniProt code P0DTD1) was prepared by an identical process. Protein identification by mass spectrometry was performed by Biotech-Pack Scientific (Beijing, China).
nsp7 and nsp8 protein expression and purification. The SARS-CoV-2 nsp8 gene (nucleotides 12092 to 12685; strain name, Wuhan-Hu-1; GenBank no. MN908947.3) was synthesized de novo by GenScript (Nanjing, China) and cloned into a pMal-c5X vector under tac promoter control (without a maltose-binding protein [MBP] sequence). Specifically, the entire MBP sequence of pMal-c5X (nucleotides 1527 to 2628) was replaced with the nsp8 gene sequence with the addition of an N-terminal 6-His tag. The SARS-CoV-2 nsp7 gene (nucleotides 11846 to 12091; GenBank no. MN908947.3) was synthesized de novo by GenScript (Nanjing, China) and cloned into the pMal-c5X vector in the same way as nsp8. The expression plasmids were transformed into Stellar competent cells. Protein expression was induced at 16°C overnight by the addition of 0.3 mM IPTG. Cells were harvested, and cell pellets were resuspended in cell lysis buffer (20 mM HEPES [pH 7.5], 10% glycerol, 100 mM NaCl, 0.05% Tween 20, 10 mM DTT, 1 mM MgCl2, 20 mM imidazole, 1× protease inhibitor cocktail). Cell disruption was performed at 4°C for 10 min using a high-pressure homogenizer. The cell extract was clarified by centrifugation at 12,000 rpm for 10 min at 4°C.
nsp7 and nsp8 were purified individually on HisPur Ni-NTA agarose resin and eluted with elution buffer (20 mM HEPES [pH 7.5], 50 mM NaCl, 300 mM imidazole, 10 mM DTT, 0.01% Tween 20). The eluted enzymes were adjusted to 40% glycerol and stored at −80°C. For copurification of nsp8 and nsp7, the two proteins were expressed separately, and the expression cells were mixed before lysis; purification was then performed as for the individual proteins, yielding nsp8 and nsp7 together in a single preparation. Protein identification by mass spectrometry was performed by Biotech-Pack Scientific (Beijing, China).
Primer and template annealing. To generate RNA primer-template complexes, 1 μM fluorescently (Cy5.5) labeled RNA primer and 5 μM unlabeled RNA template were mixed in 50 mM NaCl in deionized water, incubated at 98°C for 10 min, and then slowly cooled to room temperature. The annealed primer-template (P/T) complexes were stored at −20°C before use in the primer extension assay.
Primer extension assay. The RNA synthesis activity of the purified polymerase was determined in a primer extension reaction using P/T complexes prepared by annealing Cy5.5-labeled RNA primer and unlabeled RNA template (described above). A typical primer extension reaction was performed in a 10-μl reaction mixture containing reaction buffer (20 mM HEPES [pH 7.5], 5 mM MgCl2, 10 mM DTT, 0.01% Tween 20), 5 nM P/T, 50 nM nsp12, and 2 μM nsp8-7 (a copurified protein mixture containing 2 μM nsp8 and 10 μM nsp7) unless otherwise specified. The reaction was initiated by the addition of rNTPs at a final concentration of 100 μM, unless otherwise specified, followed by incubation for 1 h at 37°C. Reactions were quenched by the addition of 20 μl stopping solution (8 M urea, 90 mM Tris base, 29 mM taurine, 10 mM EDTA, 0.02% SDS, 0.1% bromophenol blue). The quenched samples were denatured at 95°C for 10 min, and the primer extension products were separated by 10% denaturing polyacrylamide gel electrophoresis (urea-PAGE) in 1× TTE buffer (90 mM Tris base, 29 mM taurine, 0.5 mM EDTA). After electrophoresis, the gels were scanned using an Odyssey infrared imaging system (LI-COR Biosciences, Lincoln, NE). The images were analyzed, and the relevant RNA bands were quantified using Image Studio Lite (version 5.2; LI-COR Biosciences, Lincoln, NE). Data were analyzed using GraphPad Prism 7.
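To make the band quantification step concrete, here is a minimal sketch of how percent primer extension could be computed from band intensities; it is not the analysis pipeline used in this study (quantification was done in Image Studio Lite and GraphPad Prism), and the intensity values are hypothetical:

# Hypothetical example: converting gel band intensities into percent primer extension.
# The intensity values are invented; in practice they would come from quantification
# of the Cy5.5-labeled extended-product and unextended-primer bands.

def percent_extension(extended_intensity: float, unextended_intensity: float) -> float:
    """Fraction of primer converted to extended product, expressed as a percentage."""
    total = extended_intensity + unextended_intensity
    if total == 0:
        raise ValueError("No signal detected in either band.")
    return 100.0 * extended_intensity / total

# Example lane: strong extended-product band, weak remaining primer band.
print(percent_extension(extended_intensity=8200.0, unextended_intensity=1800.0))  # 82.0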
Analysis of chain termination ability of nucleotide analogs. Primer extension reactions were performed as described above. Incorporation and chain termination of tested nucleotides were measured in two separate assays. For the nucleotide analog incorporation assay, P/T complexes (5 nM) and the nsp12-nsp8-nsp7 complex (RdRp) (50 nM nsp12, 2 μM nsp8-7) were incubated with a natural rNTP (the first nucleotide to be incorporated) and the tested nucleotide analog (the second nucleotide to be incorporated), and the reactions were continued at 37°C for 30 min before the addition of stopping solution. For the chain termination assay, nucleotide analogs were incorporated as described above; two natural rNTPs (the third and fourth nucleotides to be incorporated) were then added to the reaction mixture, and the reactions were continued at 37°C for another 30 min before the addition of stopping solution. The quenched samples were heated at 95°C for 10 min and analyzed by denaturing urea-PAGE as described above. The concentrations and identities of the nucleotides added in each reaction are given in the legend to Fig. 6.
Measurement of nucleotide analog incorporation efficiency. Different P/T complexes were designed to test individual analogs using the method described previously (25, 26). To perform the reaction, 5 nM P/T, 50 nM nsp12, and 2 μM nsp8-7 were incubated in reaction buffer in the presence of 0.1 μM of the first natural ribonucleotide for 30 min at 37°C, and then different concentrations of the nucleotide analog to be tested were added to the reaction mixtures. The reactions were continued at 22°C for the times indicated in each figure legend and subsequently quenched and analyzed by urea-PAGE as described above. After electrophoresis, the gels were scanned using the Odyssey infrared imaging system, and the intensities of the different RNA bands were quantified using Image Studio Lite. The incorporation efficiencies of the different nucleotide analogs were evaluated by measuring the K1/2 values (the analog triphosphate concentration resulting in 50% product extension) and the corresponding discrimination values (Danalog, defined as K1/2,analog/K1/2,natural nucleotide when both are measured under the same assay conditions), as previously described (11, 25, 26).
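As an illustration of how the K1/2 and discrimination values defined above could be estimated from a titration, the following sketch fits a simple saturation model to hypothetical percent-extension data; the concentrations, responses, and resulting numbers are invented and do not reproduce any measurement in this study:

# Hypothetical sketch: estimating K_1/2 (the analog concentration giving 50% product
# extension) from a titration, then computing D_analog = K_1/2,analog / K_1/2,natural.
# All values below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def saturation(conc, k_half, top):
    """Simple hyperbolic model: extension = top * conc / (k_half + conc)."""
    return top * conc / (k_half + conc)

def fit_k_half(conc_uM, percent_ext):
    (k_half, _top), _cov = curve_fit(saturation, conc_uM, percent_ext,
                                     p0=[np.median(conc_uM), 100.0], maxfev=10000)
    return k_half

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])            # μM, hypothetical
natural = np.array([18.0, 42.0, 68.0, 86.0, 94.0, 97.0])     # % extension, hypothetical
analog = np.array([4.0, 11.0, 28.0, 55.0, 78.0, 90.0])       # % extension, hypothetical

k_nat = fit_k_half(conc, natural)
k_ana = fit_k_half(conc, analog)
print(f"K_1/2 natural ≈ {k_nat:.2f} μM, K_1/2 analog ≈ {k_ana:.2f} μM, "
      f"D_analog ≈ {k_ana / k_nat:.1f}")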
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.6 MB.
|
2020-07-21T13:13:53.934Z
|
2020-07-17T00:00:00.000
|
{
"year": 2020,
"sha1": "3cb6756883b3a3373a3f43f32674cb533e051f2c",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7927875",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "46361636bb49fa36f2246008c87ee0d5997050b9",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Chemistry"
]
}
|
203271068
|
pes2o/s2orc
|
v3-fos-license
|
Research and Development of Network Security and Risk Management
With the advent of the 21st century, China has entered a period of social transformation and deepening reform. Social contradictions are complex, hot-button events are frequent, and public opinion is active. Under the conditions of the Internet, and especially against the background of flourishing new media, the pattern of public opinion has changed. With the popularization of the Internet and the growing number of Internet users, the Internet has become a large public space. After a hot event occurs, it often triggers heated discussion, and overheated public opinion or misdirected grievances can themselves develop into new incidents; improper handling often results in "secondary disasters". In recent years, the concept of public opinion (yuqing) has been used frequently, and public opinion analysis has advanced in both theoretical research and practical application. The related field of network risk management has also received increasing attention. All of these signs show that public opinion research is heating up. Based on this situation, this paper reviews and comments on the state of research at home and abroad, with a focus on a rational assessment of the current state of public opinion research.
…more emotional theories. Many people do not know the truth; with preconceptions and exaggerated facts they are easily swayed by posters, and even used and manipulated by people with ulterior motives, so a one-sided climate of opinion easily forms. Before the truth is published, sensational information often better satisfies people's curiosity and therefore spreads more readily. In an untruthful online information environment, false and one-sided information frequently causes cognitive bias and has a serious impact [1], and correcting it afterwards takes considerable effort. The impact of sudden online public-opinion incidents on social stability and harmony has therefore become apparent.
Network Public Opinion and Related Concepts
Public opinion (yuqing) can be said to be a distinctly Chinese concept, and it has gone through several stages of expansion and refinement. Early research generally defined public opinion as the public's social and political stance: the social and political positions generated and held by the public, as subject, toward social managers, as object, around the emergence, development, and evolution of particular social events.
Wang Laihua holds that public opinion refers to the social and political stance generated by the public under the stimulation of certain social issues, while online public opinion refers specifically to the stance of those who use the Internet, namely netizens. Zhang Kesheng holds that public opinion is an objective social condition that state decision-making bodies inevitably face, comprising public life (public sentiment), social production (the people's strength), and knowledge and intelligence (the people's wisdom), together with the social and political attitudes that the public, on the basis of cognition, emotion, and will, forms toward this objective situation and toward state decision-making; in short, social conditions and public opinion. Internet public opinion is the reflection of this political attitude, that is, of social conditions and public opinion, on the Internet; its core is simply to add the qualifier "network" before public opinion [2]. Zeng Runxi holds that online public opinion is the collection of all cognitions, attitudes, emotions, and behavioral tendencies generated by various social events and transmitted through the Internet. Liu Yi holds that online public opinion is the sum of the emotions, attitudes, and opinions held, within a certain social space, by various publics concerning public affairs that interest them or are closely related to their own interests. Internet public opinion is thus a subset of the broader concept of public opinion. This paper takes online public opinion to be the sum of the emotions, attitudes, and opinions that various public organizations and individuals hold toward public affairs. Public affairs include social events, social hot issues, social conflicts, and social activities, as well as what public figures say and do.
Anonymity and Virtuality of Online Public Opinion
The biggest difference between online public opinion and traditional public opinion lies in the anonymity and virtuality of its subjects. In the traditional form, opinion is formed through the direct participation of identifiable publishers; even if they are not forced to state their true identity, complete anonymity and a purely virtual existence cannot be achieved. Because complete anonymity is impossible, publishers must weigh the possible consequences and cannot speak entirely without scruple [3]. Their freedom of speech is therefore naturally restricted to a certain extent, and it is sometimes difficult to create a sensation. Online public opinion, by contrast, is generated in the virtual space of the Internet. The publisher differs from the general public, and also from social groups and social organizations, appearing in cyberspace under an anonymous, virtual ID whose real-life identity others cannot learn. A person can hold a single ID or multiple IDs at the same time. Compared with traditional forms, this anonymous, virtual identity gives publishers a freer "right to speak": they can comment directly on social hot spots without hesitation, and regardless of whether their views win support, they remain anonymous and need not worry about other factors.
Uncontrollability and Rapidity of Online Public Opinion
The uncontrollable and rapid nature of the online public-opinion process is another important feature distinguishing it from traditional public opinion. Because of strict legal, regulatory, and policy constraints, as well as their own limitations, traditional media need a certain amount of time, sometimes a long wait, before publication. The Internet, by contrast, is a highly open space in which anyone can become a publisher and opinion leader. With such a huge number of network users, it is impossible to check every statement published in cyberspace, let alone evaluate it accurately at the first moment, which makes the online public-opinion process complicated and difficult to control. Aided by the speed of network diffusion, its independence from time and space, and resource sharing, a public-opinion topic will often attract a continuous stream of comments from netizens as soon as it is raised and can accumulate a great deal of heat in a very short time, at a speed traditional media cannot match. No wonder some people sigh that in the traditional media era, what happened in the morning could only be learned from the evening news, whereas now we often feel that the world has changed by the time we wake up [4].
Equality and Emotionality of Online Public Opinion
In cyberspace, the anonymity of screen names and the virtuality of identities have made exchanges between netizens unprecedentedly equal. Whoever they are, whatever their cultural level, region, or occupation, they can express their different views on social hot spots. Everyone taking part in the discussion is not merely in a theoretically equal position, nor is the network simply a formally fair and open communication platform; netizens genuinely feel, and endorse, this equality [5]. While enjoying this opportunity for equal communication, netizens also find it more convenient to use cyberspace to vent the emotions accumulated in real life, or to voice strongly subjective comments, so online speech carries a strong emotional color. On balance, a relaxed network environment is nonetheless beneficial.
Free public expression acts, to a certain extent, as a social "pressure-relief valve", but such sentiment also has a strong appeal and a negative side [6]. Once it spreads in cyberspace, it easily wins the approval of netizens who share the same mentality, and it is highly likely to produce a strong social mobilization effect, leading to collective action in the real world and directly jeopardizing the stability of the social order.
Development Status of Network Public Opinion Professional Research Institutions
The earliest research institute in China to take public opinion as its research object was the Institute of Public Opinion of the Tianjin Academy of Social Sciences, founded in October 1999. Since its establishment, the institute has been committed to developing public opinion research as a discipline and to strengthening basic theoretical work in the field, and it has published a number of high-quality books such as "Introduction to Lyric…"
Internet Public Opinion Research Based on Big Data
Domestic scholars have put considerable effort into big data application research, and scholars in computer science, engineering management, and statistics are particularly well placed here. Lin Lina and Wei Dezhi have built an evaluation model for current online public-opinion hot spots. The model uses the factors that influence hot events as evaluation indicators, and the indicator data are objective measurements, which supports the objectivity of the evaluation. To solve the model, the entropy weight method is used to determine the weights of the indicators; the TOPSIS method and the grey correlation method are then combined, and the alternatives are ranked by their relative closeness. This is a largely technical study, and it shows that science and engineering scholars are using big data to participate actively in social research; their results can serve as a reference for scholars of journalism and communication. In February 2017, Science published an article noting that researchers have made progress in combining human cognition with big data methods to solve complex problems, and that data-driven prediction of human behavior and social events, and decision-making based on it, will become a frontier of scientific research.
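To make the entropy-weight/TOPSIS combination described above concrete, here is a hedged sketch for a small, invented decision matrix of hot-event indicators; it omits the grey-correlation step, and the indicator values and column meanings are assumptions for illustration rather than data from the cited study:

# Hypothetical sketch: entropy weighting followed by TOPSIS relative closeness.
# Rows are hot events, columns are benefit-type indicators; all values are invented.
import numpy as np

X = np.array([
    [12000, 0.8, 35],   # event A: post volume, forwarding rate, duration (hours)
    [4500,  0.6, 10],   # event B
    [20000, 0.9, 60],   # event C
], dtype=float)

# 1. Column-wise proportions (all entries here are positive; zeros would need guarding).
P = X / X.sum(axis=0)

# 2. Entropy per indicator; lower entropy means the indicator differentiates events more,
#    so it receives a larger weight.
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)
weights = (1.0 - entropy) / (1.0 - entropy).sum()

# 3. TOPSIS: weighted vector-normalized matrix, ideal/anti-ideal solutions, closeness.
V = (X / np.linalg.norm(X, axis=0)) * weights
ideal, anti = V.max(axis=0), V.min(axis=0)      # benefit criteria: larger is "hotter"
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)        # higher = closer to the hottest profile

print("indicator weights:", np.round(weights, 3))
print("relative closeness:", np.round(closeness, 3))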
Emotion Analysis of Online Public Opinion
Studying the emotion research done by foreign scholars gives us a broad perspective on public-opinion sentiment research. Arnold argues that emotion is a felt tendency toward whatever is appraised as beneficial and away from whatever is appraised as harmful, and that this tendency is accompanied by a corresponding pattern of physiological change oriented toward approach or withdrawal; this definition offers a useful account of what emotion involves. The American scholar K. T. Stallman's "Emotional Psychology" explores a series of issues including the physiological mechanisms of emotion, the relationship between cognition and emotion, the phenomenology of emotion, emotional behavior, emotional development, emotional expression and recognition, and abnormal emotions, and its basic framework for the study of emotion is quite systematic.
Scholars in psychology, working at the intersection of science and engineering with the humanities and social sciences, have applied their professional expertise, computational skill, and careful analysis to the emotions at play in group and cluster events, studying topics such as the mechanisms by which negative emotions spread, the dynamics of negative emotion, the monitoring and early warning of group emotion in emergencies, group emotional cohesion and how it is produced, and models of group emotion in social networks. These achievements reflect the appeal of interdisciplinary research, broaden the horizons and methods of sentiment analysis, inject fresh ideas into the field, and at the same time engage with netizens' emotions from the perspective of the humanities and social sciences.
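As a small, hedged illustration of how netizens' emotional tendency in posts might be scored in such studies, the sketch below uses a tiny hand-made English lexicon; the word lists and example posts are invented, and a real study of Chinese online public opinion would need word segmentation and a proper sentiment dictionary:

# Hypothetical lexicon-based emotion scoring for short posts; lexicon and posts are invented.
POSITIVE = {"support", "trust", "praise", "calm", "relief"}
NEGATIVE = {"angry", "panic", "rumor", "distrust", "outrage"}

def emotion_score(post: str) -> float:
    """Return a score in [-1, 1]; negative values indicate negative group emotion."""
    tokens = post.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    hits = pos + neg
    return 0.0 if hits == 0 else (pos - neg) / hits

posts = [
    "netizens express outrage and distrust over the rumor",
    "officials respond quickly and earn praise and trust",
]
for p in posts:
    print(f"{emotion_score(p):+.2f}  {p}")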
Research on Network Public Opinion Risk Management
According to the well-known British sociologist Giddens, contemporary society has entered a risk society. The risk in question mainly refers to "manufactured risk", which accompanies the process of globalization; it is deeply rooted in modernity and is one of the most prominent features of modern society. The well-known German scholar Beck, discussing the characteristics of risk, holds that the severity of modern risk exceeds our capacity for early warning and for dealing with its aftermath.
The Chinese scholar Cheng Boqing, studying the risk society, points out that it is still hard to grasp its basic characteristics clearly, because this social form has only just appeared on the horizon and we can see only its rough outline; but we may provisionally characterize it by the globalization and the individualization of risk, which do not fully capture the complexity of the risk society yet should count among its basic dimensions. These two basic dimensions are helpful for researchers thinking about the risk society. Xu Yong and Xiang Jiquan hold that in traditional society people believed that human rational power could control nature and society and keep human society developing in an orderly, regular way; with the development of science and technology and with globalization, however, this "normal" society has become unrecognizable, social uncertainty and unpredictability are increasing, and people have to face more risks. More importantly, because modern information technology is so advanced, the fear and distrust caused by risks and disasters spread rapidly to the whole of society through modern means of information.
Prospects of Network Public Opinion and Risk Management
Judging from the current situation in China, research on network public opinion is still in its infancy. Most work remains wedded to traditional research paradigms, treating the Internet as a purely technical system and ignoring the influence of human activity; it still belongs to a traditional paradigm of linearity and steady states. Such work has produced many valuable theoretical and applied results and has enriched the theory and methodology available for studying, guiding, and using network public opinion, but it has clear limits when it comes to the evolution of open, nonlinear, dynamically complex giant network systems. Compared with related research abroad, China lags slightly in interpreting the Internet as a complex adaptive system and in using chaos theory and the principle of self-organization to explain the mechanisms of network public opinion; nevertheless, research into and application of ideas such as the "butterfly effect", the "long tail", "herd behavior", "self-organization", "catastrophe theory", and "phase transitions" are gradually deepening, and theoretical studies, case analyses, and public-opinion guidance based on complex adaptive systems have opened up broad prospects and suggested many new methods and ideas for solving problems. Looking to the future of network public opinion, in the spatial dimension we should proceed from both local and global perspectives and take the building of a "harmonious world" as the guiding concept, giving public-opinion research and guidance new depth and breadth. The concept of a "harmonious world" stands for open-minded thinking and the pursuit of peace between people, peace between nations, and harmony between humanity and nature; the idea of seeking coexistence and win-win outcomes through dialogue and cooperation is the core of China's foreign policy in this century, combining ideal pursuit with global strategy.
|
2019-09-17T02:47:04.717Z
|
2019-08-01T00:00:00.000
|
{
"year": 2019,
"sha1": "627704b9b2354c770ae6d178ce1cc613f65522d0",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1302/2/022001",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c03953d1b669b2a0eb061859479480a0827a2244",
"s2fieldsofstudy": [
"Computer Science",
"Political Science"
],
"extfieldsofstudy": [
"Political Science",
"Physics"
]
}
|